[jira] [Commented] (HBASE-10156) Fix up the HBASE-8755 slowdown when low contention

2014-01-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871777#comment-13871777
 ] 

Hadoop QA commented on HBASE-10156:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12623068/10156v11.txt
  against trunk revision .
  ATTACHMENT ID: 12623068

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 21 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100 characters.

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.mapreduce.TestHLogRecordReader

 {color:red}-1 core zombie tests{color}.  There are 2 zombie test(s):   
at 
org.apache.hadoop.hbase.TestAcidGuarantees.testGetAtomicity(TestAcidGuarantees.java:331)
at 
org.apache.hadoop.hbase.regionserver.wal.TestLogRolling.testLogRollOnDatanodeDeath(TestLogRolling.java:368)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8434//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8434//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8434//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8434//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8434//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8434//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8434//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8434//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8434//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8434//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8434//console

This message is automatically generated.

> Fix up the HBASE-8755 slowdown when low contention
> --
>
> Key: HBASE-10156
> URL: https://issues.apache.org/jira/browse/HBASE-10156
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Reporter: stack
>Assignee: stack
> Attachments: 10156.txt, 10156v10.txt, 10156v11.txt, 10156v2.txt, 
> 10156v3.txt, 10156v4.txt, 10156v5.txt, 10156v6.txt, 10156v7.txt, 10156v9.txt, 
> Disrupting.java
>
>
> HBASE-8755 slows our writes when there are only a few clients.  Fix.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-6873) Clean up Coprocessor loading failure handling

2014-01-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871776#comment-13871776
 ] 

Hadoop QA commented on HBASE-6873:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12623067/6873.patch
  against trunk revision .
  ATTACHMENT ID: 12623067

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 30 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100 characters.

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8433//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8433//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8433//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8433//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8433//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8433//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8433//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8433//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8433//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8433//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8433//console

This message is automatically generated.

> Clean up Coprocessor loading failure handling
> -
>
> Key: HBASE-6873
> URL: https://issues.apache.org/jira/browse/HBASE-6873
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, regionserver
>Affects Versions: 0.98.0
>Reporter: David Arthur
>Assignee: Andrew Purtell
>Priority: Blocker
> Fix For: 0.98.0, 0.99.0
>
> Attachments: 6873.patch, 6873.patch, 6873.patch, 6873.patch, 
> 6873.patch, 6873.patch, 6873.patch
>
>
> When registering a coprocessor with a missing dependency, the regionserver 
> gets stuck in an infinite fail loop. Restarting the regionserver and/or 
> master has no effect.
> E.g., 
> load a coprocessor from my-coproc.jar that uses an external dependency (kafka) 
> that is not included with HBase.
> {code}
> 12/09/24 13:13:15 INFO handler.OpenRegionHandler: Opening of region {NAME => 
> 'documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b.', STARTKEY => '', 
> ENDKEY => '', ENCODED => 6d1e1b7bb93486f096173bd401e8ef6b,} failed, marking 
> as FAILED_OPEN in ZK
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Attempting to transition node 
> 6d1e1b7bb93486f096173bd401e8ef6b from RS_ZK_REGION_OPENING to 
> RS_ZK_REGION_FAILED_OPEN
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Successfully transitioned node 
> 6d1e1b7bb93486f096173bd401e8ef6b from RS_ZK_REGION_OPENING to 
> RS_ZK_REGION_FAILED_OPEN
> 12/09/24 13:13:15 INFO regionserver.HRegionServer: Received request to open 
> region: documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b.
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Attempting to transition node 
> 6d1e1b7bb93486f096173bd401e8ef6b from M_Z

[jira] [Commented] (HBASE-10335) AuthFailedException in zookeeper may block replication forever

2014-01-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871769#comment-13871769
 ] 

Hudson commented on HBASE-10335:


FAILURE: Integrated in HBase-0.94 #1263 (See 
[https://builds.apache.org/job/HBase-0.94/1263/])
HBASE-10335 AuthFailedException in zookeeper may block replication forever (Liu 
Shaohui) (liangxie: rev 1558304)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/replication/ReplicationZookeeper.java


> AuthFailedException in zookeeper may block replication forever
> --
>
> Key: HBASE-10335
> URL: https://issues.apache.org/jira/browse/HBASE-10335
> Project: HBase
>  Issue Type: Bug
>  Components: Replication, security
>Affects Versions: 0.94.15, 0.99.0
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Blocker
> Fix For: 0.98.0, 0.96.2, 0.99.0, 0.94.17
>
> Attachments: HBASE-10335-v1.diff, HBASE-10335-v2.diff
>
>
> ReplicationSource will rechoose sinks when it encounters exceptions while 
> shipping edits to the current sink. But if the zookeeper client for the peer 
> cluster goes to the AUTH_FAILED state, the ReplicationSource will always get 
> AuthFailedException. The ReplicationSource does not reconnect to the peer, 
> because reconnectPeer only handles ConnectionLossException and 
> SessionExpiredException. As a result, replication will keep printing logs like: 
> {quote}
> 2014-01-14,12:07:06,892 INFO 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource: Getting 0 
> rs from peer cluster # 20
> 2014-01-14,12:07:06,892 INFO 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource: Slave 
> cluster looks down: 20 has 0 region servers
> {quote}
> and will be blocked forever.
> I think other places may have the same problem of not handling 
> AuthFailedException in zookeeper, e.g. HBASE-8675.
> [~apurtell]
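The gist of the fix, sketched below in rough form (method names such as reloadZkWatcher 
are assumptions for illustration, not necessarily what the committed patch uses), is to 
make the peer-reconnect path react to AuthFailedException as well:

{code}
// Rough illustration only of the fix described above; names are assumptions.
private void reconnectPeer(KeeperException ke, ReplicationPeer peer) {
  if (ke instanceof KeeperException.ConnectionLossException
      || ke instanceof KeeperException.SessionExpiredException
      || ke instanceof KeeperException.AuthFailedException) { // the previously unhandled case
    LOG.warn("Lost connection to the peer cluster's zookeeper, reconnecting", ke);
    try {
      peer.reloadZkWatcher(); // assumed helper that rebuilds the peer's zookeeper client
    } catch (IOException ioe) {
      LOG.warn("Failed to recreate the zookeeper watcher for the peer", ioe);
    }
  }
}
{code}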



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-9343) Implement stateless scanner for Stargate

2014-01-14 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871766#comment-13871766
 ] 

Andrew Purtell commented on HBASE-9343:
---

bq. Can I add a dependent jira for documentation and old scanner deprecation if 
needed ?
bq. The following API will not work since the same parameters need to be 
specified differently (as query params) with the new scanner.

Yes, this could possibly go into 0.98 if old APIs and behaviors are deprecated 
in 0.96 and documented as such in the online manual. That would depend on what 
Stack wants to let in. In any case it looks like we could use an update of this 
patch that also includes a new section for the online manual on the differences 
in the REST API before and after this patch. That will help us evaluate what 
branches it should ultimately go into.

> Implement stateless scanner for Stargate
> 
>
> Key: HBASE-9343
> URL: https://issues.apache.org/jira/browse/HBASE-9343
> Project: HBase
>  Issue Type: Improvement
>  Components: REST
>Affects Versions: 0.94.11
>Reporter: Vandana Ayyalasomayajula
>Assignee: Vandana Ayyalasomayajula
>Priority: Minor
> Fix For: 0.98.1, 0.99.0
>
> Attachments: HBASE-9343_94.00.patch, HBASE-9343_94.01.patch, 
> HBASE-9343_trunk.00.patch, HBASE-9343_trunk.01.patch, 
> HBASE-9343_trunk.01.patch, HBASE-9343_trunk.02.patch, 
> HBASE-9343_trunk.03.patch, HBASE-9343_trunk.04.patch
>
>
> The current scanner implementation stores state on the server and hence is not 
> very suitable for REST server failure scenarios. The current JIRA proposes to 
> implement a stateless scanner. In the first version of the patch, a new 
> resource class "ScanResource" has been added and all the scan parameters will 
> be specified as query params. 
> The following are the scan parameters:
> startrow - The start row for the scan.
> endrow - The end row for the scan.
> columns - The columns to scan. 
> starttime, endtime - To only retrieve columns within a specific range of 
> version timestamps, both start and end time must be specified.
> maxversions - To limit the number of versions of each column to be returned.
> batchsize - To limit the maximum number of values returned for each call to 
> next().
> limit - The number of rows to return in the scan operation.
> More on the start row, end row and limit parameters:
> 1. If start row, end row and limit are not specified, then the whole table will 
> be scanned.
> 2. If start row and limit (say N) are specified, then the scan operation will 
> return N rows starting from the start row specified.
> 3. If only the limit parameter is specified, then the scan operation will 
> return N rows from the start of the table.
> 4. If limit and end row are specified, then the scan operation will return N 
> rows from the start of the table up to the end row. If the end row is 
> reached before N rows (say M, with M < N), then M rows will be returned to 
> the user.
> 5. If start row, end row and limit (say N) are specified and N < the number 
> of rows between start row and end row, then N rows from the start row 
> will be returned to the user. If N > the number of rows between start row and 
> end row (say M), then M rows will be returned to the user.
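As a rough illustration of how such a stateless scan could be issued (the host, table 
name and exact path layout below are assumptions, not taken from the patch), a single 
GET carries the whole scan specification as query parameters, so the REST server holds 
no scanner state between calls:

{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class StatelessScanExample {
  public static void main(String[] args) throws Exception {
    // Every scan parameter travels in the query string of one request.
    URL scan = new URL("http://resthost:8080/mytable/*?startrow=user0&endrow=user9"
        + "&columns=cf:a,cf:b&maxversions=1&limit=100");
    try (BufferedReader in = new BufferedReader(new InputStreamReader(scan.openStream()))) {
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line); // rows in the server's default representation
      }
    }
  }
}
{code}

With everything in the request itself, a client can simply retry against another REST 
server after a failure, without re-creating any server-side scanner.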



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10335) AuthFailedException in zookeeper may block replication forever

2014-01-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871756#comment-13871756
 ] 

Hudson commented on HBASE-10335:


SUCCESS: Integrated in HBase-0.94-security #390 (See 
[https://builds.apache.org/job/HBase-0.94-security/390/])
HBASE-10335 AuthFailedException in zookeeper may block replication forever (Liu 
Shaohui) (liangxie: rev 1558304)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/replication/ReplicationZookeeper.java


> AuthFailedException in zookeeper may block replication forever
> --
>
> Key: HBASE-10335
> URL: https://issues.apache.org/jira/browse/HBASE-10335
> Project: HBase
>  Issue Type: Bug
>  Components: Replication, security
>Affects Versions: 0.94.15, 0.99.0
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Blocker
> Fix For: 0.98.0, 0.96.2, 0.99.0, 0.94.17
>
> Attachments: HBASE-10335-v1.diff, HBASE-10335-v2.diff
>
>
> ReplicationSource will rechoose sinks when it encounters exceptions while 
> shipping edits to the current sink. But if the zookeeper client for the peer 
> cluster goes to the AUTH_FAILED state, the ReplicationSource will always get 
> AuthFailedException. The ReplicationSource does not reconnect to the peer, 
> because reconnectPeer only handles ConnectionLossException and 
> SessionExpiredException. As a result, replication will keep printing logs like: 
> {quote}
> 2014-01-14,12:07:06,892 INFO 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource: Getting 0 
> rs from peer cluster # 20
> 2014-01-14,12:07:06,892 INFO 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource: Slave 
> cluster looks down: 20 has 0 region servers
> {quote}
> and will be blocked forever.
> I think other places may have the same problem of not handling 
> AuthFailedException in zookeeper, e.g. HBASE-8675.
> [~apurtell]



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10322) Strip tags from KV while sending back to client on reads

2014-01-14 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871753#comment-13871753
 ] 

Andrew Purtell commented on HBASE-10322:


Finally, the reason I say "rock bottom simplest way" is there is too much 
discussion on this issue and it is holding up the RC essentially. Let's move 
this over to reviewboard so we have code to get us all on the same page.

> Strip tags from KV while sending back to client on reads
> 
>
> Key: HBASE-10322
> URL: https://issues.apache.org/jira/browse/HBASE-10322
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Blocker
> Fix For: 0.98.0, 0.99.0
>
> Attachments: HBASE-10322.patch
>
>
> Right now we have some inconsistency wrt sending back tags on read. We do 
> this in a scan when using the Java client (Codec based cell block encoding). But 
> during a Get operation, or when a pure PB based Scan comes in, we are not sending 
> back the tags. So we have to do one of the fixes below:
> 1. Send back tags in the missing cases also. But sending back visibility 
> expressions / cell ACLs is not correct.
> 2. Don't send back tags in any case. This will be a problem when a tool like 
> ExportTool uses the scan to export the table data. We will miss exporting the 
> cell visibility/ACL.
> 3. Send back tags based on some condition. It has to be on a per-scan basis. 
> The simplest way is to pass some kind of attribute in Scan which says whether to 
> send back tags or not. But trusting what the scan specifies might not 
> be correct IMO. The other way is checking the user who is doing the 
> scan: only send back tags when an HBase super user is doing the scan. So 
> when a case like the Export Tool's comes up, the execution should happen as a 
> super user.
> So IMO we should go with #3.
> Patch coming soon.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Comment Edited] (HBASE-10322) Strip tags from KV while sending back to client on reads

2014-01-14 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871753#comment-13871753
 ] 

Andrew Purtell edited comment on HBASE-10322 at 1/15/14 7:18 AM:
-

Finally, the reason I say "rock bottom simplest way" is there is too much 
discussion on this issue and it is holding up the RC essentially. Talk of 
negotiation is totally out of scope at this point. Let's move this over to 
reviewboard so we have code to get us all on the same page.


was (Author: apurtell):
Finally, the reason I say "rock bottom simplest way" is there is too much 
discussion on this issue and it is holding up the RC essentially. Let's move 
this over to reviewboard so we have code to get us all on the same page.

> Strip tags from KV while sending back to client on reads
> 
>
> Key: HBASE-10322
> URL: https://issues.apache.org/jira/browse/HBASE-10322
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Blocker
> Fix For: 0.98.0, 0.99.0
>
> Attachments: HBASE-10322.patch
>
>
> Right now we have some inconsistency wrt sending back tags on read. We do 
> this in a scan when using the Java client (Codec based cell block encoding). But 
> during a Get operation, or when a pure PB based Scan comes in, we are not sending 
> back the tags. So we have to do one of the fixes below:
> 1. Send back tags in the missing cases also. But sending back visibility 
> expressions / cell ACLs is not correct.
> 2. Don't send back tags in any case. This will be a problem when a tool like 
> ExportTool uses the scan to export the table data. We will miss exporting the 
> cell visibility/ACL.
> 3. Send back tags based on some condition. It has to be on a per-scan basis. 
> The simplest way is to pass some kind of attribute in Scan which says whether to 
> send back tags or not. But trusting what the scan specifies might not 
> be correct IMO. The other way is checking the user who is doing the 
> scan: only send back tags when an HBase super user is doing the scan. So 
> when a case like the Export Tool's comes up, the execution should happen as a 
> super user.
> So IMO we should go with #3.
> Patch coming soon.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10322) Strip tags from KV while sending back to client on reads

2014-01-14 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871750#comment-13871750
 ] 

Andrew Purtell commented on HBASE-10322:


bq. Are we writing new KVs or creating a cell block? If the latter, then it'll 
be no more expensive copying a KV with or without the Tags?

This is my assumption too. Why is this wrong?

> Strip tags from KV while sending back to client on reads
> 
>
> Key: HBASE-10322
> URL: https://issues.apache.org/jira/browse/HBASE-10322
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Blocker
> Fix For: 0.98.0, 0.99.0
>
> Attachments: HBASE-10322.patch
>
>
> Right now we have some inconsistency wrt sending back tags on read. We do 
> this in a scan when using the Java client (Codec based cell block encoding). But 
> during a Get operation, or when a pure PB based Scan comes in, we are not sending 
> back the tags. So we have to do one of the fixes below:
> 1. Send back tags in the missing cases also. But sending back visibility 
> expressions / cell ACLs is not correct.
> 2. Don't send back tags in any case. This will be a problem when a tool like 
> ExportTool uses the scan to export the table data. We will miss exporting the 
> cell visibility/ACL.
> 3. Send back tags based on some condition. It has to be on a per-scan basis. 
> The simplest way is to pass some kind of attribute in Scan which says whether to 
> send back tags or not. But trusting what the scan specifies might not 
> be correct IMO. The other way is checking the user who is doing the 
> scan: only send back tags when an HBase super user is doing the scan. So 
> when a case like the Export Tool's comes up, the execution should happen as a 
> super user.
> So IMO we should go with #3.
> Patch coming soon.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10322) Strip tags from KV while sending back to client on reads

2014-01-14 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871745#comment-13871745
 ] 

Andrew Purtell commented on HBASE-10322:


bq. So export tool will not be working with this. Correct?

No, that is not what I said.

I said: The rock bottom simplest way to do this is to just not support tags in 
RPC codecs. Maybe we can have a separate class that keeps them for the Export 
tool specifically? Import is no problem if the user, presumably privileged, is 
building HFiles and therefore the cells within them directly. Accumulo has the 
same approach to whole file imports - no checking done, YMMV.



> Strip tags from KV while sending back to client on reads
> 
>
> Key: HBASE-10322
> URL: https://issues.apache.org/jira/browse/HBASE-10322
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Blocker
> Fix For: 0.98.0, 0.99.0
>
> Attachments: HBASE-10322.patch
>
>
> Right now we have some inconsistency wrt sending back tags on read. We do 
> this in a scan when using the Java client (Codec based cell block encoding). But 
> during a Get operation, or when a pure PB based Scan comes in, we are not sending 
> back the tags. So we have to do one of the fixes below:
> 1. Send back tags in the missing cases also. But sending back visibility 
> expressions / cell ACLs is not correct.
> 2. Don't send back tags in any case. This will be a problem when a tool like 
> ExportTool uses the scan to export the table data. We will miss exporting the 
> cell visibility/ACL.
> 3. Send back tags based on some condition. It has to be on a per-scan basis. 
> The simplest way is to pass some kind of attribute in Scan which says whether to 
> send back tags or not. But trusting what the scan specifies might not 
> be correct IMO. The other way is checking the user who is doing the 
> scan: only send back tags when an HBase super user is doing the scan. So 
> when a case like the Export Tool's comes up, the execution should happen as a 
> super user.
> So IMO we should go with #3.
> Patch coming soon.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10329) Fail the writes rather than proceeding silently to prevent data loss when AsyncSyncer encounters null writer and its writes aren't synced by other Asyncer

2014-01-14 Thread Feng Honghua (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871743#comment-13871743
 ] 

Feng Honghua commented on HBASE-10329:
--

v2 patch is based on v0 since v0 has already been submitted to trunk

> Fail the writes rather than proceeding silently to prevent data loss when 
> AsyncSyncer encounters null writer and its writes aren't synced by other 
> Asyncer
> --
>
> Key: HBASE-10329
> URL: https://issues.apache.org/jira/browse/HBASE-10329
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, wal
>Affects Versions: 0.98.0
>Reporter: Feng Honghua
>Assignee: Feng Honghua
>Priority: Critical
> Fix For: 0.98.0, 0.99.0
>
> Attachments: 10329-0.98.txt, HBASE-10329-trunk_v0.patch, 
> HBASE-10329-trunk_v2.patch
>
>
> Last month, after I introduced multiple AsyncSyncer threads to improve 
> throughput for lower numbers of client write threads, [~stack] encountered an NPE 
> while testing, where a null writer occurs in AsyncSyncer when doing sync. 
> Since we had run the test many times in a cluster to verify the throughput 
> improvement and never encountered such an NPE, it really confused me. (and 
> [~stack] fixed this by adding 'if (writer != null)' to protect the sync 
> operation)
> These days I wondered from time to time why the writer can be null in 
> AsyncSyncer and whether it's safe to fix it by just adding a null check 
> before doing sync, as [~stack] did. After some digging, I found the case 
> where AsyncSyncer can encounter a null writer; it is as below:
> 1. t1: AsyncWriter appends writes to hdfs, triggers AsyncSyncer 1 with 
> writtenTxid==100
> 2. t2: AsyncWriter appends writes to hdfs, triggers AsyncSyncer 2 with 
> writtenTxid==200
> 3. t3: rollWriter starts; it grabs the updateLock to prevent further client 
> writes from entering pendingWrites, and then waits for all items (<= 200) in 
> pendingWrites to be appended and finally synced to hdfs
> 4. t4: AsyncSyncer 2 finishes, now syncedTillHere==200 (it also helps sync 
> <=100 as a whole)
> 5. t5: rollWriter can now close the writer and set writer=null...
> 6. t6: AsyncSyncer 1 starts to do sync and finds the writer is null... before 
> rollWriter sets writer to the newly rolled Writer
> We can see:
> 1. the null writer is possible only once there are multiple AsyncSyncer 
> threads; that's why we never encountered it before introducing multiple 
> AsyncSyncer threads.
> 2. since rollWriter can set writer=null only after all items of pendingWrites 
> are synced to hdfs, and AsyncWriter is on the critical path of this task and there 
> is only one single AsyncWriter thread, AsyncWriter can't encounter a null 
> writer; that's why we never encounter a null writer in AsyncWriter though it 
> also uses the writer. This is the same reason why a null writer never occurs 
> when there is a single AsyncSyncer thread.
> And we should treat the cases differently when writer == null in AsyncSyncer:
> 1. if txidToSync <= syncedTillHere, this means all writes this AsyncSyncer 
> cares about have already been synced by another AsyncSyncer, and we can safely 
> skip the sync (as [~stack] does here);
> 2. if txidToSync > syncedTillHere, we need to fail all the writes with txid <= 
> txidToSync to avoid data loss: the user gets a successful write response but 
> can't read back the writes after getting that successful response, which from 
> the user's perspective is data loss (according to the above analysis such a 
> case should not occur, but we should still add this defensive treatment to 
> prevent data loss if it ever does occur, e.g. due to some bug introduced later)
> Also fix the bug where isSyncing needs to be reset to false when writer.sync 
> encounters an IOException: AsyncSyncer swallows such an exception by failing all 
> writes with txid<=txidToSync, and this AsyncSyncer thread is then ready to do 
> later syncs, so its isSyncing needs to be reset to false in the IOException 
> handling block; otherwise it can't be selected by AsyncWriter to do sync
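A minimal sketch of the defensive handling described above, inside AsyncSyncer's sync 
loop (field and helper names such as failAllWritesUpTo are illustrative, not the actual 
patch):

{code}
// Sketch only; names are assumptions.
if (writer == null) {
  if (txidToSync <= syncedTillHere) {
    // Case 1: another AsyncSyncer already synced everything we care about; safe to skip.
    continue;
  }
  // Case 2: acknowledging these writes without a sync would be silent data loss,
  // so fail them explicitly instead of proceeding.
  failAllWritesUpTo(txidToSync, new IOException("writer is null while syncing"));
  continue;
}
try {
  writer.sync();
} catch (IOException ioe) {
  failAllWritesUpTo(txidToSync, ioe);
  isSyncing = false; // reset here, otherwise AsyncWriter never selects this syncer again
}
{code}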



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10329) Fail the writes rather than proceeding silently to prevent data loss when AsyncSyncer encounters null writer and its writes aren't synced by other Asyncer

2014-01-14 Thread Feng Honghua (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Feng Honghua updated HBASE-10329:
-

Attachment: HBASE-10329-trunk_v2.patch

A further improvement can be done here, inspired by the finding that an 
AsyncSyncer with a greater txid can finish (and help sync) before an AsyncSyncer with 
a smaller txid starts to sync: we don't sync (and also don't notify AsyncNotifier or 
check logroll) if the AsyncSyncer's writes have already been synced by another 
AsyncSyncer. We can gain some performance here since sync (hitting hdfs) and 
notify (waking up another thread) are comparably heavy operations.

We already avoid such a needless sync/notify/logroll-check for the case where 
writer==null (as in the patch for this jira), but we can treat the case where 
writer!=null the same way, and we can gain even more since typically the 
writer is not null.

The corresponding patch is attached.
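In rough form (again with assumed field names), the improvement is just an early-out 
before any of the heavy work:

{code}
// Sketch of the v2 improvement: skip the sync entirely when another AsyncSyncer
// already pushed syncedTillHere past this syncer's txidToSync.
if (txidToSync <= syncedTillHere) {
  continue; // no hdfs sync, no AsyncNotifier wake-up, no log-roll check needed
}
{code}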

> Fail the writes rather than proceeding silently to prevent data loss when 
> AsyncSyncer encounters null writer and its writes aren't synced by other 
> Asyncer
> --
>
> Key: HBASE-10329
> URL: https://issues.apache.org/jira/browse/HBASE-10329
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, wal
>Affects Versions: 0.98.0
>Reporter: Feng Honghua
>Assignee: Feng Honghua
>Priority: Critical
> Fix For: 0.98.0, 0.99.0
>
> Attachments: 10329-0.98.txt, HBASE-10329-trunk_v0.patch, 
> HBASE-10329-trunk_v2.patch
>
>
> Last month, after I introduced multiple AsyncSyncer threads to improve 
> throughput for lower numbers of client write threads, [~stack] encountered an NPE 
> while testing, where a null writer occurs in AsyncSyncer when doing sync. 
> Since we had run the test many times in a cluster to verify the throughput 
> improvement and never encountered such an NPE, it really confused me. (and 
> [~stack] fixed this by adding 'if (writer != null)' to protect the sync 
> operation)
> These days I wondered from time to time why the writer can be null in 
> AsyncSyncer and whether it's safe to fix it by just adding a null check 
> before doing sync, as [~stack] did. After some digging, I found the case 
> where AsyncSyncer can encounter a null writer; it is as below:
> 1. t1: AsyncWriter appends writes to hdfs, triggers AsyncSyncer 1 with 
> writtenTxid==100
> 2. t2: AsyncWriter appends writes to hdfs, triggers AsyncSyncer 2 with 
> writtenTxid==200
> 3. t3: rollWriter starts; it grabs the updateLock to prevent further client 
> writes from entering pendingWrites, and then waits for all items (<= 200) in 
> pendingWrites to be appended and finally synced to hdfs
> 4. t4: AsyncSyncer 2 finishes, now syncedTillHere==200 (it also helps sync 
> <=100 as a whole)
> 5. t5: rollWriter can now close the writer and set writer=null...
> 6. t6: AsyncSyncer 1 starts to do sync and finds the writer is null... before 
> rollWriter sets writer to the newly rolled Writer
> We can see:
> 1. the null writer is possible only once there are multiple AsyncSyncer 
> threads; that's why we never encountered it before introducing multiple 
> AsyncSyncer threads.
> 2. since rollWriter can set writer=null only after all items of pendingWrites 
> are synced to hdfs, and AsyncWriter is on the critical path of this task and there 
> is only one single AsyncWriter thread, AsyncWriter can't encounter a null 
> writer; that's why we never encounter a null writer in AsyncWriter though it 
> also uses the writer. This is the same reason why a null writer never occurs 
> when there is a single AsyncSyncer thread.
> And we should treat the cases differently when writer == null in AsyncSyncer:
> 1. if txidToSync <= syncedTillHere, this means all writes this AsyncSyncer 
> cares about have already been synced by another AsyncSyncer, and we can safely 
> skip the sync (as [~stack] does here);
> 2. if txidToSync > syncedTillHere, we need to fail all the writes with txid <= 
> txidToSync to avoid data loss: the user gets a successful write response but 
> can't read back the writes after getting that successful response, which from 
> the user's perspective is data loss (according to the above analysis such a 
> case should not occur, but we should still add this defensive treatment to 
> prevent data loss if it ever does occur, e.g. due to some bug introduced later)
> Also fix the bug where isSyncing needs to be reset to false when writer.sync 
> encounters an IOException: AsyncSyncer swallows such an exception by failing all 
> writes with txid<=txidToSync, and this AsyncSyncer thread is then ready to do 
> later syncs, so its isSyncing needs to be reset to false in the IOException 
> handling block; otherwise it can't be selected by AsyncWriter to do sync



--
This message was sent by Atl

[jira] [Updated] (HBASE-10342) RowKey Prefix Bloom Filter

2014-01-14 Thread Liyin Tang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liyin Tang updated HBASE-10342:
---

Description: When designing HBase schema for some use cases, it is quite 
common to combine multiple information within the RowKey. For instance, 
assuming that rowkey is constructed as md5(id1) + id1 + id2, and user wants to 
scan all the rowkeys which starting by id1. In such case, the rowkey bloom 
filter is able to cut more unnecessary seeks during the scan.  (was: When 
designing HBase schema for some use cases, it is quite common to combine 
multiple information within the RowKey. For instance, assuming that rowkey is 
constructed as md5(id1) + id1 + id2, and user wants to scan all the rowkeys 
which starting at id1 . In such case, the rowkey bloom filter is able to cut 
more unnecessary seeks during the scan.)

> RowKey Prefix Bloom Filter
> --
>
> Key: HBASE-10342
> URL: https://issues.apache.org/jira/browse/HBASE-10342
> Project: HBase
>  Issue Type: New Feature
>Reporter: Liyin Tang
>
> When designing HBase schema for some use cases, it is quite common to combine 
> multiple information within the RowKey. For instance, assuming that rowkey is 
> constructed as md5(id1) + id1 + id2, and user wants to scan all the rowkeys 
> which starting by id1. In such case, the rowkey bloom filter is able to cut 
> more unnecessary seeks during the scan.
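For illustration only (the class and method choices below are mine, not part of the 
proposal), the schema and the prefix scan it wants to speed up look roughly like this:

{code}
// rowkey = md5(id1) + id1 + id2; scan all rows for a given id1.
byte[] id1 = Bytes.toBytes("user42");
byte[] prefix = Bytes.add(DigestUtils.md5(id1), id1);           // md5(id1) + id1
Scan scan = new Scan();
scan.setStartRow(prefix);
scan.setStopRow(Bytes.add(prefix, new byte[] { (byte) 0xFF })); // crude stop row for the prefix
// A rowkey-prefix bloom filter keyed on md5(id1) + id1 would let such a scan skip
// store files that cannot contain any row with this prefix.
{code}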



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10342) RowKey Prefix Bloom Filter

2014-01-14 Thread Liyin Tang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871725#comment-13871725
 ] 

Liyin Tang commented on HBASE-10342:


Yes, a prefix-hash memstore will help this case as well! It is definitely 
worth benchmarking.
 

> RowKey Prefix Bloom Filter
> --
>
> Key: HBASE-10342
> URL: https://issues.apache.org/jira/browse/HBASE-10342
> Project: HBase
>  Issue Type: New Feature
>Reporter: Liyin Tang
>
> When designing HBase schema for some use cases, it is quite common to combine 
> multiple information within the RowKey. For instance, assuming that rowkey is 
> constructed as md5(id1) + id1 + id2, and user wants to scan all the rowkeys 
> which starting at id1 . In such case, the rowkey bloom filter is able to cut 
> more unnecessary seeks during the scan.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10275) [89-fb] Guarantee the sequenceID in each Region is strictly monotonic increasing

2014-01-14 Thread Liyin Tang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871719#comment-13871719
 ] 

Liyin Tang commented on HBASE-10275:


HBASE-10343 might resolve this issue in a much easier way.

> [89-fb] Guarantee the sequenceID in each Region is strictly monotonic 
> increasing
> 
>
> Key: HBASE-10275
> URL: https://issues.apache.org/jira/browse/HBASE-10275
> Project: HBase
>  Issue Type: New Feature
>Reporter: Liyin Tang
>Assignee: Liyin Tang
>
> [HBASE-8741] has implemented the per-region sequence ID. It would be even 
> better to guarantee that the sequencing is strictly monotonically increasing so 
> that HLog-Based Async Replication is able to deliver transactions in order 
> in the case of region movements.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10335) AuthFailedException in zookeeper may block replication forever

2014-01-14 Thread Liang Xie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liang Xie updated HBASE-10335:
--

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> AuthFailedException in zookeeper may block replication forever
> --
>
> Key: HBASE-10335
> URL: https://issues.apache.org/jira/browse/HBASE-10335
> Project: HBase
>  Issue Type: Bug
>  Components: Replication, security
>Affects Versions: 0.94.15, 0.99.0
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Blocker
> Fix For: 0.98.0, 0.96.2, 0.99.0, 0.94.17
>
> Attachments: HBASE-10335-v1.diff, HBASE-10335-v2.diff
>
>
> ReplicationSource will rechoose sinks when it encounters exceptions while 
> shipping edits to the current sink. But if the zookeeper client for the peer 
> cluster goes to the AUTH_FAILED state, the ReplicationSource will always get 
> AuthFailedException. The ReplicationSource does not reconnect to the peer, 
> because reconnectPeer only handles ConnectionLossException and 
> SessionExpiredException. As a result, replication will keep printing logs like: 
> {quote}
> 2014-01-14,12:07:06,892 INFO 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource: Getting 0 
> rs from peer cluster # 20
> 2014-01-14,12:07:06,892 INFO 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource: Slave 
> cluster looks down: 20 has 0 region servers
> {quote}
> and will be blocked forever.
> I think other places may have the same problem of not handling 
> AuthFailedException in zookeeper, e.g. HBASE-8675.
> [~apurtell]



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10335) AuthFailedException in zookeeper may block replication forever

2014-01-14 Thread Liang Xie (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871716#comment-13871716
 ] 

Liang Xie commented on HBASE-10335:
---

Integrated into 0.94/0.96/0.98/trunk, thanks for the patch:)

> AuthFailedException in zookeeper may block replication forever
> --
>
> Key: HBASE-10335
> URL: https://issues.apache.org/jira/browse/HBASE-10335
> Project: HBase
>  Issue Type: Bug
>  Components: Replication, security
>Affects Versions: 0.94.15, 0.99.0
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Blocker
> Fix For: 0.98.0, 0.96.2, 0.99.0, 0.94.17
>
> Attachments: HBASE-10335-v1.diff, HBASE-10335-v2.diff
>
>
> ReplicationSource will rechoose sinks when it encounters exceptions while 
> shipping edits to the current sink. But if the zookeeper client for the peer 
> cluster goes to the AUTH_FAILED state, the ReplicationSource will always get 
> AuthFailedException. The ReplicationSource does not reconnect to the peer, 
> because reconnectPeer only handles ConnectionLossException and 
> SessionExpiredException. As a result, replication will keep printing logs like: 
> {quote}
> 2014-01-14,12:07:06,892 INFO 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource: Getting 0 
> rs from peer cluster # 20
> 2014-01-14,12:07:06,892 INFO 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource: Slave 
> cluster looks down: 20 has 0 region servers
> {quote}
> and will be blocked forever.
> I think other places may have the same problem of not handling 
> AuthFailedException in zookeeper, e.g. HBASE-8675.
> [~apurtell]



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10323) Auto detect data block encoding in HFileOutputFormat

2014-01-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871712#comment-13871712
 ] 

Hadoop QA commented on HBASE-10323:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12623052/HBASE_10323-trunk-v3.patch
  against trunk revision .
  ATTACHMENT ID: 12623052

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100 characters.

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8432//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8432//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8432//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8432//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8432//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8432//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8432//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8432//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8432//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8432//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8432//console

This message is automatically generated.

> Auto detect data block encoding in HFileOutputFormat
> 
>
> Key: HBASE-10323
> URL: https://issues.apache.org/jira/browse/HBASE-10323
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ishan Chhabra
>Assignee: Ishan Chhabra
> Fix For: 0.99.0
>
> Attachments: HBASE_10323-0.94.15-v1.patch, 
> HBASE_10323-0.94.15-v2.patch, HBASE_10323-0.94.15-v3.patch, 
> HBASE_10323-0.94.15-v4.patch, HBASE_10323-trunk-v1.patch, 
> HBASE_10323-trunk-v2.patch, HBASE_10323-trunk-v3.patch
>
>
> Currently, one has to specify the data block encoding of the table explicitly 
> using the config parameter 
> "hbase.mapreduce.hfileoutputformat.datablock.encoding" when doing a bulk load. 
> This option is easily missed, not documented, and also works differently 
> than compression, block size and bloom filter type, which are auto detected. 
> The solution would be to add support to auto detect the data block encoding, 
> similar to the other parameters. 
> The current patch does the following:
> 1. Automatically detects the data block encoding in HFileOutputFormat.
> 2. Keeps the legacy option of manually specifying the data block encoding
> around as a method to override auto detection.
> 3. Moves string conf parsing to the start of the program so that it fails
> fast during startup instead of failing during record writes. It also
> makes the internals of the program type safe.
> 4. Adds missing doc strings and unit tests for the code serializing and
> deserializing config parameters for bloom filter type, block size and
> data block encoding.
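A rough sketch of what the auto-detection amounts to (the conf key and helper name below 
are placeholders, not necessarily what the patch uses): walk the table's families and 
serialize each family's DataBlockEncoding into the job configuration, mirroring how 
compression, block size and bloom filter type are already handled.

{code}
// Sketch only; the conf key and method name are placeholders.
static void configureDataBlockEncoding(HTableDescriptor tableDesc, Configuration conf) {
  StringBuilder sb = new StringBuilder();
  for (HColumnDescriptor family : tableDesc.getFamilies()) {
    if (sb.length() > 0) {
      sb.append('&');
    }
    sb.append(family.getNameAsString()).append('=')
      .append(family.getDataBlockEncoding().name());
  }
  conf.set("hbase.mapreduce.hfileoutputformat.families.datablockencoding", sb.toString());
}
{code}

The record writer would then parse this string back into a per-family map, so the 
explicit "hbase.mapreduce.hfileoutputformat.datablock.encoding" setting only needs to 
act as an override.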



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HBASE-10343) Write the last sequence id into the HLog during the RegionOpen time

2014-01-14 Thread Liyin Tang (JIRA)
Liyin Tang created HBASE-10343:
--

 Summary: Write the last sequence id into the HLog during the 
RegionOpen time
 Key: HBASE-10343
 URL: https://issues.apache.org/jira/browse/HBASE-10343
 Project: HBase
  Issue Type: Improvement
Reporter: Liyin Tang


HLog based async replication has a challenge in guaranteeing in-order 
delivery when a Region moves from one HLog stream to another HLog stream. 

One approach is to keep the last_sequence_id in the new HLog stream when 
opening the Region. The replication framework is then able to catch up to the 
last_sequence_id from the previous HLog stream before replicating any new 
transactions through the new HLog stream.





--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10342) RowKey Prefix Bloom Filter

2014-01-14 Thread Liang Xie (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871709#comment-13871709
 ] 

Liang Xie commented on HBASE-10342:
---

I thought about this several weeks ago as well, after reading the rocksdb doc; good 
stuff, Liyin!
Another issue we could file, combined with the current one, is to support a pluggable 
memstore impl, so that we could introduce a prefix-hash memstore; it'll be more 
efficient under the scan + prefix filter scenario.

> RowKey Prefix Bloom Filter
> --
>
> Key: HBASE-10342
> URL: https://issues.apache.org/jira/browse/HBASE-10342
> Project: HBase
>  Issue Type: New Feature
>Reporter: Liyin Tang
>
> When designing HBase schema for some use cases, it is quite common to combine 
> multiple information within the RowKey. For instance, assuming that rowkey is 
> constructed as md5(id1) + id1 + id2, and user wants to scan all the rowkeys 
> which starting at id1 . In such case, the rowkey bloom filter is able to cut 
> more unnecessary seeks during the scan.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-9343) Implement stateless scanner for Stargate

2014-01-14 Thread Vandana Ayyalasomayajula (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871706#comment-13871706
 ] 

Vandana Ayyalasomayajula commented on HBASE-9343:
-

The following API will not work since the same parameters need to be specified 
differently (as query params) with the new scanner. 
{quote}
GET //*/(  ( :  )?
  ( ,  ( :  )? )+ )?
( / (  ',' )?  )? )?
  ( ?v=  )?
{quote}

> Implement stateless scanner for Stargate
> 
>
> Key: HBASE-9343
> URL: https://issues.apache.org/jira/browse/HBASE-9343
> Project: HBase
>  Issue Type: Improvement
>  Components: REST
>Affects Versions: 0.94.11
>Reporter: Vandana Ayyalasomayajula
>Assignee: Vandana Ayyalasomayajula
>Priority: Minor
> Fix For: 0.98.1, 0.99.0
>
> Attachments: HBASE-9343_94.00.patch, HBASE-9343_94.01.patch, 
> HBASE-9343_trunk.00.patch, HBASE-9343_trunk.01.patch, 
> HBASE-9343_trunk.01.patch, HBASE-9343_trunk.02.patch, 
> HBASE-9343_trunk.03.patch, HBASE-9343_trunk.04.patch
>
>
> The current scanner implementation stores state on the server and hence is not 
> very suitable for REST server failure scenarios. The current JIRA proposes to 
> implement a stateless scanner. In the first version of the patch, a new 
> resource class "ScanResource" has been added and all the scan parameters will 
> be specified as query params. 
> The following are the scan parameters:
> startrow - The start row for the scan.
> endrow - The end row for the scan.
> columns - The columns to scan. 
> starttime, endtime - To only retrieve columns within a specific range of 
> version timestamps, both start and end time must be specified.
> maxversions - To limit the number of versions of each column to be returned.
> batchsize - To limit the maximum number of values returned for each call to 
> next().
> limit - The number of rows to return in the scan operation.
> More on the start row, end row and limit parameters:
> 1. If start row, end row and limit are not specified, then the whole table will 
> be scanned.
> 2. If start row and limit (say N) are specified, then the scan operation will 
> return N rows starting from the start row specified.
> 3. If only the limit parameter is specified, then the scan operation will 
> return N rows from the start of the table.
> 4. If limit and end row are specified, then the scan operation will return N 
> rows from the start of the table up to the end row. If the end row is 
> reached before N rows (say M, with M < N), then M rows will be returned to 
> the user.
> 5. If start row, end row and limit (say N) are specified and N < the number 
> of rows between start row and end row, then N rows from the start row 
> will be returned to the user. If N > the number of rows between start row and 
> end row (say M), then M rows will be returned to the user.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10322) Strip tags from KV while sending back to client on reads

2014-01-14 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871704#comment-13871704
 ] 

ramkrishna.s.vasudevan commented on HBASE-10322:


Ok.. So considering KVCodec as the default, we will create a StripTagKVCodec; on 
the server side we would instantiate the StripTagKVCodec and keep using it, and on 
the client it would be KVCodec. So the export tool will not be working with 
this. Correct?
[~anoop.hbase], [~apurtell]
Thoughts?
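For what the server-side piece could look like, a purely hypothetical sketch 
(StripTagKVCodec is just the name floated above, and stripTags(...) stands in for 
whatever helper would copy a cell without its tags; neither is an existing API):

{code}
// Hypothetical sketch, not an existing HBase class.
public class StripTagKVCodec extends KeyValueCodec {
  @Override
  public Encoder getEncoder(final OutputStream os) {
    return new KeyValueEncoder(os) {
      @Override
      public void write(Cell cell) throws IOException {
        // Drop the tags before the cell goes onto the wire to the client.
        super.write(stripTags(cell));
      }
    };
  }
}
{code}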

> Strip tags from KV while sending back to client on reads
> 
>
> Key: HBASE-10322
> URL: https://issues.apache.org/jira/browse/HBASE-10322
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Blocker
> Fix For: 0.98.0, 0.99.0
>
> Attachments: HBASE-10322.patch
>
>
> Right now we have some inconsistency wrt sending back tags on read. We do 
> this in a scan when using the Java client (Codec based cell block encoding). But 
> during a Get operation, or when a pure PB based Scan comes in, we are not sending 
> back the tags. So we have to do one of the fixes below:
> 1. Send back tags in the missing cases also. But sending back visibility 
> expressions / cell ACLs is not correct.
> 2. Don't send back tags in any case. This will be a problem when a tool like 
> ExportTool uses the scan to export the table data. We will miss exporting the 
> cell visibility/ACL.
> 3. Send back tags based on some condition. It has to be on a per-scan basis. 
> The simplest way is to pass some kind of attribute in Scan which says whether to 
> send back tags or not. But trusting what the scan specifies might not 
> be correct IMO. The other way is checking the user who is doing the 
> scan: only send back tags when an HBase super user is doing the scan. So 
> when a case like the Export Tool's comes up, the execution should happen as a 
> super user.
> So IMO we should go with #3.
> Patch coming soon.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-6873) Clean up Coprocessor loading failure handling

2014-01-14 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-6873:
--

Status: Open  (was: Patch Available)

> Clean up Coprocessor loading failure handling
> -
>
> Key: HBASE-6873
> URL: https://issues.apache.org/jira/browse/HBASE-6873
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, regionserver
>Affects Versions: 0.98.0
>Reporter: David Arthur
>Assignee: Andrew Purtell
>Priority: Blocker
> Fix For: 0.98.0, 0.99.0
>
> Attachments: 6873.patch, 6873.patch, 6873.patch, 6873.patch, 
> 6873.patch, 6873.patch, 6873.patch
>
>
> When registering a coprocessor with a missing dependency, the regionserver 
> gets stuck in an infinite fail loop. Restarting the regionserver and/or 
> master has no effect.
> E.g., 
> load a coprocessor from my-coproc.jar that uses an external dependency (kafka) 
> that is not included with HBase.
> {code}
> 12/09/24 13:13:15 INFO handler.OpenRegionHandler: Opening of region {NAME => 
> 'documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b.', STARTKEY => '', 
> ENDKEY => '', ENCODED => 6d1e1b7bb93486f096173bd401e8ef6b,} failed, marking 
> as FAILED_OPEN in ZK
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Attempting to transition node 
> 6d1e1b7bb93486f096173bd401e8ef6b from RS_ZK_REGION_OPENING to 
> RS_ZK_REGION_FAILED_OPEN
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Successfully transitioned node 
> 6d1e1b7bb93486f096173bd401e8ef6b from RS_ZK_REGION_OPENING to 
> RS_ZK_REGION_FAILED_OPEN
> 12/09/24 13:13:15 INFO regionserver.HRegionServer: Received request to open 
> region: documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b.
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Attempting to transition node 
> 6d1e1b7bb93486f096173bd401e8ef6b from M_ZK_REGION_OFFLINE to 
> RS_ZK_REGION_OPENING
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Successfully transitioned node 
> 6d1e1b7bb93486f096173bd401e8ef6b from M_ZK_REGION_OFFLINE to 
> RS_ZK_REGION_OPENING
> 12/09/24 13:13:15 DEBUG regionserver.HRegion: Opening region: {NAME => 
> 'documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b.', STARTKEY => '', 
> ENDKEY => '', ENCODED => 6d1e1b7bb93486f096173bd401e8ef6b,}
> 12/09/24 13:13:15 INFO regionserver.HRegion: Setting up tabledescriptor 
> config now ...
> 12/09/24 13:13:15 INFO coprocessor.CoprocessorHost: Class 
> com.mycompany.hbase.documents.DocumentObserverCoprocessor needs to be loaded 
> from a file - file:/path/to/my-coproc.jar.
> 12/09/24 13:13:16 ERROR handler.OpenRegionHandler: Failed open of 
> region=documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b., starting 
> to roll back the global memstore size.
> java.lang.IllegalStateException: Could not instantiate a region instance.
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:3595)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3733)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:332)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:108)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:680)
> Caused by: java.lang.reflect.InvocationTargetException
>   at sun.reflect.GeneratedConstructorAccessor15.newInstance(Unknown 
> Source)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:3592)
>   ... 7 more
> Caused by: java.lang.NoClassDefFoundError: 
> kafka/common/NoBrokersForPartitionException
>   at java.lang.Class.getDeclaredConstructors0(Native Method)
>   at java.lang.Class.privateGetDeclaredConstructors(Class.java:2389)
>   at java.lang.Class.getConstructor0(Class.java:2699)
>   at java.lang.Class.newInstance0(Class.java:326)
>   at java.lang.Class.newInstance(Class.java:308)
>   at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.loadInstance(CoprocessorHost.java:254)
>   at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:227)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCopro

[jira] [Updated] (HBASE-6873) Clean up Coprocessor loading failure handling

2014-01-14 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-6873:
--

Status: Patch Available  (was: Open)

> Clean up Coprocessor loading failure handling
> -
>
> Key: HBASE-6873
> URL: https://issues.apache.org/jira/browse/HBASE-6873
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, regionserver
>Affects Versions: 0.98.0
>Reporter: David Arthur
>Assignee: Andrew Purtell
>Priority: Blocker
> Fix For: 0.98.0, 0.99.0
>
> Attachments: 6873.patch, 6873.patch, 6873.patch, 6873.patch, 
> 6873.patch, 6873.patch, 6873.patch
>
>
> When registering a coprocessor with a missing dependency, the regionserver 
> gets stuck in an infinite fail loop. Restarting the regionserver and/or 
> master has no effect.
> E.g., 
> Load a coprocessor from my-coproc.jar, which uses an external dependency (kafka) 
> that is not included with HBase.
> {code}
> 12/09/24 13:13:15 INFO handler.OpenRegionHandler: Opening of region {NAME => 
> 'documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b.', STARTKEY => '', 
> ENDKEY => '', ENCODED => 6d1e1b7bb93486f096173bd401e8ef6b,} failed, marking 
> as FAILED_OPEN in ZK
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Attempting to transition node 
> 6d1e1b7bb93486f096173bd401e8ef6b from RS_ZK_REGION_OPENING to 
> RS_ZK_REGION_FAILED_OPEN
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Successfully transitioned node 
> 6d1e1b7bb93486f096173bd401e8ef6b from RS_ZK_REGION_OPENING to 
> RS_ZK_REGION_FAILED_OPEN
> 12/09/24 13:13:15 INFO regionserver.HRegionServer: Received request to open 
> region: documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b.
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Attempting to transition node 
> 6d1e1b7bb93486f096173bd401e8ef6b from M_ZK_REGION_OFFLINE to 
> RS_ZK_REGION_OPENING
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Successfully transitioned node 
> 6d1e1b7bb93486f096173bd401e8ef6b from M_ZK_REGION_OFFLINE to 
> RS_ZK_REGION_OPENING
> 12/09/24 13:13:15 DEBUG regionserver.HRegion: Opening region: {NAME => 
> 'documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b.', STARTKEY => '', 
> ENDKEY => '', ENCODED => 6d1e1b7bb93486f096173bd401e8ef6b,}
> 12/09/24 13:13:15 INFO regionserver.HRegion: Setting up tabledescriptor 
> config now ...
> 12/09/24 13:13:15 INFO coprocessor.CoprocessorHost: Class 
> com.mycompany.hbase.documents.DocumentObserverCoprocessor needs to be loaded 
> from a file - file:/path/to/my-coproc.jar.
> 12/09/24 13:13:16 ERROR handler.OpenRegionHandler: Failed open of 
> region=documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b., starting 
> to roll back the global memstore size.
> java.lang.IllegalStateException: Could not instantiate a region instance.
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:3595)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3733)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:332)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:108)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:680)
> Caused by: java.lang.reflect.InvocationTargetException
>   at sun.reflect.GeneratedConstructorAccessor15.newInstance(Unknown 
> Source)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:3592)
>   ... 7 more
> Caused by: java.lang.NoClassDefFoundError: 
> kafka/common/NoBrokersForPartitionException
>   at java.lang.Class.getDeclaredConstructors0(Native Method)
>   at java.lang.Class.privateGetDeclaredConstructors(Class.java:2389)
>   at java.lang.Class.getConstructor0(Class.java:2699)
>   at java.lang.Class.newInstance0(Class.java:326)
>   at java.lang.Class.newInstance(Class.java:308)
>   at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.loadInstance(CoprocessorHost.java:254)
>   at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:227)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCopro

[jira] [Updated] (HBASE-10156) Fix up the HBASE-8755 slowdown when low contention

2014-01-14 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-10156:
--

Attachment: 10156v11.txt

Fix javadoc.

> Fix up the HBASE-8755 slowdown when low contention
> --
>
> Key: HBASE-10156
> URL: https://issues.apache.org/jira/browse/HBASE-10156
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Reporter: stack
>Assignee: stack
> Attachments: 10156.txt, 10156v10.txt, 10156v11.txt, 10156v2.txt, 
> 10156v3.txt, 10156v4.txt, 10156v5.txt, 10156v6.txt, 10156v7.txt, 10156v9.txt, 
> Disrupting.java
>
>
> HBASE-8755 slows our writes when there are only a few clients.  Fix.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-6873) Clean up Coprocessor loading failure handling

2014-01-14 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-6873:
--

Attachment: 6873.patch

The TestShell failure was real:
{noformat}
2014-01-14 21:56:49,890 DEBUG [RS_OPEN_REGION-acer:48810-1] 
coprocessor.CoprocessorHost(189): Loading coprocessor class 
SimpleRegionObserver with path hdfs:/foo.jar and priority 12
2014-01-14 21:56:49,896 ERROR [RS_OPEN_REGION-acer:48810-1] 
coprocessor.CoprocessorHost(771): The coprocessor SimpleRegionObserver threw an 
unexpected exception
java.io.FileNotFoundException: File does not exist: hdfs:/foo.jar
{noformat}

One of the shell tests wants to set a fake coprocessor attribute to prove that 
it can do it.

Attaching new patch.
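
For illustration only, a minimal Java sketch of how a table coprocessor attribute 
like the one in the log above could end up on a table descriptor; the table name 
is hypothetical, and the real shell test does this from Ruby:
{code}
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;

public class FakeCoprocessorAttribute {
  public static HTableDescriptor withFakeCoprocessor() throws Exception {
    HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("t1"));
    // Points at a jar that does not exist. Only the table attribute is written
    // here; the load attempt (and the FileNotFoundException above) happens later,
    // when a region of the table is opened.
    htd.addCoprocessor("SimpleRegionObserver", new Path("hdfs:/foo.jar"), 12, null);
    return htd;
  }
}
{code}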

> Clean up Coprocessor loading failure handling
> -
>
> Key: HBASE-6873
> URL: https://issues.apache.org/jira/browse/HBASE-6873
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, regionserver
>Affects Versions: 0.98.0
>Reporter: David Arthur
>Assignee: Andrew Purtell
>Priority: Blocker
> Fix For: 0.98.0, 0.99.0
>
> Attachments: 6873.patch, 6873.patch, 6873.patch, 6873.patch, 
> 6873.patch, 6873.patch, 6873.patch
>
>
> When registering a coprocessor with a missing dependency, the regionserver 
> gets stuck in an infinite fail loop. Restarting the regionserver and/or 
> master has no effect.
> E.g., 
> Load a coprocessor from my-coproc.jar, which uses an external dependency (kafka) 
> that is not included with HBase.
> {code}
> 12/09/24 13:13:15 INFO handler.OpenRegionHandler: Opening of region {NAME => 
> 'documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b.', STARTKEY => '', 
> ENDKEY => '', ENCODED => 6d1e1b7bb93486f096173bd401e8ef6b,} failed, marking 
> as FAILED_OPEN in ZK
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Attempting to transition node 
> 6d1e1b7bb93486f096173bd401e8ef6b from RS_ZK_REGION_OPENING to 
> RS_ZK_REGION_FAILED_OPEN
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Successfully transitioned node 
> 6d1e1b7bb93486f096173bd401e8ef6b from RS_ZK_REGION_OPENING to 
> RS_ZK_REGION_FAILED_OPEN
> 12/09/24 13:13:15 INFO regionserver.HRegionServer: Received request to open 
> region: documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b.
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Attempting to transition node 
> 6d1e1b7bb93486f096173bd401e8ef6b from M_ZK_REGION_OFFLINE to 
> RS_ZK_REGION_OPENING
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Successfully transitioned node 
> 6d1e1b7bb93486f096173bd401e8ef6b from M_ZK_REGION_OFFLINE to 
> RS_ZK_REGION_OPENING
> 12/09/24 13:13:15 DEBUG regionserver.HRegion: Opening region: {NAME => 
> 'documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b.', STARTKEY => '', 
> ENDKEY => '', ENCODED => 6d1e1b7bb93486f096173bd401e8ef6b,}
> 12/09/24 13:13:15 INFO regionserver.HRegion: Setting up tabledescriptor 
> config now ...
> 12/09/24 13:13:15 INFO coprocessor.CoprocessorHost: Class 
> com.mycompany.hbase.documents.DocumentObserverCoprocessor needs to be loaded 
> from a file - file:/path/to/my-coproc.jar.
> 12/09/24 13:13:16 ERROR handler.OpenRegionHandler: Failed open of 
> region=documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b., starting 
> to roll back the global memstore size.
> java.lang.IllegalStateException: Could not instantiate a region instance.
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:3595)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3733)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:332)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:108)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:680)
> Caused by: java.lang.reflect.InvocationTargetException
>   at sun.reflect.GeneratedConstructorAccessor15.newInstance(Unknown 
> Source)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:3592)
>   ... 7 more
> Caused by: java.lang.NoClassDefFoundError: 
> kafka/common/NoBrokersForPartitionException
>  

[jira] [Commented] (HBASE-10335) AuthFailedException in zookeeper may block replication forever

2014-01-14 Thread Liu Shaohui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871699#comment-13871699
 ] 

Liu Shaohui commented on HBASE-10335:
-

[~lhofhansl]
It is just to debug other exceptions.

> AuthFailedException in zookeeper may block replication forever
> --
>
> Key: HBASE-10335
> URL: https://issues.apache.org/jira/browse/HBASE-10335
> Project: HBase
>  Issue Type: Bug
>  Components: Replication, security
>Affects Versions: 0.94.15, 0.99.0
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Blocker
> Fix For: 0.98.0, 0.96.2, 0.99.0, 0.94.17
>
> Attachments: HBASE-10335-v1.diff, HBASE-10335-v2.diff
>
>
> ReplicationSource will re-choose sinks when it encounters exceptions while 
> shipping edits to the current sink. But if the zookeeper client for the peer 
> cluster goes into the AUTH_FAILED state, the ReplicationSource will always get 
> AuthFailedException. The ReplicationSource does not reconnect to the peer, 
> because reconnectPeer only handles ConnectionLossException and 
> SessionExpiredException. As a result, the replication will print logs like: 
> {quote}
> 2014-01-14,12:07:06,892 INFO 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource: Getting 0 
> rs from peer cluster # 20
> 2014-01-14,12:07:06,892 INFO 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource: Slave 
> cluster looks down: 20 has 0 region servers
> {quote}
> and be blocked forever.
> I think other places may have the same problem of not handling 
> AuthFailedException in zookeeper, e.g. HBASE-8675.
> [~apurtell]



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10339) Mutation::getFamilyMap method was lost in 98

2014-01-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871697#comment-13871697
 ] 

Hudson commented on HBASE-10339:


SUCCESS: Integrated in HBase-0.98 #82 (See 
[https://builds.apache.org/job/HBase-0.98/82/])
HBASE-10339 Mutation::getFamilyMap method was lost in 98 (sershe: rev 1558268)
* 
/hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Mutation.java


> Mutation::getFamilyMap method was lost in 98
> 
>
> Key: HBASE-10339
> URL: https://issues.apache.org/jira/browse/HBASE-10339
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0, 0.99.0
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: 0.98.0, 0.99.0
>
> Attachments: HBASE-10339.patch
>
>
> When backward compat work was done in several jiras, this method was missed. 
> First the return type was changed, then the method was renamed to not break 
> the callers via the new return type, but the legacy method was never re-added as 
> far as I can see.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10322) Strip tags from KV while sending back to client on reads

2014-01-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871696#comment-13871696
 ] 

stack commented on HBASE-10322:
---

Well, for 0.98, we could even hardcode it, given we have only one Codec at this 
time?

> Strip tags from KV while sending back to client on reads
> 
>
> Key: HBASE-10322
> URL: https://issues.apache.org/jira/browse/HBASE-10322
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Blocker
> Fix For: 0.98.0, 0.99.0
>
> Attachments: HBASE-10322.patch
>
>
> Right now we have some inconsistency wrt sending back tags on read. We do 
> this in a scan when using the Java client (Codec based cell block encoding), but 
> during a Get operation, or when a pure PB based Scan comes in, we are not sending 
> back the tags. So we have to do one of the fixes below:
> 1. Send back tags in the missing cases also. But sending back the visibility 
> expression / cell ACL is not correct.
> 2. Don't send back tags in any case. This will be a problem when a tool like 
> ExportTool uses the scan to export the table data. We will miss exporting the 
> cell visibility/ACL.
> 3. Send back tags based on some condition. It has to be on a per-scan basis. 
> The simplest way is to pass some kind of attribute in the Scan which says whether 
> to send back tags or not. But trusting whatever the scan specifies might not 
> be correct IMO. Then comes the way of checking the user who is doing the 
> scan: only send back tags when an HBase super user is doing the scan. So 
> for a case like the Export Tool's, the execution should happen as a super 
> user.
> So IMO we should go with #3.
> Patch coming soon.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-9343) Implement stateless scanner for Stargate

2014-01-14 Thread Vandana Ayyalasomayajula (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871695#comment-13871695
 ] 

Vandana Ayyalasomayajula commented on HBASE-9343:
-

[~ndimiduk] I added a testSuffixGlobbingXML test in TestGetAndPutResource to make 
sure the existing row-based suffix-globbing behavior stays consistent. As mentioned 
in the above document, it will boil down to a scanner with a prefix filter. 
Can I add a dependent jira for documentation and old scanner deprecation if 
needed?
Thanks all for the quick reviews. 

> Implement stateless scanner for Stargate
> 
>
> Key: HBASE-9343
> URL: https://issues.apache.org/jira/browse/HBASE-9343
> Project: HBase
>  Issue Type: Improvement
>  Components: REST
>Affects Versions: 0.94.11
>Reporter: Vandana Ayyalasomayajula
>Assignee: Vandana Ayyalasomayajula
>Priority: Minor
> Fix For: 0.98.1, 0.99.0
>
> Attachments: HBASE-9343_94.00.patch, HBASE-9343_94.01.patch, 
> HBASE-9343_trunk.00.patch, HBASE-9343_trunk.01.patch, 
> HBASE-9343_trunk.01.patch, HBASE-9343_trunk.02.patch, 
> HBASE-9343_trunk.03.patch, HBASE-9343_trunk.04.patch
>
>
> The current scanner implementation stores state and hence is not 
> very suitable for REST server failure scenarios. The current JIRA proposes to 
> implement a stateless scanner. In the first version of the patch, a new 
> resource class "ScanResource" has been added and all the scan parameters will 
> be specified as query params. 
> The following are the scan parameters:
> startrow - The start row for the scan.
> endrow - The end row for the scan.
> columns - The columns to scan. 
> starttime, endtime - To only retrieve columns within a specific range of 
> version timestamps, both start and end time must be specified.
> maxversions - To limit the number of versions of each column to be returned.
> batchsize - To limit the maximum number of values returned for each call to 
> next().
> limit - The number of rows to return in the scan operation.
> More on the start row, end row and limit parameters:
> 1. If start row, end row and limit are not specified, then the whole table will 
> be scanned.
> 2. If start row and limit (say N) are specified, then the scan operation will 
> return N rows from the start row specified.
> 3. If only the limit parameter is specified, then the scan operation will return 
> N rows from the start of the table.
> 4. If limit and end row are specified, then the scan operation will return N 
> rows from the start of the table till the end row. If the end row is 
> reached before N rows (say M, and M < N), then M rows will be returned to 
> the user.
> 5. If start row, end row and limit (say N) are specified and N < the number 
> of rows between start row and end row, then N rows from the start row 
> will be returned to the user. If N > the number of rows between start row and 
> end row (say M), then M rows will be returned to the 
> user.
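
For illustration, a hypothetical request against the proposed ScanResource showing 
how the query parameters above combine; the host, table, endpoint path and values 
are all made up:
{code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class StatelessScanExample {
  public static void main(String[] args) throws Exception {
    // startrow + limit (case 2 above): at most 100 rows starting at "user100".
    URL url = new URL("http://resthost:8080/mytable/scan"
        + "?startrow=user100&limit=100&columns=cf:a,cf:b&maxversions=1");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestProperty("Accept", "application/json");
    BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
    try {
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line);
      }
    } finally {
      in.close();
    }
  }
}
{code}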



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10322) Strip tags from KV while sending back to client on reads

2014-01-14 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871694#comment-13871694
 ] 

ramkrishna.s.vasudevan commented on HBASE-10322:


bq. Anoop fed me the above off line.  It just seems wrong that the codec needs to 
know whether it is on the 'server' or the 'client'.  Why can't it be TagsCodec and 
StripTagsCodec, and then at the various junctions (client sending, server 
receiving, WAL writing, etc.) they read from configuration which Codec to use, or 
which codec to use for decoding a particular Encoder; e.g. on the server, we'd 
write back to the client using NoTagsKVCodec.
True.. But I think this has to happen with codec negotiation.  Only then will the 
server and the client know what the other is using.  
TagsCodec and StripTagsCodec have to be specified in a configuration (which I 
call a mapping - Anoop does not like that :)) and used on either 
side. 
Maybe we can see how costly stripping the tags out of every KV in the 
CPs is.. we can benchmark it once?
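
A rough micro-benchmark sketch of that cost, assuming "stripping" means rebuilding 
each KeyValue from its components so the tags are simply not copied over; the value 
size and iteration count are made up:
{code}
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.util.Bytes;

public class StripTagsBench {
  // Rebuild the cell without carrying its tags over: this is the per-KV copy cost.
  static KeyValue stripTags(Cell c) {
    return new KeyValue(CellUtil.cloneRow(c), CellUtil.cloneFamily(c),
        CellUtil.cloneQualifier(c), c.getTimestamp(),
        KeyValue.Type.codeToType(c.getTypeByte()), CellUtil.cloneValue(c));
  }

  public static void main(String[] args) {
    KeyValue src = new KeyValue(Bytes.toBytes("row"), Bytes.toBytes("f"),
        Bytes.toBytes("q"), System.currentTimeMillis(), KeyValue.Type.Put,
        new byte[100]);
    int n = 1000000;
    long copied = 0;
    long start = System.nanoTime();
    for (int i = 0; i < n; i++) {
      copied += stripTags(src).getLength();  // force the rebuild every iteration
    }
    long ms = (System.nanoTime() - start) / 1000000;
    System.out.println(n + " rebuilds, " + copied + " bytes, " + ms + " ms");
  }
}
{code}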

> Strip tags from KV while sending back to client on reads
> 
>
> Key: HBASE-10322
> URL: https://issues.apache.org/jira/browse/HBASE-10322
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Blocker
> Fix For: 0.98.0, 0.99.0
>
> Attachments: HBASE-10322.patch
>
>
> Right now we have some inconsistency wrt sending back tags on read. We do 
> this in a scan when using the Java client (Codec based cell block encoding), but 
> during a Get operation, or when a pure PB based Scan comes in, we are not sending 
> back the tags. So we have to do one of the fixes below:
> 1. Send back tags in the missing cases also. But sending back the visibility 
> expression / cell ACL is not correct.
> 2. Don't send back tags in any case. This will be a problem when a tool like 
> ExportTool uses the scan to export the table data. We will miss exporting the 
> cell visibility/ACL.
> 3. Send back tags based on some condition. It has to be on a per-scan basis. 
> The simplest way is to pass some kind of attribute in the Scan which says whether 
> to send back tags or not. But trusting whatever the scan specifies might not 
> be correct IMO. Then comes the way of checking the user who is doing the 
> scan: only send back tags when an HBase super user is doing the scan. So 
> for a case like the Export Tool's, the execution should happen as a super 
> user.
> So IMO we should go with #3.
> Patch coming soon.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10335) AuthFailedException in zookeeper may block replication forever

2014-01-14 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871691#comment-13871691
 ] 

Lars Hofhansl commented on HBASE-10335:
---

Actually, this is somewhat unrelated, no?
{code}
+  if (LOG.isDebugEnabled()) {
+LOG.debug("Fetch salves addresses failed.", ke);
+  }
{code}

> AuthFailedException in zookeeper may block replication forever
> --
>
> Key: HBASE-10335
> URL: https://issues.apache.org/jira/browse/HBASE-10335
> Project: HBase
>  Issue Type: Bug
>  Components: Replication, security
>Affects Versions: 0.94.15, 0.99.0
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Blocker
> Fix For: 0.98.0, 0.96.2, 0.99.0, 0.94.17
>
> Attachments: HBASE-10335-v1.diff, HBASE-10335-v2.diff
>
>
> ReplicationSource will re-choose sinks when it encounters exceptions while 
> shipping edits to the current sink. But if the zookeeper client for the peer 
> cluster goes into the AUTH_FAILED state, the ReplicationSource will always get 
> AuthFailedException. The ReplicationSource does not reconnect to the peer, 
> because reconnectPeer only handles ConnectionLossException and 
> SessionExpiredException. As a result, the replication will print logs like: 
> {quote}
> 2014-01-14,12:07:06,892 INFO 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource: Getting 0 
> rs from peer cluster # 20
> 2014-01-14,12:07:06,892 INFO 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource: Slave 
> cluster looks down: 20 has 0 region servers
> {quote}
> and be blocked forever.
> I think other places may have the same problem of not handling 
> AuthFailedException in zookeeper, e.g. HBASE-8675.
> [~apurtell]



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10342) RowKey Prefix Bloom Filter

2014-01-14 Thread Liyin Tang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871684#comment-13871684
 ] 

Liyin Tang commented on HBASE-10342:


This feature should benefit Salted Tables as well. 

> RowKey Prefix Bloom Filter
> --
>
> Key: HBASE-10342
> URL: https://issues.apache.org/jira/browse/HBASE-10342
> Project: HBase
>  Issue Type: New Feature
>Reporter: Liyin Tang
>
> When designing an HBase schema for some use cases, it is quite common to combine 
> multiple pieces of information within the RowKey. For instance, assume that the rowkey is 
> constructed as md5(id1) + id1 + id2, and the user wants to scan all the rowkeys 
> starting with id1. In such a case, a rowkey prefix bloom filter is able to cut 
> more unnecessary seeks during the scan.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10322) Strip tags from KV while sending back to client on reads

2014-01-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871683#comment-13871683
 ] 

stack commented on HBASE-10322:
---

bq. Another suggestion, at least to avoid changes to the codec part, is to have an 
init() in Codec.java. So once the codec is instantiated we could set this 
flag as true or false based on client or server. 

Anoop fed me the above off line.  It just seems wrong that the codec needs to know 
whether it is on the 'server' or the 'client'.  Why can't it be TagsCodec and 
StripTagsCodec, and then at the various junctions (client sending, server 
receiving, WAL writing, etc.) they read from configuration which Codec to use, or 
which codec to use for decoding a particular Encoder; e.g. on the server, we'd 
write back to the client using NoTagsKVCodec.

Pardon me if I am making a suggestion you fellas have already said won't work.

bq. The major concern with that was we will have to recreate KVs (in the filter/cp) 
and byte array copying. The perf penalty is a major concern

Are we writing new KVs or creating a cell block?  If the latter, then it'll be 
no more expensive to copy a KV with or without the Tags?

To get Andrew his RC sooner, will life be easier with no tags from server to 
client?  In a later HBase we can add codec negotiation, etc.?

Good stuff lads.
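
A sketch of the junction-level selection being suggested, assuming codecs keep 
being picked by class name from configuration (as the client side already does); 
the configuration key and the NoTagsKVCodec class are hypothetical, they only 
exist as a proposal in this discussion:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.codec.Codec;
import org.apache.hadoop.hbase.codec.KeyValueCodec;

public class CodecSelection {
  // Server-side response path: read which codec to use from configuration,
  // instead of one codec asking "am I on the server or on the client?".
  static Codec responseCodec(Configuration conf) throws Exception {
    String cls = conf.get("hbase.regionserver.rpc.response.codec",  // hypothetical key
        KeyValueCodec.class.getName());  // a NoTagsKVCodec could be plugged in here
    return (Codec) Class.forName(cls).newInstance();
  }
}
{code}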



> Strip tags from KV while sending back to client on reads
> 
>
> Key: HBASE-10322
> URL: https://issues.apache.org/jira/browse/HBASE-10322
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Blocker
> Fix For: 0.98.0, 0.99.0
>
> Attachments: HBASE-10322.patch
>
>
> Right now we have some inconsistency wrt sending back tags on read. We do 
> this in a scan when using the Java client (Codec based cell block encoding), but 
> during a Get operation, or when a pure PB based Scan comes in, we are not sending 
> back the tags. So we have to do one of the fixes below:
> 1. Send back tags in the missing cases also. But sending back the visibility 
> expression / cell ACL is not correct.
> 2. Don't send back tags in any case. This will be a problem when a tool like 
> ExportTool uses the scan to export the table data. We will miss exporting the 
> cell visibility/ACL.
> 3. Send back tags based on some condition. It has to be on a per-scan basis. 
> The simplest way is to pass some kind of attribute in the Scan which says whether 
> to send back tags or not. But trusting whatever the scan specifies might not 
> be correct IMO. Then comes the way of checking the user who is doing the 
> scan: only send back tags when an HBase super user is doing the scan. So 
> for a case like the Export Tool's, the execution should happen as a super 
> user.
> So IMO we should go with #3.
> Patch coming soon.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HBASE-10342) RowKey Prefix Bloom Filter

2014-01-14 Thread Liyin Tang (JIRA)
Liyin Tang created HBASE-10342:
--

 Summary: RowKey Prefix Bloom Filter
 Key: HBASE-10342
 URL: https://issues.apache.org/jira/browse/HBASE-10342
 Project: HBase
  Issue Type: New Feature
Reporter: Liyin Tang


When designing an HBase schema for some use cases, it is quite common to combine 
multiple pieces of information within the RowKey. For instance, assume that the rowkey is 
constructed as md5(id1) + id1 + id2, and the user wants to scan all the rowkeys 
starting with id1. In such a case, a rowkey prefix bloom filter is able to cut 
more unnecessary seeks during the scan.
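
A small sketch of the schema and scan described above; the id values and helper 
names are made up for illustration:
{code}
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.PrefixFilter;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.util.MD5Hash;

public class PrefixRowKeyExample {
  // rowkey = md5(id1) + id1 + id2, as in the description above
  static byte[] rowKey(String id1, String id2) {
    byte[] salt = Bytes.toBytes(MD5Hash.getMD5AsHex(Bytes.toBytes(id1)));
    return Bytes.add(salt, Bytes.toBytes(id1), Bytes.toBytes(id2));
  }

  // All rows for a given id1 share the md5(id1) + id1 prefix; that prefix is what
  // a rowkey-prefix bloom filter could be built on to skip store files that
  // contain no matching rows.
  static Scan scanForId1(String id1) {
    byte[] prefix = Bytes.add(
        Bytes.toBytes(MD5Hash.getMD5AsHex(Bytes.toBytes(id1))), Bytes.toBytes(id1));
    Scan scan = new Scan(prefix);
    scan.setFilter(new PrefixFilter(prefix));
    return scan;
  }
}
{code}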



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10335) AuthFailedException in zookeeper may block replication forever

2014-01-14 Thread Liang Xie (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871677#comment-13871677
 ] 

Liang Xie commented on HBASE-10335:
---

Thanks all for the reviews, will commit it shortly.

> AuthFailedException in zookeeper may block replication forever
> --
>
> Key: HBASE-10335
> URL: https://issues.apache.org/jira/browse/HBASE-10335
> Project: HBase
>  Issue Type: Bug
>  Components: Replication, security
>Affects Versions: 0.94.15, 0.99.0
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Blocker
> Fix For: 0.98.0, 0.96.2, 0.99.0, 0.94.17
>
> Attachments: HBASE-10335-v1.diff, HBASE-10335-v2.diff
>
>
> ReplicationSource will re-choose sinks when it encounters exceptions while 
> shipping edits to the current sink. But if the zookeeper client for the peer 
> cluster goes into the AUTH_FAILED state, the ReplicationSource will always get 
> AuthFailedException. The ReplicationSource does not reconnect to the peer, 
> because reconnectPeer only handles ConnectionLossException and 
> SessionExpiredException. As a result, the replication will print logs like: 
> {quote}
> 2014-01-14,12:07:06,892 INFO 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource: Getting 0 
> rs from peer cluster # 20
> 2014-01-14,12:07:06,892 INFO 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource: Slave 
> cluster looks down: 20 has 0 region servers
> {quote}
> and be blocked forever.
> I think other places may have the same problem of not handling 
> AuthFailedException in zookeeper, e.g. HBASE-8675.
> [~apurtell]



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10282) We can't assure that the first ZK server is active server in MiniZooKeeperCluster

2014-01-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871674#comment-13871674
 ] 

stack commented on HBASE-10282:
---

Do as you see fit [~tobe].  Thanks for working on this.

> We can't assure that the first ZK server is active server in 
> MiniZooKeeperCluster
> -
>
> Key: HBASE-10282
> URL: https://issues.apache.org/jira/browse/HBASE-10282
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.3
>Reporter: chendihao
>Assignee: chendihao
>Priority: Minor
>
> Thanks to HBASE-3052, we're able to run multiple zk servers in minicluster. 
> However, it's confusing to keep the variable activeZKServerIndex as zero and 
> assure the first zk server is always the active one. I think returning the 
> first server's client port is for testing, and it seems that we can directly 
> return the first item of the list. Anyway, the concept of "active" here is 
> not the same as zk's. 
> It's confusing when I read the code so I think we should fix it.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10322) Strip tags from KV while sending back to client on reads

2014-01-14 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871672#comment-13871672
 ] 

ramkrishna.s.vasudevan commented on HBASE-10322:


As Anoop says, even with Codec negotiation (HBASE-9681) the problem is the same, in 
the sense that any codec we write should behave differently when it works from the 
client side and from the server side, at least in terms of tags.  So we should 
have a mechanism to decide whether the codec being instantiated is on the client 
or on the server to induce this behaviour.
Stripping tags is the simplest of the options, but performance was a major 
concern.  In fact, in the tags patch there was a proposal to attach tags as an 
in-memory object in the KV rather than as a byte array.  That would have made 
stripping tags easier.

> Strip tags from KV while sending back to client on reads
> 
>
> Key: HBASE-10322
> URL: https://issues.apache.org/jira/browse/HBASE-10322
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Blocker
> Fix For: 0.98.0, 0.99.0
>
> Attachments: HBASE-10322.patch
>
>
> Right now we have some inconsistency wrt sending back tags on read. We do 
> this in a scan when using the Java client (Codec based cell block encoding), but 
> during a Get operation, or when a pure PB based Scan comes in, we are not sending 
> back the tags. So we have to do one of the fixes below:
> 1. Send back tags in the missing cases also. But sending back the visibility 
> expression / cell ACL is not correct.
> 2. Don't send back tags in any case. This will be a problem when a tool like 
> ExportTool uses the scan to export the table data. We will miss exporting the 
> cell visibility/ACL.
> 3. Send back tags based on some condition. It has to be on a per-scan basis. 
> The simplest way is to pass some kind of attribute in the Scan which says whether 
> to send back tags or not. But trusting whatever the scan specifies might not 
> be correct IMO. Then comes the way of checking the user who is doing the 
> scan: only send back tags when an HBase super user is doing the scan. So 
> for a case like the Export Tool's, the execution should happen as a super 
> user.
> So IMO we should go with #3.
> Patch coming soon.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-9343) Implement stateless scanner for Stargate

2014-01-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871673#comment-13871673
 ] 

stack commented on HBASE-9343:
--

That is sufficient justification for me [~apurtell]

> Implement stateless scanner for Stargate
> 
>
> Key: HBASE-9343
> URL: https://issues.apache.org/jira/browse/HBASE-9343
> Project: HBase
>  Issue Type: Improvement
>  Components: REST
>Affects Versions: 0.94.11
>Reporter: Vandana Ayyalasomayajula
>Assignee: Vandana Ayyalasomayajula
>Priority: Minor
> Fix For: 0.98.1, 0.99.0
>
> Attachments: HBASE-9343_94.00.patch, HBASE-9343_94.01.patch, 
> HBASE-9343_trunk.00.patch, HBASE-9343_trunk.01.patch, 
> HBASE-9343_trunk.01.patch, HBASE-9343_trunk.02.patch, 
> HBASE-9343_trunk.03.patch, HBASE-9343_trunk.04.patch
>
>
> The current scanner implementation stores state and hence is not 
> very suitable for REST server failure scenarios. The current JIRA proposes to 
> implement a stateless scanner. In the first version of the patch, a new 
> resource class "ScanResource" has been added and all the scan parameters will 
> be specified as query params. 
> The following are the scan parameters:
> startrow - The start row for the scan.
> endrow - The end row for the scan.
> columns - The columns to scan. 
> starttime, endtime - To only retrieve columns within a specific range of 
> version timestamps, both start and end time must be specified.
> maxversions - To limit the number of versions of each column to be returned.
> batchsize - To limit the maximum number of values returned for each call to 
> next().
> limit - The number of rows to return in the scan operation.
> More on the start row, end row and limit parameters:
> 1. If start row, end row and limit are not specified, then the whole table will 
> be scanned.
> 2. If start row and limit (say N) are specified, then the scan operation will 
> return N rows from the start row specified.
> 3. If only the limit parameter is specified, then the scan operation will return 
> N rows from the start of the table.
> 4. If limit and end row are specified, then the scan operation will return N 
> rows from the start of the table till the end row. If the end row is 
> reached before N rows (say M, and M < N), then M rows will be returned to 
> the user.
> 5. If start row, end row and limit (say N) are specified and N < the number 
> of rows between start row and end row, then N rows from the start row 
> will be returned to the user. If N > the number of rows between start row and 
> end row (say M), then M rows will be returned to the 
> user.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10335) AuthFailedException in zookeeper may block replication forever

2014-01-14 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-10335:
--

Fix Version/s: 0.94.17
               0.99.0
               0.96.2
               0.98.0

> AuthFailedException in zookeeper may block replication forever
> --
>
> Key: HBASE-10335
> URL: https://issues.apache.org/jira/browse/HBASE-10335
> Project: HBase
>  Issue Type: Bug
>  Components: Replication, security
>Affects Versions: 0.94.15, 0.99.0
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Blocker
> Fix For: 0.98.0, 0.96.2, 0.99.0, 0.94.17
>
> Attachments: HBASE-10335-v1.diff, HBASE-10335-v2.diff
>
>
> ReplicationSource will re-choose sinks when it encounters exceptions while 
> shipping edits to the current sink. But if the zookeeper client for the peer 
> cluster goes into the AUTH_FAILED state, the ReplicationSource will always get 
> AuthFailedException. The ReplicationSource does not reconnect to the peer, 
> because reconnectPeer only handles ConnectionLossException and 
> SessionExpiredException. As a result, the replication will print logs like: 
> {quote}
> 2014-01-14,12:07:06,892 INFO 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource: Getting 0 
> rs from peer cluster # 20
> 2014-01-14,12:07:06,892 INFO 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource: Slave 
> cluster looks down: 20 has 0 region servers
> {quote}
> and be blocked forever.
> I think other places may have the same problem of not handling 
> AuthFailedException in zookeeper, e.g. HBASE-8675.
> [~apurtell]



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10335) AuthFailedException in zookeeper may block replication forever

2014-01-14 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871649#comment-13871649
 ] 

Lars Hofhansl commented on HBASE-10335:
---

+1

> AuthFailedException in zookeeper may block replication forever
> --
>
> Key: HBASE-10335
> URL: https://issues.apache.org/jira/browse/HBASE-10335
> Project: HBase
>  Issue Type: Bug
>  Components: Replication, security
>Affects Versions: 0.94.15, 0.99.0
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Blocker
> Attachments: HBASE-10335-v1.diff, HBASE-10335-v2.diff
>
>
> ReplicationSource will re-choose sinks when it encounters exceptions while 
> shipping edits to the current sink. But if the zookeeper client for the peer 
> cluster goes into the AUTH_FAILED state, the ReplicationSource will always get 
> AuthFailedException. The ReplicationSource does not reconnect to the peer, 
> because reconnectPeer only handles ConnectionLossException and 
> SessionExpiredException. As a result, the replication will print logs like: 
> {quote}
> 2014-01-14,12:07:06,892 INFO 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource: Getting 0 
> rs from peer cluster # 20
> 2014-01-14,12:07:06,892 INFO 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource: Slave 
> cluster looks down: 20 has 0 region servers
> {quote}
> and be blocked forever.
> I think other places may have the same problem of not handling 
> AuthFailedException in zookeeper, e.g. HBASE-8675.
> [~apurtell]



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10339) Mutation::getFamilyMap method was lost in 98

2014-01-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871647#comment-13871647
 ] 

Hudson commented on HBASE-10339:


SUCCESS: Integrated in HBase-0.98-on-Hadoop-1.1 #75 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/75/])
HBASE-10339 Mutation::getFamilyMap method was lost in 98 (sershe: rev 1558268)
* 
/hbase/branches/0.98/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Mutation.java


> Mutation::getFamilyMap method was lost in 98
> 
>
> Key: HBASE-10339
> URL: https://issues.apache.org/jira/browse/HBASE-10339
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0, 0.99.0
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: 0.98.0, 0.99.0
>
> Attachments: HBASE-10339.patch
>
>
> When backward compat work was done in several jiras, this method was missed. 
> First the return type was changed, then the method was renamed to not break 
> the callers via the new return type, but the legacy method was never re-added as 
> far as I can see.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10338) Region server fails to start with AccessController coprocessor if installed into RegionServerCoprocessorHost

2014-01-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871646#comment-13871646
 ] 

Hudson commented on HBASE-10338:


SUCCESS: Integrated in HBase-0.98-on-Hadoop-1.1 #75 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/75/])
HBASE-10338. Region server fails to start with AccessController coprocessor if 
installed into RegionServerCoprocessorHost (Vandana Ayyalasomayajula) 
(apurtell: rev 1558261)
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/SecureTestUtil.java
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java


> Region server fails to start with AccessController coprocessor if installed 
> into RegionServerCoprocessorHost
> 
>
> Key: HBASE-10338
> URL: https://issues.apache.org/jira/browse/HBASE-10338
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors, regionserver
>Affects Versions: 0.98.0
>Reporter: Vandana Ayyalasomayajula
>Assignee: Vandana Ayyalasomayajula
>Priority: Minor
> Fix For: 0.98.0, 0.96.2, 0.99.0
>
> Attachments: 10338.1-0.96.patch, 10338.1-0.98.patch, 10338.1.patch, 
> 10338.1.patch, HBASE-10338.0.patch
>
>
> A runtime exception is thrown when the AccessController CP is used with the 
> region server. This is happening because the region server coprocessor host is 
> created before zookeeper is initialized in the region server.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10322) Strip tags from KV while sending back to client on reads

2014-01-14 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871644#comment-13871644
 ] 

Anoop Sam John commented on HBASE-10322:


BTW, the stripping of tags can be achieved by having the CP/Filter that handles a 
tag remove it from the KV.  This would allow system tags to be blocked from being 
sent back while other user tags still get back to the client. (The decision can 
be taken by the CP/Filter which handles the tags.)   This was there from the 
beginning of our discussions internally here.  Just saying.   The major concern 
with that was we will have to recreate KVs (in the filter/cp) and do byte array 
copying.  The perf penalty is a major concern  :(
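
A sketch of what that CP/Filter-side strip might look like, assuming a Filter 
transform hook along the lines of Filter#transformCell(Cell) in later HBase 
versions, and rebuilding the KeyValue so the tags are simply left behind (the 
class name is made up):
{code}
import java.io.IOException;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.filter.FilterBase;

public class StripTagsFilter extends FilterBase {
  @Override
  public Cell transformCell(Cell c) throws IOException {
    // Rebuild the cell from its components; tags are not copied over. This is
    // exactly the KV re-creation and byte array copying the comment worries about.
    return new KeyValue(CellUtil.cloneRow(c), CellUtil.cloneFamily(c),
        CellUtil.cloneQualifier(c), c.getTimestamp(),
        KeyValue.Type.codeToType(c.getTypeByte()), CellUtil.cloneValue(c));
  }
}
{code}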

> Strip tags from KV while sending back to client on reads
> 
>
> Key: HBASE-10322
> URL: https://issues.apache.org/jira/browse/HBASE-10322
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Blocker
> Fix For: 0.98.0, 0.99.0
>
> Attachments: HBASE-10322.patch
>
>
> Right now we have some inconsistency wrt sending back tags on read. We do 
> this in a scan when using the Java client (Codec based cell block encoding), but 
> during a Get operation, or when a pure PB based Scan comes in, we are not sending 
> back the tags. So we have to do one of the fixes below:
> 1. Send back tags in the missing cases also. But sending back the visibility 
> expression / cell ACL is not correct.
> 2. Don't send back tags in any case. This will be a problem when a tool like 
> ExportTool uses the scan to export the table data. We will miss exporting the 
> cell visibility/ACL.
> 3. Send back tags based on some condition. It has to be on a per-scan basis. 
> The simplest way is to pass some kind of attribute in the Scan which says whether 
> to send back tags or not. But trusting whatever the scan specifies might not 
> be correct IMO. Then comes the way of checking the user who is doing the 
> scan: only send back tags when an HBase super user is doing the scan. So 
> for a case like the Export Tool's, the execution should happen as a super 
> user.
> So IMO we should go with #3.
> Patch coming soon.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10316) Canary#RegionServerMonitor#monitorRegionServers() should close the scanner returned by table.getScanner()

2014-01-14 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10316:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Canary#RegionServerMonitor#monitorRegionServers() should close the scanner 
> returned by table.getScanner()
> -
>
> Key: HBASE-10316
> URL: https://issues.apache.org/jira/browse/HBASE-10316
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: 10316.txt
>
>
> At line 624, in the else block, ResultScanner returned by table.getScanner() 
> is not closed.
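
For reference, the usual fix pattern, sketched with hypothetical table and scan 
objects:
{code}
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;

public class ScannerCloseSketch {
  static void scanOnce(HTable table, Scan scan) throws Exception {
    ResultScanner scanner = table.getScanner(scan);
    try {
      for (Result r : scanner) {
        // consume results
      }
    } finally {
      scanner.close();  // the close the description says is missing in the else block
    }
  }
}
{code}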



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10294) Some synchronization on ServerManager#onlineServers can be removed

2014-01-14 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10294:
---

Status: Open  (was: Patch Available)

> Some synchronization on ServerManager#onlineServers can be removed
> --
>
> Key: HBASE-10294
> URL: https://issues.apache.org/jira/browse/HBASE-10294
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: 10294-v1.txt
>
>
> ServerManager#onlineServers is a ConcurrentHashMap
> Yet I found that some accesses to it are synchronized, which is unnecessary.
> Here is one example:
> {code}
>   public Map getOnlineServers() {
> // Presumption is that iterating the returned Map is OK.
> synchronized (this.onlineServers) {
>   return Collections.unmodifiableMap(this.onlineServers);
> {code}
> Note: not all accesses to ServerManager#onlineServers are synchronized.
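
A minimal sketch of the simplification being suggested, using a stand-in class 
rather than the real ServerManager (whose map generics were stripped by the mail 
formatting above):
{code}
import java.util.Collections;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class OnlineServersHolder<K, V> {  // stand-in for ServerManager
  private final ConcurrentHashMap<K, V> onlineServers = new ConcurrentHashMap<K, V>();

  public Map<K, V> getOnlineServers() {
    // ConcurrentHashMap is safe for concurrent reads and gives weakly consistent
    // iteration, so wrapping this read in a synchronized block buys nothing.
    return Collections.unmodifiableMap(this.onlineServers);
  }
}
{code}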



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10123) Change default ports; move them out of linux ephemeral port range

2014-01-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871640#comment-13871640
 ] 

Hadoop QA commented on HBASE-10123:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12623017/hbase-10123.v3.patch
  against trunk revision .
  ATTACHMENT ID: 12623017

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8430//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8430//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8430//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8430//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8430//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8430//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8430//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8430//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8430//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8430//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8430//console

This message is automatically generated.

> Change default ports; move them out of linux ephemeral port range
> -
>
> Key: HBASE-10123
> URL: https://issues.apache.org/jira/browse/HBASE-10123
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.96.1.1
>Reporter: stack
>Assignee: Jonathan Hsieh
>Priority: Critical
> Fix For: 0.98.0
>
> Attachments: hbase-10123.patch, hbase-10123.v2.patch, 
> hbase-10123.v3.patch
>
>
> Our defaults clash w/ the range linux assigns itself for creating come-and-go 
> ephemeral ports; likely in our history we've clashed w/ a random, short-lived 
> process.  While easy to change the defaults, we should just ship w/ defaults 
> that make sense.  We could hoist ourselves up into the 7 or 8k range.
> See http://www.ncftp.com/ncftpd/doc/misc/ephemeral_ports.html



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10338) Region server fails to start with AccessController coprocessor if installed into RegionServerCoprocessorHost

2014-01-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871631#comment-13871631
 ] 

Hudson commented on HBASE-10338:


SUCCESS: Integrated in HBase-TRUNK #4820 (See 
[https://builds.apache.org/job/HBase-TRUNK/4820/])
HBASE-10338. Region server fails to start with AccessController coprocessor if 
installed into RegionServerCoprocessorHost (Vandana Ayyalasomayajula) 
(apurtell: rev 1558260)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/SecureTestUtil.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java


> Region server fails to start with AccessController coprocessor if installed 
> into RegionServerCoprocessorHost
> 
>
> Key: HBASE-10338
> URL: https://issues.apache.org/jira/browse/HBASE-10338
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors, regionserver
>Affects Versions: 0.98.0
>Reporter: Vandana Ayyalasomayajula
>Assignee: Vandana Ayyalasomayajula
>Priority: Minor
> Fix For: 0.98.0, 0.96.2, 0.99.0
>
> Attachments: 10338.1-0.96.patch, 10338.1-0.98.patch, 10338.1.patch, 
> 10338.1.patch, HBASE-10338.0.patch
>
>
> A runtime exception is thrown when the AccessController CP is used with the 
> region server. This is happening because the region server coprocessor host is 
> created before zookeeper is initialized in the region server.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10339) Mutation::getFamilyMap method was lost in 98

2014-01-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871632#comment-13871632
 ] 

Hudson commented on HBASE-10339:


SUCCESS: Integrated in HBase-TRUNK #4820 (See 
[https://builds.apache.org/job/HBase-TRUNK/4820/])
HBASE-10339 Mutation::getFamilyMap method was lost in 98 (sershe: rev 1558267)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Mutation.java


> Mutation::getFamilyMap method was lost in 98
> 
>
> Key: HBASE-10339
> URL: https://issues.apache.org/jira/browse/HBASE-10339
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0, 0.99.0
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: 0.98.0, 0.99.0
>
> Attachments: HBASE-10339.patch
>
>
> When backward compat work was done in several jiras, this method was missed. 
> First the return type was changed, then the method was renamed to not break 
> the callers via the new return type, but the legacy method was never re-added as 
> far as I can see.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Comment Edited] (HBASE-10322) Strip tags from KV while sending back to client on reads

2014-01-14 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871622#comment-13871622
 ] 

Anoop Sam John edited comment on HBASE-10322 at 1/15/14 4:15 AM:
-

Selectively sending back tags is one problem..  But that is secondary...
The 1st problem is making the codec send tags when its Encoder encodes data from 
the client to the server.  The same Codec Encoder, when working on the server side, 
should not send back the tags.  This is where we were needing the context information. 
Also pls note one more thing: we use a WALCellCodec whose Encoder uses the 
KVCodec for writing to the WAL. When writing to the WAL, even if it is inside a 
server, it must write tags..   We have to solve this problem.. Selective 
sending based on user is secondary, and it might be simpler than the 1st IMO.


was (Author: anoop.hbase):
Selectively sending back tags is one problem..  But this is secondary...
The 1st problem is making the codec send tags when its Encoder encodes data from 
the client to the server.  The same Codec Encoder, when working on the server side, 
should not send back the tags.  This is where we were needing the context information. 
Also pls note one more thing: we use a WALCellCodec whose Encoder uses the 
KVCodec for writing to the WAL. When writing to the WAL, even if it is inside a 
server, it must write tags..   We have to solve this problem.. Selective 
sending based on user is secondary, and it might be simpler than the 1st IMO.

> Strip tags from KV while sending back to client on reads
> 
>
> Key: HBASE-10322
> URL: https://issues.apache.org/jira/browse/HBASE-10322
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Blocker
> Fix For: 0.98.0, 0.99.0
>
> Attachments: HBASE-10322.patch
>
>
> Right now we have some inconsistency wrt sending back tags on read. We do 
> this in scans when using the Java client (Codec based cell block encoding), but 
> during a Get operation, or when a pure PB based Scan comes in, we are not 
> sending back the tags.  So we have to do one of the fixes below:
> 1. Send back tags in the missing cases also. But sending back the visibility 
> expression/cell ACL is not correct.
> 2. Don't send back tags in any case. This will be a problem when a tool like 
> ExportTool uses the scan to export the table data. We will miss exporting the 
> cell visibility/ACL.
> 3. Send back tags based on some condition. It has to be on a per scan basis. 
> The simplest way is to pass some kind of attribute in Scan which says whether 
> to send back tags or not. But trusting something the scan specifies might not 
> be correct IMO. Then comes the way of checking the user who is doing the 
> scan: only send back tags when an HBase super user is doing the scan. So 
> when a case like the Export Tool's comes, the execution should happen as a 
> super user.
> So IMO we should go with #3.
> Patch coming soon.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10322) Strip tags from KV while sending back to client on reads

2014-01-14 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871622#comment-13871622
 ] 

Anoop Sam John commented on HBASE-10322:


Selectively sending back tags is one problem, but this is secondary.
The 1st problem is making the codec send tags when its Encoder encodes data from 
the client to the server.  The same Codec Encoder, when working on the server 
side, should not send back the tags.  This is where we need the context 
information.  Also please note one more thing: we use a WALCellCodec whose 
Encoder uses the KVCodec for writing to the WAL. When writing to the WAL, even if 
it is inside a server, it must write tags.  We have to solve this problem first; 
selective sending based on user is the second problem and it might be simpler 
than the 1st IMO.

> Strip tags from KV while sending back to client on reads
> 
>
> Key: HBASE-10322
> URL: https://issues.apache.org/jira/browse/HBASE-10322
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Blocker
> Fix For: 0.98.0, 0.99.0
>
> Attachments: HBASE-10322.patch
>
>
> Right now we have some inconsistency wrt sending back tags on read. We do 
> this in scans when using the Java client (Codec based cell block encoding), but 
> during a Get operation, or when a pure PB based Scan comes in, we are not 
> sending back the tags.  So we have to do one of the fixes below:
> 1. Send back tags in the missing cases also. But sending back the visibility 
> expression/cell ACL is not correct.
> 2. Don't send back tags in any case. This will be a problem when a tool like 
> ExportTool uses the scan to export the table data. We will miss exporting the 
> cell visibility/ACL.
> 3. Send back tags based on some condition. It has to be on a per scan basis. 
> The simplest way is to pass some kind of attribute in Scan which says whether 
> to send back tags or not. But trusting something the scan specifies might not 
> be correct IMO. Then comes the way of checking the user who is doing the 
> scan: only send back tags when an HBase super user is doing the scan. So 
> when a case like the Export Tool's comes, the execution should happen as a 
> super user.
> So IMO we should go with #3.
> Patch coming soon.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10338) Region server fails to start with AccessController coprocessor if installed into RegionServerCoprocessorHost

2014-01-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871617#comment-13871617
 ] 

Hudson commented on HBASE-10338:


FAILURE: Integrated in hbase-0.96 #258 (See 
[https://builds.apache.org/job/hbase-0.96/258/])
HBASE-10338. Region server fails to start with AccessController coprocessor if 
installed into RegionServerCoprocessorHost (Vandana Ayyalasomayajula) 
(apurtell: rev 1558262)
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/SecureTestUtil.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java


> Region server fails to start with AccessController coprocessor if installed 
> into RegionServerCoprocessorHost
> 
>
> Key: HBASE-10338
> URL: https://issues.apache.org/jira/browse/HBASE-10338
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors, regionserver
>Affects Versions: 0.98.0
>Reporter: Vandana Ayyalasomayajula
>Assignee: Vandana Ayyalasomayajula
>Priority: Minor
> Fix For: 0.98.0, 0.96.2, 0.99.0
>
> Attachments: 10338.1-0.96.patch, 10338.1-0.98.patch, 10338.1.patch, 
> 10338.1.patch, HBASE-10338.0.patch
>
>
> A runtime exception is thrown when the AccessController CP is used with the 
> region server. This happens because the region server coprocessor host is 
> created before zookeeper is initialized in the region server.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-9815) Add Histogram representative of row key distribution inside a region.

2014-01-14 Thread Manukranth Kolloju (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manukranth Kolloju updated HBASE-9815:
--

Attachment: Histogram-9815.diff

Attaching the implementation based on the above paper.

> Add Histogram representative of row key distribution inside a region.
> -
>
> Key: HBASE-9815
> URL: https://issues.apache.org/jira/browse/HBASE-9815
> Project: HBase
>  Issue Type: New Feature
>  Components: HFile
>Affects Versions: 0.89-fb
>Reporter: Manukranth Kolloju
>Assignee: Manukranth Kolloju
> Fix For: 0.89-fb
>
> Attachments: Histogram-9815.diff
>
>
> Using histogram information, users can parallelize the scan workload into 
> equal-sized scans based on the sizes estimated from the histogram. 
> This will help systems which are trying to perform queries on top 
> of HBase to do cost-based optimization while scanning. The idea is to keep 
> this histogram information in the HFile trailer and populate it on 
> compaction and flush. 
> The HRegionInterface can expose an API to return the Histogram information of 
> a region, which can be generated by merging histograms of all the hfiles.
> Implementing the histogram on the basis of 
> http://jmlr.org/papers/volume11/ben-haim10a/ben-haim10a.pdf
> http://dl.acm.org/citation.cfm?id=1951376
> and NumericHistogram from hive.
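For readers unfamiliar with the cited approach, here is a minimal sketch of a 
Ben-Haim/Tom-Tov style streaming histogram. This is illustrative only, not the 
attached Histogram-9815.diff; the class and method names are made up. Values are 
added as single-count bins, and whenever the bin budget is exceeded the two 
closest bins are merged, which also gives a natural way to merge per-HFile 
histograms into a per-region one.

{code}
import java.util.TreeMap;

// Illustrative streaming histogram in the style of the Ben-Haim/Tom-Tov paper;
// NOT the attached patch. Keys are bin centroids, values are bin counts.
public class StreamingHistogramSketch {
  private final int maxBins;
  private final TreeMap<Double, Long> bins = new TreeMap<Double, Long>();

  public StreamingHistogramSketch(int maxBins) {
    this.maxBins = maxBins;
  }

  // Add one observation, e.g. a row key mapped to a double.
  public void add(double value) {
    Long old = bins.get(value);
    bins.put(value, old == null ? 1L : old + 1L);
    while (bins.size() > maxBins) {
      mergeClosestBins();
    }
  }

  // Merge another histogram, e.g. when combining the histograms of all HFiles in a region.
  public void merge(StreamingHistogramSketch other) {
    for (java.util.Map.Entry<Double, Long> e : other.bins.entrySet()) {
      Long old = bins.get(e.getKey());
      bins.put(e.getKey(), old == null ? e.getValue() : old + e.getValue());
    }
    while (bins.size() > maxBins) {
      mergeClosestBins();
    }
  }

  // Collapse the two adjacent bins whose centroids are closest, weighting by count.
  private void mergeClosestBins() {
    Double left = null, prev = null;
    double bestGap = Double.MAX_VALUE;
    for (Double c : bins.keySet()) {
      if (prev != null && c - prev < bestGap) {
        bestGap = c - prev;
        left = prev;
      }
      prev = c;
    }
    Double right = bins.higherKey(left);
    long lc = bins.remove(left);
    long rc = bins.remove(right);
    double merged = (left * lc + right * rc) / (lc + rc);
    Long old = bins.get(merged);
    bins.put(merged, old == null ? lc + rc : old + lc + rc);
  }
}
{code}

A scan-parallelizing client could then walk the bins of the merged per-region 
histogram and cut the key range wherever the accumulated count crosses 1/N of the 
total.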



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-7509) Enable RS to query a secondary datanode in parallel, if the primary takes too long

2014-01-14 Thread Liang Xie (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871605#comment-13871605
 ] 

Liang Xie commented on HBASE-7509:
--

Hi [~amitanand], I just assigned it to myself; please feel free to reassign it 
back if you are working on it right now :)
I made a patch on the HDFS side yesterday and hope to do the HBase side stuff 
today. I will put the test results in the current jira (it will probably take one 
or two days).

> Enable RS to query a secondary datanode in parallel, if the primary takes too 
> long
> --
>
> Key: HBASE-7509
> URL: https://issues.apache.org/jira/browse/HBASE-7509
> Project: HBase
>  Issue Type: Improvement
>Reporter: Amitanand Aiyer
>Assignee: Liang Xie
>Priority: Critical
> Attachments: quorumDiffs.tgz
>
>




--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Work started] (HBASE-9815) Add Histogram representative of row key distribution inside a region.

2014-01-14 Thread Manukranth Kolloju (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-9815 started by Manukranth Kolloju.

> Add Histogram representative of row key distribution inside a region.
> -
>
> Key: HBASE-9815
> URL: https://issues.apache.org/jira/browse/HBASE-9815
> Project: HBase
>  Issue Type: New Feature
>  Components: HFile
>Affects Versions: 0.89-fb
>Reporter: Manukranth Kolloju
>Assignee: Manukranth Kolloju
> Fix For: 0.89-fb
>
>
> Using histogram information, users can parallelize the scan workload into 
> equal-sized scans based on the sizes estimated from the histogram. 
> This will help systems which are trying to perform queries on top 
> of HBase to do cost-based optimization while scanning. The idea is to keep 
> this histogram information in the HFile trailer and populate it on 
> compaction and flush. 
> The HRegionInterface can expose an API to return the Histogram information of 
> a region, which can be generated by merging histograms of all the hfiles.
> Implementing the histogram on the basis of 
> http://jmlr.org/papers/volume11/ben-haim10a/ben-haim10a.pdf
> http://dl.acm.org/citation.cfm?id=1951376
> and NumericHistogram from hive.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-10323) Auto detect data block encoding in HFileOutputFormat

2014-01-14 Thread Ishan Chhabra (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chhabra updated HBASE-10323:
--

Attachment: HBASE_10323-trunk-v3.patch
HBASE_10323-0.94.15-v4.patch

Changed the trunk patch to work directly with DataBlockEncoding instead of 
HFileDataBlockEncoder.

> Auto detect data block encoding in HFileOutputFormat
> 
>
> Key: HBASE-10323
> URL: https://issues.apache.org/jira/browse/HBASE-10323
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ishan Chhabra
>Assignee: Ishan Chhabra
> Fix For: 0.99.0
>
> Attachments: HBASE_10323-0.94.15-v1.patch, 
> HBASE_10323-0.94.15-v2.patch, HBASE_10323-0.94.15-v3.patch, 
> HBASE_10323-0.94.15-v4.patch, HBASE_10323-trunk-v1.patch, 
> HBASE_10323-trunk-v2.patch, HBASE_10323-trunk-v3.patch
>
>
> Currently, one has to specify the data block encoding of the table explicitly 
> using the config parameter 
> "hbase.mapreduce.hfileoutputformat.datablock.encoding" when doing a bulk 
> load. This option is easily missed, not documented and also works differently 
> than compression, block size and bloom filter type, which are auto detected. 
> The solution would be to add support to auto detect the data block encoding, 
> similar to the other parameters. 
> The current patch does the following:
> 1. Automatically detects the data block encoding in HFileOutputFormat.
> 2. Keeps the legacy option of manually specifying the data block encoding
> around as a way to override auto detection.
> 3. Moves string conf parsing to the start of the program so that it fails
> fast during startup instead of failing during record writes. It also
> makes the internals of the program type safe.
> 4. Adds missing doc strings and unit tests for the code serializing and
> deserializing config parameters for bloom filter type, block size and
> data block encoding (see the sketch below).
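As a rough illustration of the serialize/deserialize step mentioned in point 4 
(a sketch only, with a made-up config key and helper names, not the attached 
patch): job setup walks the table's families and writes each family's encoding 
into one configuration string, and the record writer parses it back, failing fast 
on malformed input.

{code}
import java.util.LinkedHashMap;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;

// Illustrative only: round-trip a family -> DataBlockEncoding-name map through the job conf.
// The key name and the '&'/'=' separators are assumptions, not the real patch.
public class FamilyEncodingConfSketch {
  static final String KEY = "example.hfileoutputformat.families.datablock.encoding";

  // At job setup time, e.g. {"cf1" -> "FAST_DIFF", "cf2" -> "NONE"}.
  public static void serialize(Configuration conf, Map<String, String> familyToEncoding) {
    StringBuilder sb = new StringBuilder();
    for (Map.Entry<String, String> e : familyToEncoding.entrySet()) {
      if (sb.length() > 0) {
        sb.append('&');
      }
      sb.append(e.getKey()).append('=').append(e.getValue());
    }
    conf.set(KEY, sb.toString());
  }

  // At record-writer setup time; parsing here (not per write) is what "fails fast" means.
  public static Map<String, String> deserialize(Configuration conf) {
    Map<String, String> result = new LinkedHashMap<String, String>();
    String value = conf.get(KEY, "");
    if (value.isEmpty()) {
      return result;
    }
    for (String pair : value.split("&")) {
      String[] parts = pair.split("=", 2);
      if (parts.length != 2) {
        throw new IllegalArgumentException("Malformed family/encoding entry: " + pair);
      }
      result.put(parts[0], parts[1]);
    }
    return result;
  }
}
{code}

In practice family names would need escaping, and the values would be parsed into 
the DataBlockEncoding enum rather than kept as raw strings.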



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Assigned] (HBASE-7509) Enable RS to query a secondary datanode in parallel, if the primary takes too long

2014-01-14 Thread Liang Xie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liang Xie reassigned HBASE-7509:


Assignee: Liang Xie

> Enable RS to query a secondary datanode in parallel, if the primary takes too 
> long
> --
>
> Key: HBASE-7509
> URL: https://issues.apache.org/jira/browse/HBASE-7509
> Project: HBase
>  Issue Type: Improvement
>Reporter: Amitanand Aiyer
>Assignee: Liang Xie
>Priority: Critical
> Attachments: quorumDiffs.tgz
>
>




--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10338) Region server fails to start with AccessController coprocessor if installed into RegionServerCoprocessorHost

2014-01-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871597#comment-13871597
 ] 

Hudson commented on HBASE-10338:


FAILURE: Integrated in HBase-0.98 #81 (See 
[https://builds.apache.org/job/HBase-0.98/81/])
HBASE-10338. Region server fails to start with AccessController coprocessor if 
installed into RegionServerCoprocessorHost (Vandana Ayyalasomayajula) 
(apurtell: rev 1558261)
* 
/hbase/branches/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/SecureTestUtil.java
* 
/hbase/branches/0.98/hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java


> Region server fails to start with AccessController coprocessor if installed 
> into RegionServerCoprocessorHost
> 
>
> Key: HBASE-10338
> URL: https://issues.apache.org/jira/browse/HBASE-10338
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors, regionserver
>Affects Versions: 0.98.0
>Reporter: Vandana Ayyalasomayajula
>Assignee: Vandana Ayyalasomayajula
>Priority: Minor
> Fix For: 0.98.0, 0.96.2, 0.99.0
>
> Attachments: 10338.1-0.96.patch, 10338.1-0.98.patch, 10338.1.patch, 
> 10338.1.patch, HBASE-10338.0.patch
>
>
> A runtime exception is thrown when the AccessController CP is used with the 
> region server. This happens because the region server coprocessor host is 
> created before zookeeper is initialized in the region server.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10322) Strip tags from KV while sending back to client on reads

2014-01-14 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871598#comment-13871598
 ] 

ramkrishna.s.vasudevan commented on HBASE-10322:


bq. The rock bottom simplest way to do this is to just not support tags in RPC 
codecs
But from client to server it should be supported, and in the WAL part it should 
be supported both ways.  For the export tool alone, how do we identify that the 
client is doing an export?  We ended up discussing all this and came up with a 
patch. 
Another suggestion, at least to avoid changes to the codec part, is to have an 
init() in Codec.java.  So once the codec is instantiated we could set this flag 
to true or false based on whether we are on the client or the server.  
So on the server, if the flag says false, the tags are not sent back, but on the 
client they are always written.  This involves changes to Codec.java: it 
introduces an init() method and the decision is taken based on what is set in 
that init method.  We have a patch for this, but again it does not do completely 
what Stack wants; only a part of what Stack wants is solved by it. 
Anyway the user related things are just the same as in the existing patch.  This 
whole stripping of tags is really tricky.
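To make the init() idea above concrete, here is a minimal sketch, assuming a flag 
set once when the codec is initialized; the interface and accessor names are 
placeholders, and this is not the actual Codec API or the HBASE-10322 patch.

{code}
import java.io.IOException;
import java.io.OutputStream;

// Placeholder cell abstraction: real code would work on KeyValue/Cell; these accessors are made up.
interface CellWithTags {
  byte[] buffer();
  int offset();
  int lengthWithoutTags();
  int lengthWithTags();
}

// Sketch of an encoder whose tag behaviour is fixed by an init()-style flag:
// true on the client and for WAL writing, false for server -> client RPC responses.
class TagAwareEncoderSketch {
  private final OutputStream out;
  private final boolean includeTags;

  TagAwareEncoderSketch(OutputStream out, boolean includeTags) {
    this.out = out;
    this.includeTags = includeTags;
  }

  void write(CellWithTags cell) throws IOException {
    int len = includeTags ? cell.lengthWithTags() : cell.lengthWithoutTags();
    out.write(cell.buffer(), cell.offset(), len);
  }
}
{code}

Such a sketch leaves the open question from the thread untouched: the flag still 
has to be derived from context (client vs. server vs. WAL writer), which is where 
the per-user decision of option 3 would be layered on.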


> Strip tags from KV while sending back to client on reads
> 
>
> Key: HBASE-10322
> URL: https://issues.apache.org/jira/browse/HBASE-10322
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Blocker
> Fix For: 0.98.0, 0.99.0
>
> Attachments: HBASE-10322.patch
>
>
> Right now we have some inconsistency wrt sending back tags on read. We do 
> this in scans when using the Java client (Codec based cell block encoding), but 
> during a Get operation, or when a pure PB based Scan comes in, we are not 
> sending back the tags.  So we have to do one of the fixes below:
> 1. Send back tags in the missing cases also. But sending back the visibility 
> expression/cell ACL is not correct.
> 2. Don't send back tags in any case. This will be a problem when a tool like 
> ExportTool uses the scan to export the table data. We will miss exporting the 
> cell visibility/ACL.
> 3. Send back tags based on some condition. It has to be on a per scan basis. 
> The simplest way is to pass some kind of attribute in Scan which says whether 
> to send back tags or not. But trusting something the scan specifies might not 
> be correct IMO. Then comes the way of checking the user who is doing the 
> scan: only send back tags when an HBase super user is doing the scan. So 
> when a case like the Export Tool's comes, the execution should happen as a 
> super user.
> So IMO we should go with #3.
> Patch coming soon.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-9343) Implement stateless scanner for Stargate

2014-01-14 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871599#comment-13871599
 ] 

Nick Dimiduk commented on HBASE-9343:
-

This is really nice, [~avandana]! I think this will make using this API a lot 
more intuitive for web developers. Per Andrew's request, a new section added to 
the rest package javadoc would be fantastic. Do you see deprecation of the 
existing //scanner resources in a future patch?

I do have one question though, which is: how does this interact with the 
existing row-based 
[suffix-globbing|https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/rest/package-summary.html#operation_cell_query_multiple]?
 Are these APIs compatible? Your new goodness should be a superset of that 
functionality, right?

[~apurtell]: Pending some docs, are you keen on letting this slip into your RC?

> Implement stateless scanner for Stargate
> 
>
> Key: HBASE-9343
> URL: https://issues.apache.org/jira/browse/HBASE-9343
> Project: HBase
>  Issue Type: Improvement
>  Components: REST
>Affects Versions: 0.94.11
>Reporter: Vandana Ayyalasomayajula
>Assignee: Vandana Ayyalasomayajula
>Priority: Minor
> Fix For: 0.98.1, 0.99.0
>
> Attachments: HBASE-9343_94.00.patch, HBASE-9343_94.01.patch, 
> HBASE-9343_trunk.00.patch, HBASE-9343_trunk.01.patch, 
> HBASE-9343_trunk.01.patch, HBASE-9343_trunk.02.patch, 
> HBASE-9343_trunk.03.patch, HBASE-9343_trunk.04.patch
>
>
> The current scanner implementation stores state and hence is not 
> very suitable for REST server failure scenarios. This JIRA proposes to 
> implement a stateless scanner. In the first version of the patch, a new 
> resource class "ScanResource" has been added and all the scan parameters will 
> be specified as query params. 
> The following are the scan parameters
> startrow -  The start row for the scan.
> endrow - The end row for the scan.
> columns - The columns to scan. 
> starttime, endtime - To only retrieve columns within a specific range of 
> version timestamps,both start and end time must be specified.
> maxversions  - To limit the number of versions of each column to be returned.
> batchsize - To limit the maximum number of values returned for each call to 
> next().
> limit - The number of rows to return in the scan operation.
>  More on start row, end row and limit parameters.
> 1. If start row, end row and limit not specified, then the whole table will 
> be scanned.
> 2. If start row and limit (say N) is specified, then the scan operation will 
> return N rows from the start row specified.
> 3. If only limit parameter is specified, then the scan operation will return 
> N rows from the start of the table.
> 4. If limit and end row are specified, then the scan operation will return N 
> rows from start of table till the end row. If the end row is 
> reached before N rows ( say M and M < N ), then M rows will be returned to 
> the user.
> 5. If start row, end row and limit (say N) are specified and N < the number 
> of rows between start row and end row, then N rows from the start row 
> will be returned to the user. If N > the number of rows between start row and 
> end row (say M), then M rows will be returned to the 
> user.
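As a purely illustrative usage sketch (the resource path below is a placeholder; 
see the patch for the actual ScanResource mapping), a client would encode the 
whole scan in the query string, so no server-side scanner state has to survive a 
REST server failure:

{code}
import java.net.URLEncoder;

// Illustrative only: assemble a stateless-scan URL from the parameters listed above.
// The "/usertable/scan" path is an assumption, not necessarily what ScanResource registers.
public class StatelessScanUrlSketch {
  public static void main(String[] args) throws Exception {
    String base = "http://resthost:8080/usertable/scan";
    String query = "startrow=" + URLEncoder.encode("row0100", "UTF-8")
        + "&endrow=" + URLEncoder.encode("row0200", "UTF-8")
        + "&columns=" + URLEncoder.encode("cf:qual1,cf:qual2", "UTF-8")
        + "&maxversions=1"
        + "&batchsize=100"
        + "&limit=50";
    System.out.println(base + "?" + query);
  }
}
{code}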



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10314) Add Chaos Monkey that doesn't touch the master

2014-01-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871560#comment-13871560
 ] 

Hadoop QA commented on HBASE-10314:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12623001/HBASE-10314-0.patch
  against trunk revision .
  ATTACHMENT ID: 12623001

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the trunk's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8429//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8429//artifact/trunk/patchprocess/patchReleaseAuditProblems.txt
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8429//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8429//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8429//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8429//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8429//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8429//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8429//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8429//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8429//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8429//console

This message is automatically generated.

> Add Chaos Monkey that doesn't touch the master
> --
>
> Key: HBASE-10314
> URL: https://issues.apache.org/jira/browse/HBASE-10314
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.98.0, 0.99.0, 0.96.1.1
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HBASE-10314-0.patch, HBASE-10314-0.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10334) RegionServer links in table.jsp is broken

2014-01-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871558#comment-13871558
 ] 

Hudson commented on HBASE-10334:


SUCCESS: Integrated in HBase-0.98-on-Hadoop-1.1 #74 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/74/])
HBASE-10334 RegionServer links in table.jsp is broken (enis: rev 1558239)
* 
/hbase/branches/0.98/hbase-server/src/main/resources/hbase-webapps/master/table.jsp


> RegionServer links in table.jsp is broken
> -
>
> Key: HBASE-10334
> URL: https://issues.apache.org/jira/browse/HBASE-10334
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 0.98.0, 0.99.0
>
> Attachments: hbase-10334_v1.patch
>
>
> The links to RSs seem to be broken in table.jsp after HBASE-9892. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10334) RegionServer links in table.jsp is broken

2014-01-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871544#comment-13871544
 ] 

Hudson commented on HBASE-10334:


FAILURE: Integrated in HBase-TRUNK #4819 (See 
[https://builds.apache.org/job/HBase-TRUNK/4819/])
HBASE-10334 RegionServer links in table.jsp is broken (enis: rev 1558238)
* /hbase/trunk/hbase-server/src/main/resources/hbase-webapps/master/table.jsp


> RegionServer links in table.jsp is broken
> -
>
> Key: HBASE-10334
> URL: https://issues.apache.org/jira/browse/HBASE-10334
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 0.98.0, 0.99.0
>
> Attachments: hbase-10334_v1.patch
>
>
> The links to RSs seem to be broken in table.jsp after HBASE-9892. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10156) Fix up the HBASE-8755 slowdown when low contention

2014-01-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871542#comment-13871542
 ] 

Hadoop QA commented on HBASE-10156:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12623024/10156v10.txt
  against trunk revision .
  ATTACHMENT ID: 12623024

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 21 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.mapreduce.TestHLogRecordReader

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.regionserver.wal.TestLogRolling.testLogRollOnDatanodeDeath(TestLogRolling.java:368)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8428//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8428//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8428//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8428//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8428//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8428//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8428//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8428//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8428//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8428//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8428//console

This message is automatically generated.

> Fix up the HBASE-8755 slowdown when low contention
> --
>
> Key: HBASE-10156
> URL: https://issues.apache.org/jira/browse/HBASE-10156
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Reporter: stack
>Assignee: stack
> Attachments: 10156.txt, 10156v10.txt, 10156v2.txt, 10156v3.txt, 
> 10156v4.txt, 10156v5.txt, 10156v6.txt, 10156v7.txt, 10156v9.txt, 
> Disrupting.java
>
>
> HBASE-8755 slows our writes when only a few clients.  Fix.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10341) TestAssignmentManagerOnCluster fails occasionally

2014-01-14 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871540#comment-13871540
 ] 

Andrew Purtell commented on HBASE-10341:


Bisecting recent commits

> TestAssignmentManagerOnCluster fails occasionally
> -
>
> Key: HBASE-10341
> URL: https://issues.apache.org/jira/browse/HBASE-10341
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: Andrew Purtell
> Fix For: 0.98.0, 0.99.0
>
>
> TestAssignmentManagerOnCluster has recently started failing occasionally in 
> 0.98 branch unit test runs. No failure trace available yet.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10341) TestAssignmentManagerOnCluster fails occasionally

2014-01-14 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871539#comment-13871539
 ] 

Andrew Purtell commented on HBASE-10341:


{noformat}
---
Test set: org.apache.hadoop.hbase.master.TestAssignmentManagerOnCluster
---
Tests run: 13, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 63.101 sec <<< 
FAILURE!
testAssignRegionOnRestartedServer(org.apache.hadoop.hbase.master.TestAssignmentManagerOnCluster)
  Time elapsed: 43.065 sec  <<< FAILURE!
junit.framework.AssertionFailedError: Waiting timed out after [40,000] msec
at junit.framework.Assert.fail(Assert.java:57)
at org.apache.hadoop.hbase.Waiter.waitFor(Waiter.java:193)
at org.apache.hadoop.hbase.Waiter.waitFor(Waiter.java:128)
at 
org.apache.hadoop.hbase.HBaseTestingUtility.waitFor(HBaseTestingUtility.java:3243)
at 
org.apache.hadoop.hbase.master.TestAssignmentManagerOnCluster.testAssignRegionOnRestartedServer(TestAssignmentManagerOnCluster.java:181)
{noformat}

> TestAssignmentManagerOnCluster fails occasionally
> -
>
> Key: HBASE-10341
> URL: https://issues.apache.org/jira/browse/HBASE-10341
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: Andrew Purtell
> Fix For: 0.98.0, 0.99.0
>
>
> TestAssignmentManagerOnCluster has recently started failing occasionally in 
> 0.98 branch unit test runs. No failure trace available yet.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-9343) Implement stateless scanner for Stargate

2014-01-14 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871536#comment-13871536
 ] 

Andrew Purtell commented on HBASE-9343:
---

The only thing I would ask is an update to the documentation on the new 
behaviors.

> Implement stateless scanner for Stargate
> 
>
> Key: HBASE-9343
> URL: https://issues.apache.org/jira/browse/HBASE-9343
> Project: HBase
>  Issue Type: Improvement
>  Components: REST
>Affects Versions: 0.94.11
>Reporter: Vandana Ayyalasomayajula
>Assignee: Vandana Ayyalasomayajula
>Priority: Minor
> Fix For: 0.98.1, 0.99.0
>
> Attachments: HBASE-9343_94.00.patch, HBASE-9343_94.01.patch, 
> HBASE-9343_trunk.00.patch, HBASE-9343_trunk.01.patch, 
> HBASE-9343_trunk.01.patch, HBASE-9343_trunk.02.patch, 
> HBASE-9343_trunk.03.patch, HBASE-9343_trunk.04.patch
>
>
> The current scanner implementation stores state and hence is not 
> very suitable for REST server failure scenarios. This JIRA proposes to 
> implement a stateless scanner. In the first version of the patch, a new 
> resource class "ScanResource" has been added and all the scan parameters will 
> be specified as query params. 
> The following are the scan parameters
> startrow -  The start row for the scan.
> endrow - The end row for the scan.
> columns - The columns to scan. 
> starttime, endtime - To only retrieve columns within a specific range of 
> version timestamps,both start and end time must be specified.
> maxversions  - To limit the number of versions of each column to be returned.
> batchsize - To limit the maximum number of values returned for each call to 
> next().
> limit - The number of rows to return in the scan operation.
>  More on start row, end row and limit parameters.
> 1. If start row, end row and limit not specified, then the whole table will 
> be scanned.
> 2. If start row and limit (say N) is specified, then the scan operation will 
> return N rows from the start row specified.
> 3. If only limit parameter is specified, then the scan operation will return 
> N rows from the start of the table.
> 4. If limit and end row are specified, then the scan operation will return N 
> rows from start of table till the end row. If the end row is 
> reached before N rows ( say M and M < N ), then M rows will be returned to 
> the user.
> 5. If start row, end row and limit (say N) are specified and N < the number 
> of rows between start row and end row, then N rows from the start row 
> will be returned to the user. If N > the number of rows between start row and 
> end row (say M), then M rows will be returned to the 
> user.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Resolved] (HBASE-10339) Mutation::getFamilyMap method was lost in 98

2014-01-14 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin resolved HBASE-10339.
--

Resolution: Fixed

committed to 98 and trunk

> Mutation::getFamilyMap method was lost in 98
> 
>
> Key: HBASE-10339
> URL: https://issues.apache.org/jira/browse/HBASE-10339
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0, 0.99.0
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
> Fix For: 0.98.0, 0.99.0
>
> Attachments: HBASE-10339.patch
>
>
> When backward compat work was done in several jiras, this method was missed. 
> First the return type was changed, then the method was renamed so as not to 
> break the callers via the new return type, but the legacy method was never 
> re-added as far as I can see.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-9343) Implement stateless scanner for Stargate

2014-01-14 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871535#comment-13871535
 ] 

Andrew Purtell commented on HBASE-9343:
---

We have been neglecting this issue, I apologize.

I am inclined to commit this on the grounds of having had several review cycles 
and being driven by user need. Anyone disagree?

> Implement stateless scanner for Stargate
> 
>
> Key: HBASE-9343
> URL: https://issues.apache.org/jira/browse/HBASE-9343
> Project: HBase
>  Issue Type: Improvement
>  Components: REST
>Affects Versions: 0.94.11
>Reporter: Vandana Ayyalasomayajula
>Assignee: Vandana Ayyalasomayajula
>Priority: Minor
> Fix For: 0.98.1, 0.99.0
>
> Attachments: HBASE-9343_94.00.patch, HBASE-9343_94.01.patch, 
> HBASE-9343_trunk.00.patch, HBASE-9343_trunk.01.patch, 
> HBASE-9343_trunk.01.patch, HBASE-9343_trunk.02.patch, 
> HBASE-9343_trunk.03.patch, HBASE-9343_trunk.04.patch
>
>
> The current scanner implementation stores state and hence is not 
> very suitable for REST server failure scenarios. This JIRA proposes to 
> implement a stateless scanner. In the first version of the patch, a new 
> resource class "ScanResource" has been added and all the scan parameters will 
> be specified as query params. 
> The following are the scan parameters
> startrow -  The start row for the scan.
> endrow - The end row for the scan.
> columns - The columns to scan. 
> starttime, endtime - To only retrieve columns within a specific range of 
> version timestamps,both start and end time must be specified.
> maxversions  - To limit the number of versions of each column to be returned.
> batchsize - To limit the maximum number of values returned for each call to 
> next().
> limit - The number of rows to return in the scan operation.
>  More on start row, end row and limit parameters.
> 1. If start row, end row and limit not specified, then the whole table will 
> be scanned.
> 2. If start row and limit (say N) is specified, then the scan operation will 
> return N rows from the start row specified.
> 3. If only limit parameter is specified, then the scan operation will return 
> N rows from the start of the table.
> 4. If limit and end row are specified, then the scan operation will return N 
> rows from start of table till the end row. If the end row is 
> reached before N rows ( say M and M < N ), then M rows will be returned to 
> the user.
> 5. If start row, end row and limit (say N) are specified and N < the number 
> of rows between start row and end row, then N rows from the start row 
> will be returned to the user. If N > the number of rows between start row and 
> end row (say M), then M rows will be returned to the 
> user.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HBASE-10341) TestAssignmentManagerOnCluster fails occasionally

2014-01-14 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-10341:
--

 Summary: TestAssignmentManagerOnCluster fails occasionally
 Key: HBASE-10341
 URL: https://issues.apache.org/jira/browse/HBASE-10341
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Andrew Purtell
 Fix For: 0.98.0, 0.99.0


TestAssignmentManagerOnCluster has recently started failing occasionally in 
0.98 branch unit test runs. No failure trace available yet.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Assigned] (HBASE-10282) We can't assure that the first ZK server is active server in MiniZooKeeperCluster

2014-01-14 Thread chendihao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chendihao reassigned HBASE-10282:
-

Assignee: chendihao

> We can't assure that the first ZK server is active server in 
> MiniZooKeeperCluster
> -
>
> Key: HBASE-10282
> URL: https://issues.apache.org/jira/browse/HBASE-10282
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.3
>Reporter: chendihao
>Assignee: chendihao
>Priority: Minor
>
> Thanks to HBASE-3052, we're able to run multiple zk servers in the minicluster. 
> However, it's confusing to keep the variable activeZKServerIndex at zero and 
> ensure the first zk server is always the active one. I think returning the 
> first server's client port is for testing, and it seems that we can directly 
> return the first item of the list. Anyway, the concept of "active" here is 
> not the same as zk's. 
> It's confusing when I read the code, so I think we should fix it.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10282) We can't assure that the first ZK server is active server in MiniZooKeeperCluster

2014-01-14 Thread chendihao (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871513#comment-13871513
 ] 

chendihao commented on HBASE-10282:
---

Thanks [~stack], we (Xiaomi) will make a patch to eliminate the confusion. Can we 
reduce those two functions into a single killRandomZooKeeperServer(), since they 
seem to have the same effect? Before doing that, we have to fix HBASE-10283, 
otherwise killing the first one will cause other problems.

> We can't assure that the first ZK server is active server in 
> MiniZooKeeperCluster
> -
>
> Key: HBASE-10282
> URL: https://issues.apache.org/jira/browse/HBASE-10282
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.3
>Reporter: chendihao
>Priority: Minor
>
> Thanks to HBASE-3052, we're able to run multiple zk servers in the minicluster. 
> However, it's confusing to keep the variable activeZKServerIndex at zero and 
> ensure the first zk server is always the active one. I think returning the 
> first server's client port is for testing, and it seems that we can directly 
> return the first item of the list. Anyway, the concept of "active" here is 
> not the same as zk's. 
> It's confusing when I read the code, so I think we should fix it.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HBASE-9804) Startup option for holding user table deployment

2014-01-14 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-9804:
--

Affects Version/s: 0.98.0
Fix Version/s: 0.99.0
   0.98.1

Maybe for 0.98.1?

> Startup option for holding user table deployment
> 
>
> Key: HBASE-9804
> URL: https://issues.apache.org/jira/browse/HBASE-9804
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 0.98.0
>Reporter: Andrew Purtell
>Priority: Minor
> Fix For: 0.98.1, 0.99.0
>
>
> Introduce a boolean configuration option, false by default, that if set to 
> 'true' will cause the master to set all user tables to disabled state at 
> startup. From there, individual tables can be onlined as normal. Add a new 
> admin method HBA#enableAll for convenience, also a new HBA#disableAll for 
> symmetry. Add shell support for sending those new admin commands.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-9804) Startup option for holding user table deployment

2014-01-14 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871506#comment-13871506
 ] 

Andrew Purtell commented on HBASE-9804:
---

On HBASE-6873, this was mentioned as a possibly useful remediation option for 
problems with user tables affecting cluster stability.

> Startup option for holding user table deployment
> 
>
> Key: HBASE-9804
> URL: https://issues.apache.org/jira/browse/HBASE-9804
> Project: HBase
>  Issue Type: New Feature
>Reporter: Andrew Purtell
>Priority: Minor
>
> Introduce a boolean configuration option, false by default, that if set to 
> 'true' will cause the master to set all user tables to disabled state at 
> startup. From there, individual tables can be onlined as normal. Add a new 
> admin method HBA#enableAll for convenience, also a new HBA#disableAll for 
> symmetry. Add shell support for sending those new admin commands.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-6873) Clean up Coprocessor loading failure handling

2014-01-14 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871504#comment-13871504
 ] 

Andrew Purtell commented on HBASE-6873:
---

Found it, HBASE-9804

> Clean up Coprocessor loading failure handling
> -
>
> Key: HBASE-6873
> URL: https://issues.apache.org/jira/browse/HBASE-6873
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, regionserver
>Affects Versions: 0.98.0
>Reporter: David Arthur
>Assignee: Andrew Purtell
>Priority: Blocker
> Fix For: 0.98.0, 0.99.0
>
> Attachments: 6873.patch, 6873.patch, 6873.patch, 6873.patch, 
> 6873.patch, 6873.patch
>
>
> When registering a coprocessor with a missing dependency, the regionserver 
> gets stuck in an infinite fail loop. Restarting the regionserver and/or 
> master has no effect.
> E.g., 
> Load coprocessor from my-coproc.jar, that uses an external dependency (kafka) 
> that is not included with HBase.
> {code}
> 12/09/24 13:13:15 INFO handler.OpenRegionHandler: Opening of region {NAME => 
> 'documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b.', STARTKEY => '', 
> ENDKEY => '', ENCODED => 6d1e1b7bb93486f096173bd401e8ef6b,} failed, marking 
> as FAILED_OPEN in ZK
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Attempting to transition node 
> 6d1e1b7bb93486f096173bd401e8ef6b from RS_ZK_REGION_OPENING to 
> RS_ZK_REGION_FAILED_OPEN
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Successfully transitioned node 
> 6d1e1b7bb93486f096173bd401e8ef6b from RS_ZK_REGION_OPENING to 
> RS_ZK_REGION_FAILED_OPEN
> 12/09/24 13:13:15 INFO regionserver.HRegionServer: Received request to open 
> region: documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b.
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Attempting to transition node 
> 6d1e1b7bb93486f096173bd401e8ef6b from M_ZK_REGION_OFFLINE to 
> RS_ZK_REGION_OPENING
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Successfully transitioned node 
> 6d1e1b7bb93486f096173bd401e8ef6b from M_ZK_REGION_OFFLINE to 
> RS_ZK_REGION_OPENING
> 12/09/24 13:13:15 DEBUG regionserver.HRegion: Opening region: {NAME => 
> 'documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b.', STARTKEY => '', 
> ENDKEY => '', ENCODED => 6d1e1b7bb93486f096173bd401e8ef6b,}
> 12/09/24 13:13:15 INFO regionserver.HRegion: Setting up tabledescriptor 
> config now ...
> 12/09/24 13:13:15 INFO coprocessor.CoprocessorHost: Class 
> com.mycompany.hbase.documents.DocumentObserverCoprocessor needs to be loaded 
> from a file - file:/path/to/my-coproc.jar.
> 12/09/24 13:13:16 ERROR handler.OpenRegionHandler: Failed open of 
> region=documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b., starting 
> to roll back the global memstore size.
> java.lang.IllegalStateException: Could not instantiate a region instance.
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:3595)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3733)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:332)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:108)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:680)
> Caused by: java.lang.reflect.InvocationTargetException
>   at sun.reflect.GeneratedConstructorAccessor15.newInstance(Unknown 
> Source)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:3592)
>   ... 7 more
> Caused by: java.lang.NoClassDefFoundError: 
> kafka/common/NoBrokersForPartitionException
>   at java.lang.Class.getDeclaredConstructors0(Native Method)
>   at java.lang.Class.privateGetDeclaredConstructors(Class.java:2389)
>   at java.lang.Class.getConstructor0(Class.java:2699)
>   at java.lang.Class.newInstance0(Class.java:326)
>   at java.lang.Class.newInstance(Class.java:308)
>   at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.loadInstance(CoprocessorHost.java:254)
>   at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.load(CoprocessorHost.java:227)
>   at 
> org.apache.hadoop.hbase

[jira] [Commented] (HBASE-10335) AuthFailedException in zookeeper may block replication forever

2014-01-14 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871497#comment-13871497
 ] 

Andrew Purtell commented on HBASE-10335:


+1 for 0.98. Agree it belongs everywhere.

> AuthFailedException in zookeeper may block replication forever
> --
>
> Key: HBASE-10335
> URL: https://issues.apache.org/jira/browse/HBASE-10335
> Project: HBase
>  Issue Type: Bug
>  Components: Replication, security
>Affects Versions: 0.94.15, 0.99.0
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Blocker
> Attachments: HBASE-10335-v1.diff, HBASE-10335-v2.diff
>
>
> ReplicationSource will rechoose sinks when it encounters exceptions while 
> shipping edits to the current sink. But if the zookeeper client for the peer 
> cluster goes to the AUTH_FAILED state, the ReplicationSource will always get 
> AuthFailedException. The ReplicationSource does not reconnect the peer, 
> because reconnectPeer only handles ConnectionLossException and 
> SessionExpiredException. As a result, the replication will print this log: 
> {quote}
> 2014-01-14,12:07:06,892 INFO 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource: Getting 0 
> rs from peer cluster # 20
> 2014-01-14,12:07:06,892 INFO 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource: Slave 
> cluster looks down: 20 has 0 region servers
> {quote}
> and be blocked forever.
> I think other places may have the same problem of not handling 
> AuthFailedException from zookeeper, e.g. HBASE-8675.
> [~apurtell]
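Illustratively (not the attached HBASE-10335 diffs; names below are placeholders), 
the fix direction described above is to treat AUTH_FAILED like the other 
unrecoverable session states when deciding whether to rebuild the peer 
connection:

{code}
import org.apache.zookeeper.KeeperException;

// Sketch of the reconnect decision described above; not the actual ReplicationSource code.
class PeerReconnectSketch {
  // Decide whether an exception from the peer cluster's ZK client warrants reconnecting the peer.
  static boolean shouldReconnectPeer(Exception e) {
    return e instanceof KeeperException.ConnectionLossException
        || e instanceof KeeperException.SessionExpiredException
        // HBASE-10335: AUTH_FAILED never recovers on its own, so rebuild the ZK client here too.
        || e instanceof KeeperException.AuthFailedException;
  }
}
{code}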



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10322) Strip tags from KV while sending back to client on reads

2014-01-14 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871492#comment-13871492
 ] 

Andrew Purtell commented on HBASE-10322:


Our bottom line, in my opinion, is that tags don't end up in the hands of those 
who shouldn't see them.

The rock bottom simplest way to do this is to just not support tags in RPC 
codecs. Maybe we can have a separate class that keeps them for the Export tool 
specifically? Import is no problem if the user, presumably privileged, is 
building HFiles and therefore the cells within them directly. Accumulo has the 
same approach to whole file imports - no checking done, YMMV.



> Strip tags from KV while sending back to client on reads
> 
>
> Key: HBASE-10322
> URL: https://issues.apache.org/jira/browse/HBASE-10322
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Blocker
> Fix For: 0.98.0, 0.99.0
>
> Attachments: HBASE-10322.patch
>
>
> Right now we have some inconsistency wrt sending back tags on read. We do 
> this in scans when using the Java client (Codec based cell block encoding), but 
> during a Get operation, or when a pure PB based Scan comes in, we are not 
> sending back the tags.  So we have to do one of the fixes below:
> 1. Send back tags in the missing cases also. But sending back the visibility 
> expression/cell ACL is not correct.
> 2. Don't send back tags in any case. This will be a problem when a tool like 
> ExportTool uses the scan to export the table data. We will miss exporting the 
> cell visibility/ACL.
> 3. Send back tags based on some condition. It has to be on a per scan basis. 
> The simplest way is to pass some kind of attribute in Scan which says whether 
> to send back tags or not. But trusting something the scan specifies might not 
> be correct IMO. Then comes the way of checking the user who is doing the 
> scan: only send back tags when an HBase super user is doing the scan. So 
> when a case like the Export Tool's comes, the execution should happen as a 
> super user.
> So IMO we should go with #3.
> Patch coming soon.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10335) AuthFailedException in zookeeper may block replication forever

2014-01-14 Thread Liang Xie (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871494#comment-13871494
 ] 

Liang Xie commented on HBASE-10335:
---

+1
Hi [~lhofhansl], [~apurtell], do you want it in the 0.94/0.98 branches?

> AuthFailedException in zookeeper may block replication forever
> --
>
> Key: HBASE-10335
> URL: https://issues.apache.org/jira/browse/HBASE-10335
> Project: HBase
>  Issue Type: Bug
>  Components: Replication, security
>Affects Versions: 0.94.15, 0.99.0
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Blocker
> Attachments: HBASE-10335-v1.diff, HBASE-10335-v2.diff
>
>
> ReplicationSource will rechoose sinks when it encounters exceptions while 
> shipping edits to the current sink. But if the zookeeper client for the peer 
> cluster goes to the AUTH_FAILED state, the ReplicationSource will always get 
> AuthFailedException. The ReplicationSource does not reconnect the peer, 
> because reconnectPeer only handles ConnectionLossException and 
> SessionExpiredException. As a result, the replication will print this log: 
> {quote}
> 2014-01-14,12:07:06,892 INFO 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource: Getting 0 
> rs from peer cluster # 20
> 2014-01-14,12:07:06,892 INFO 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource: Slave 
> cluster looks down: 20 has 0 region servers
> {quote}
> and be blocked forever.
> I think other places may have the same problem of not handling 
> AuthFailedException from zookeeper, e.g. HBASE-8675.
> [~apurtell]



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-6873) Clean up Coprocessor loading failure handling

2014-01-14 Thread Gary Helmling (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871491#comment-13871491
 ] 

Gary Helmling commented on HBASE-6873:
--

bq. I was just looking for an issue I filed a while back, an option for pausing 
the master after the deployment of system tables but before deployment of user 
tables, but can't find it offhand. If meta is online it should be possible to 
alter the HTD/HCDs.

Yeah, that would be a good remediation option here and for other cases as well. 
 Constraining the failure domain to only the table the coprocessor is 
configured on would still be nice, but as you point out there is potential for 
other abuse that wouldn't be constrained anyway.

> Clean up Coprocessor loading failure handling
> -
>
> Key: HBASE-6873
> URL: https://issues.apache.org/jira/browse/HBASE-6873
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, regionserver
>Affects Versions: 0.98.0
>Reporter: David Arthur
>Assignee: Andrew Purtell
>Priority: Blocker
> Fix For: 0.98.0, 0.99.0
>
> Attachments: 6873.patch, 6873.patch, 6873.patch, 6873.patch, 
> 6873.patch, 6873.patch
>
>
> When registering a coprocessor with a missing dependency, the regionserver 
> gets stuck in an infinite fail loop. Restarting the regionserver and/or 
> master has no effect.
> E.g., 
> Load a coprocessor from my-coproc.jar, which uses an external dependency (kafka) 
> that is not included with HBase.
> {code}
> 12/09/24 13:13:15 INFO handler.OpenRegionHandler: Opening of region {NAME => 
> 'documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b.', STARTKEY => '', 
> ENDKEY => '', ENCODED => 6d1e1b7bb93486f096173bd401e8ef6b,} failed, marking 
> as FAILED_OPEN in ZK
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Attempting to transition node 
> 6d1e1b7bb93486f096173bd401e8ef6b from RS_ZK_REGION_OPENING to 
> RS_ZK_REGION_FAILED_OPEN
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Successfully transitioned node 
> 6d1e1b7bb93486f096173bd401e8ef6b from RS_ZK_REGION_OPENING to 
> RS_ZK_REGION_FAILED_OPEN
> 12/09/24 13:13:15 INFO regionserver.HRegionServer: Received request to open 
> region: documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b.
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Attempting to transition node 
> 6d1e1b7bb93486f096173bd401e8ef6b from M_ZK_REGION_OFFLINE to 
> RS_ZK_REGION_OPENING
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Successfully transitioned node 
> 6d1e1b7bb93486f096173bd401e8ef6b from M_ZK_REGION_OFFLINE to 
> RS_ZK_REGION_OPENING
> 12/09/24 13:13:15 DEBUG regionserver.HRegion: Opening region: {NAME => 
> 'documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b.', STARTKEY => '', 
> ENDKEY => '', ENCODED => 6d1e1b7bb93486f096173bd401e8ef6b,}
> 12/09/24 13:13:15 INFO regionserver.HRegion: Setting up tabledescriptor 
> config now ...
> 12/09/24 13:13:15 INFO coprocessor.CoprocessorHost: Class 
> com.mycompany.hbase.documents.DocumentObserverCoprocessor needs to be loaded 
> from a file - file:/path/to/my-coproc.jar.
> 12/09/24 13:13:16 ERROR handler.OpenRegionHandler: Failed open of 
> region=documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b., starting 
> to roll back the global memstore size.
> java.lang.IllegalStateException: Could not instantiate a region instance.
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:3595)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3733)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:332)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:108)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:680)
> Caused by: java.lang.reflect.InvocationTargetException
>   at sun.reflect.GeneratedConstructorAccessor15.newInstance(Unknown 
> Source)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:3592)
>   ... 7 more
> Caused by: java.lang.NoClassDefFoundError: 
> kafka/common/NoBrokersForPartitionException
>   at java.lang.Class.getDec

[jira] [Commented] (HBASE-10334) RegionServer links in table.jsp is broken

2014-01-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871490#comment-13871490
 ] 

Hudson commented on HBASE-10334:


SUCCESS: Integrated in HBase-0.98 #80 (See 
[https://builds.apache.org/job/HBase-0.98/80/])
HBASE-10334 RegionServer links in table.jsp is broken (enis: rev 1558239)
* 
/hbase/branches/0.98/hbase-server/src/main/resources/hbase-webapps/master/table.jsp


> RegionServer links in table.jsp is broken
> -
>
> Key: HBASE-10334
> URL: https://issues.apache.org/jira/browse/HBASE-10334
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 0.98.0, 0.99.0
>
> Attachments: hbase-10334_v1.patch
>
>
> The links to RS's seem to be broken in table.jsp after HBASE-9892. 



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Resolved] (HBASE-10338) Region server fails to start with AccessController coprocessor if installed into RegionServerCoprocessorHost

2014-01-14 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-10338.


   Resolution: Fixed
Fix Version/s: 0.96.2
 Hadoop Flags: Reviewed

Committed to trunk, 0.98, and 0.96 branches. Huge thanks for reporting this and 
providing a patch [~avandana]

> Region server fails to start with AccessController coprocessor if installed 
> into RegionServerCoprocessorHost
> 
>
> Key: HBASE-10338
> URL: https://issues.apache.org/jira/browse/HBASE-10338
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors, regionserver
>Affects Versions: 0.98.0
>Reporter: Vandana Ayyalasomayajula
>Assignee: Vandana Ayyalasomayajula
>Priority: Minor
> Fix For: 0.98.0, 0.96.2, 0.99.0
>
> Attachments: 10338.1-0.96.patch, 10338.1-0.98.patch, 10338.1.patch, 
> 10338.1.patch, HBASE-10338.0.patch
>
>
> A runtime exception is thrown when the AccessController CP is used with the 
> region server. This happens because the region server coprocessor host is 
> created before zookeeper is initialized in the region server.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-6873) Clean up Coprocessor loading failure handling

2014-01-14 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871482#comment-13871482
 ] 

Andrew Purtell commented on HBASE-6873:
---

Yeah but at least here someone is following along. 

bq. That doesn't quite work for coprocessors configured as table attributes 
(more like wipe the table dir from HDFS),

I was just looking for an issue I filed a while back, an option for pausing the 
master after the deployment of system tables but before deployment of user 
tables, but can't find it offhand. If meta is online it should be possible to 
alter the HTD/HCDs. 

> Clean up Coprocessor loading failure handling
> -
>
> Key: HBASE-6873
> URL: https://issues.apache.org/jira/browse/HBASE-6873
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, regionserver
>Affects Versions: 0.98.0
>Reporter: David Arthur
>Assignee: Andrew Purtell
>Priority: Blocker
> Fix For: 0.98.0, 0.99.0
>
> Attachments: 6873.patch, 6873.patch, 6873.patch, 6873.patch, 
> 6873.patch, 6873.patch
>
>
> When registering a coprocessor with a missing dependency, the regionserver 
> gets stuck in an infinite fail loop. Restarting the regionserver and/or 
> master has no effect.
> E.g., 
> Load a coprocessor from my-coproc.jar, which uses an external dependency (kafka) 
> that is not included with HBase.
> {code}
> 12/09/24 13:13:15 INFO handler.OpenRegionHandler: Opening of region {NAME => 
> 'documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b.', STARTKEY => '', 
> ENDKEY => '', ENCODED => 6d1e1b7bb93486f096173bd401e8ef6b,} failed, marking 
> as FAILED_OPEN in ZK
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Attempting to transition node 
> 6d1e1b7bb93486f096173bd401e8ef6b from RS_ZK_REGION_OPENING to 
> RS_ZK_REGION_FAILED_OPEN
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Successfully transitioned node 
> 6d1e1b7bb93486f096173bd401e8ef6b from RS_ZK_REGION_OPENING to 
> RS_ZK_REGION_FAILED_OPEN
> 12/09/24 13:13:15 INFO regionserver.HRegionServer: Received request to open 
> region: documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b.
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Attempting to transition node 
> 6d1e1b7bb93486f096173bd401e8ef6b from M_ZK_REGION_OFFLINE to 
> RS_ZK_REGION_OPENING
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Successfully transitioned node 
> 6d1e1b7bb93486f096173bd401e8ef6b from M_ZK_REGION_OFFLINE to 
> RS_ZK_REGION_OPENING
> 12/09/24 13:13:15 DEBUG regionserver.HRegion: Opening region: {NAME => 
> 'documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b.', STARTKEY => '', 
> ENDKEY => '', ENCODED => 6d1e1b7bb93486f096173bd401e8ef6b,}
> 12/09/24 13:13:15 INFO regionserver.HRegion: Setting up tabledescriptor 
> config now ...
> 12/09/24 13:13:15 INFO coprocessor.CoprocessorHost: Class 
> com.mycompany.hbase.documents.DocumentObserverCoprocessor needs to be loaded 
> from a file - file:/path/to/my-coproc.jar.
> 12/09/24 13:13:16 ERROR handler.OpenRegionHandler: Failed open of 
> region=documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b., starting 
> to roll back the global memstore size.
> java.lang.IllegalStateException: Could not instantiate a region instance.
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:3595)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3733)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:332)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:108)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:680)
> Caused by: java.lang.reflect.InvocationTargetException
>   at sun.reflect.GeneratedConstructorAccessor15.newInstance(Unknown 
> Source)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:3592)
>   ... 7 more
> Caused by: java.lang.NoClassDefFoundError: 
> kafka/common/NoBrokersForPartitionException
>   at java.lang.Class.getDeclaredConstructors0(Native Method)
>   at java.lang.Class.privateGetDeclaredConstructors(Class.java:2389)
>

[jira] [Commented] (HBASE-6873) Clean up Coprocessor loading failure handling

2014-01-14 Thread Gary Helmling (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871480#comment-13871480
 ] 

Gary Helmling commented on HBASE-6873:
--

bq. Site file edit and rolling restart, currently. Something better we should 
tackle in a follow on issue.

That doesn't quite work for coprocessors configured as table attributes (more 
like wipe the table dir from HDFS), but I've sidelined this issue enough.  
Let's move discussion to a follow on issue.

> Clean up Coprocessor loading failure handling
> -
>
> Key: HBASE-6873
> URL: https://issues.apache.org/jira/browse/HBASE-6873
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, regionserver
>Affects Versions: 0.98.0
>Reporter: David Arthur
>Assignee: Andrew Purtell
>Priority: Blocker
> Fix For: 0.98.0, 0.99.0
>
> Attachments: 6873.patch, 6873.patch, 6873.patch, 6873.patch, 
> 6873.patch, 6873.patch
>
>
> When registering a coprocessor with a missing dependency, the regionserver 
> gets stuck in an infinite fail loop. Restarting the regionserver and/or 
> master has no effect.
> E.g., 
> Load a coprocessor from my-coproc.jar, which uses an external dependency (kafka) 
> that is not included with HBase.
> {code}
> 12/09/24 13:13:15 INFO handler.OpenRegionHandler: Opening of region {NAME => 
> 'documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b.', STARTKEY => '', 
> ENDKEY => '', ENCODED => 6d1e1b7bb93486f096173bd401e8ef6b,} failed, marking 
> as FAILED_OPEN in ZK
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Attempting to transition node 
> 6d1e1b7bb93486f096173bd401e8ef6b from RS_ZK_REGION_OPENING to 
> RS_ZK_REGION_FAILED_OPEN
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Successfully transitioned node 
> 6d1e1b7bb93486f096173bd401e8ef6b from RS_ZK_REGION_OPENING to 
> RS_ZK_REGION_FAILED_OPEN
> 12/09/24 13:13:15 INFO regionserver.HRegionServer: Received request to open 
> region: documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b.
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Attempting to transition node 
> 6d1e1b7bb93486f096173bd401e8ef6b from M_ZK_REGION_OFFLINE to 
> RS_ZK_REGION_OPENING
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Successfully transitioned node 
> 6d1e1b7bb93486f096173bd401e8ef6b from M_ZK_REGION_OFFLINE to 
> RS_ZK_REGION_OPENING
> 12/09/24 13:13:15 DEBUG regionserver.HRegion: Opening region: {NAME => 
> 'documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b.', STARTKEY => '', 
> ENDKEY => '', ENCODED => 6d1e1b7bb93486f096173bd401e8ef6b,}
> 12/09/24 13:13:15 INFO regionserver.HRegion: Setting up tabledescriptor 
> config now ...
> 12/09/24 13:13:15 INFO coprocessor.CoprocessorHost: Class 
> com.mycompany.hbase.documents.DocumentObserverCoprocessor needs to be loaded 
> from a file - file:/path/to/my-coproc.jar.
> 12/09/24 13:13:16 ERROR handler.OpenRegionHandler: Failed open of 
> region=documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b., starting 
> to roll back the global memstore size.
> java.lang.IllegalStateException: Could not instantiate a region instance.
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:3595)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3733)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:332)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:108)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:680)
> Caused by: java.lang.reflect.InvocationTargetException
>   at sun.reflect.GeneratedConstructorAccessor15.newInstance(Unknown 
> Source)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:3592)
>   ... 7 more
> Caused by: java.lang.NoClassDefFoundError: 
> kafka/common/NoBrokersForPartitionException
>   at java.lang.Class.getDeclaredConstructors0(Native Method)
>   at java.lang.Class.privateGetDeclaredConstructors(Class.java:2389)
>   at java.lang.Class.getConstructor0(Class.java:2699)
>   at java.lang.Class.newInstance0(Class.java:326)
>   a

[jira] [Commented] (HBASE-10282) We can't assure that the first ZK server is active server in MiniZooKeeperCluster

2014-01-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871479#comment-13871479
 ] 

stack commented on HBASE-10282:
---

What you say makes sense [~tobe]

> We can't assure that the first ZK server is active server in 
> MiniZooKeeperCluster
> -
>
> Key: HBASE-10282
> URL: https://issues.apache.org/jira/browse/HBASE-10282
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.3
>Reporter: chendihao
>Priority: Minor
>
> Thanks to HBASE-3052, we're able to run multiple zk servers in the minicluster. 
> However, it's confusing to keep the variable activeZKServerIndex at zero and 
> assume the first zk server is always the active one. I think returning the 
> first server's client port is only for testing, and it seems we could directly 
> return the first item of the list. Anyway, the concept of "active" here is 
> not the same as zk's. 
> It's confusing when reading the code, so I think we should fix it.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-6873) Clean up Coprocessor loading failure handling

2014-01-14 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871474#comment-13871474
 ] 

Andrew Purtell commented on HBASE-6873:
---

bq.  I'd just like to make sure we have an easy operational way of reversing 
the problem.

Site file edit and rolling restart, currently. Something better we should 
tackle in a follow on issue.

> Clean up Coprocessor loading failure handling
> -
>
> Key: HBASE-6873
> URL: https://issues.apache.org/jira/browse/HBASE-6873
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, regionserver
>Affects Versions: 0.98.0
>Reporter: David Arthur
>Assignee: Andrew Purtell
>Priority: Blocker
> Fix For: 0.98.0, 0.99.0
>
> Attachments: 6873.patch, 6873.patch, 6873.patch, 6873.patch, 
> 6873.patch, 6873.patch
>
>
> When registering a coprocessor with a missing dependency, the regionserver 
> gets stuck in an infinite fail loop. Restarting the regionserver and/or 
> master has no effect.
> E.g., 
> Load a coprocessor from my-coproc.jar, which uses an external dependency (kafka) 
> that is not included with HBase.
> {code}
> 12/09/24 13:13:15 INFO handler.OpenRegionHandler: Opening of region {NAME => 
> 'documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b.', STARTKEY => '', 
> ENDKEY => '', ENCODED => 6d1e1b7bb93486f096173bd401e8ef6b,} failed, marking 
> as FAILED_OPEN in ZK
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Attempting to transition node 
> 6d1e1b7bb93486f096173bd401e8ef6b from RS_ZK_REGION_OPENING to 
> RS_ZK_REGION_FAILED_OPEN
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Successfully transitioned node 
> 6d1e1b7bb93486f096173bd401e8ef6b from RS_ZK_REGION_OPENING to 
> RS_ZK_REGION_FAILED_OPEN
> 12/09/24 13:13:15 INFO regionserver.HRegionServer: Received request to open 
> region: documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b.
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Attempting to transition node 
> 6d1e1b7bb93486f096173bd401e8ef6b from M_ZK_REGION_OFFLINE to 
> RS_ZK_REGION_OPENING
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Successfully transitioned node 
> 6d1e1b7bb93486f096173bd401e8ef6b from M_ZK_REGION_OFFLINE to 
> RS_ZK_REGION_OPENING
> 12/09/24 13:13:15 DEBUG regionserver.HRegion: Opening region: {NAME => 
> 'documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b.', STARTKEY => '', 
> ENDKEY => '', ENCODED => 6d1e1b7bb93486f096173bd401e8ef6b,}
> 12/09/24 13:13:15 INFO regionserver.HRegion: Setting up tabledescriptor 
> config now ...
> 12/09/24 13:13:15 INFO coprocessor.CoprocessorHost: Class 
> com.mycompany.hbase.documents.DocumentObserverCoprocessor needs to be loaded 
> from a file - file:/path/to/my-coproc.jar.
> 12/09/24 13:13:16 ERROR handler.OpenRegionHandler: Failed open of 
> region=documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b., starting 
> to roll back the global memstore size.
> java.lang.IllegalStateException: Could not instantiate a region instance.
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:3595)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3733)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:332)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:108)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:680)
> Caused by: java.lang.reflect.InvocationTargetException
>   at sun.reflect.GeneratedConstructorAccessor15.newInstance(Unknown 
> Source)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:3592)
>   ... 7 more
> Caused by: java.lang.NoClassDefFoundError: 
> kafka/common/NoBrokersForPartitionException
>   at java.lang.Class.getDeclaredConstructors0(Native Method)
>   at java.lang.Class.privateGetDeclaredConstructors(Class.java:2389)
>   at java.lang.Class.getConstructor0(Class.java:2699)
>   at java.lang.Class.newInstance0(Class.java:326)
>   at java.lang.Class.newInstance(Class.java:308)
>   at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHos

[jira] [Commented] (HBASE-10325) Unknown option or illegal argument:-XX:OnOutOfMemoryError=kill -9 %p

2014-01-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871475#comment-13871475
 ] 

stack commented on HBASE-10325:
---

[~chillon_m] See http://hbase.apache.org/book.html#java  Have you tried 
overriding that environment variable on startup?

> Unknown option or illegal argument:-XX:OnOutOfMemoryError=kill -9 %p
> 
>
> Key: HBASE-10325
> URL: https://issues.apache.org/jira/browse/HBASE-10325
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.96.1.1
>Reporter: chillon_m
>
> Unknown option or illegal argument: -XX:OnOutOfMemoryError=kill -9 %p. 
> Please check for incorrect spelling or review documentation of startup 
> options.
> Could not create the Java virtual machine.
> starting master, logging to 
> /home/hadoop/hbase-0.96.1.1-hadoop2/logs/hbase-hadoop-master-namenode0.hadoop.out
> Unknown option or illegal argument: -XX:OnOutOfMemoryError=kill -9 %p. 
> Please check for incorrect spelling or review documentation of startup options



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-6873) Clean up Coprocessor loading failure handling

2014-01-14 Thread Gary Helmling (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871472#comment-13871472
 ] 

Gary Helmling commented on HBASE-6873:
--

We are certainly totally vulnerable to misbehaving coprocessors until we do 
better isolation through an externalized process or whatever.

What I'm more worried about with the config default change is rendering a 
cluster inoperable due to a misconfiguration of a coprocessor.  For example:
* create a table with a table-level coprocessor as an attribute
* the table coprocessor when loaded has an unresolvable dependency (as 
described here)
* so the first table region bounces from host to host taking down 
regionservers, failing to open

Now we have a .tableinfo sitting around with the coprocessor configured in it, 
but we can't bring the cluster back up to alter the table definition and remove 
the CP or replace it with a CP config pointing to a new jar with the required 
dependency.  Maybe I'm missing some easy way of resolving this (hopefully I 
am), but that seems nasty.  Whether here or in a follow on enhancement, I'd 
just like to make sure we have an easy operational way of reversing the problem.

Though I agree that correctness (which we're currently sacrificing with 
hbase.coprocessor.abortonerror=false) is equally or more important.
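
For illustration, a minimal sketch of the trade-off being discussed; this is not the actual CoprocessorHost code, and the default passed to getBoolean below is a placeholder (the real default value is exactly what is under discussion). With abort-on-error the server refuses to run without the configured coprocessor (correctness); without it the coprocessor is skipped and the server keeps serving (availability).

{code}
// Sketch only: how hbase.coprocessor.abortonerror steers load-failure handling.
import org.apache.hadoop.conf.Configuration;

public class AbortOnErrorSketch {

  static void handleLoadFailure(Configuration conf, Throwable cause, String className) {
    boolean abort = conf.getBoolean("hbase.coprocessor.abortonerror", false); // placeholder default
    if (abort) {
      // correctness: refuse to serve without the configured coprocessor
      throw new RuntimeException("Aborting: failed to load coprocessor " + className, cause);
    } else {
      // availability: skip the coprocessor and keep the region server up
      System.err.println("Skipping coprocessor " + className + ": " + cause);
    }
  }
}
{code}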

> Clean up Coprocessor loading failure handling
> -
>
> Key: HBASE-6873
> URL: https://issues.apache.org/jira/browse/HBASE-6873
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, regionserver
>Affects Versions: 0.98.0
>Reporter: David Arthur
>Assignee: Andrew Purtell
>Priority: Blocker
> Fix For: 0.98.0, 0.99.0
>
> Attachments: 6873.patch, 6873.patch, 6873.patch, 6873.patch, 
> 6873.patch, 6873.patch
>
>
> When registering a coprocessor with a missing dependency, the regionserver 
> gets stuck in an infinite fail loop. Restarting the regionserver and/or 
> master has no effect.
> E.g., 
> Load a coprocessor from my-coproc.jar, which uses an external dependency (kafka) 
> that is not included with HBase.
> {code}
> 12/09/24 13:13:15 INFO handler.OpenRegionHandler: Opening of region {NAME => 
> 'documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b.', STARTKEY => '', 
> ENDKEY => '', ENCODED => 6d1e1b7bb93486f096173bd401e8ef6b,} failed, marking 
> as FAILED_OPEN in ZK
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Attempting to transition node 
> 6d1e1b7bb93486f096173bd401e8ef6b from RS_ZK_REGION_OPENING to 
> RS_ZK_REGION_FAILED_OPEN
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Successfully transitioned node 
> 6d1e1b7bb93486f096173bd401e8ef6b from RS_ZK_REGION_OPENING to 
> RS_ZK_REGION_FAILED_OPEN
> 12/09/24 13:13:15 INFO regionserver.HRegionServer: Received request to open 
> region: documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b.
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Attempting to transition node 
> 6d1e1b7bb93486f096173bd401e8ef6b from M_ZK_REGION_OFFLINE to 
> RS_ZK_REGION_OPENING
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Successfully transitioned node 
> 6d1e1b7bb93486f096173bd401e8ef6b from M_ZK_REGION_OFFLINE to 
> RS_ZK_REGION_OPENING
> 12/09/24 13:13:15 DEBUG regionserver.HRegion: Opening region: {NAME => 
> 'documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b.', STARTKEY => '', 
> ENDKEY => '', ENCODED => 6d1e1b7bb93486f096173bd401e8ef6b,}
> 12/09/24 13:13:15 INFO regionserver.HRegion: Setting up tabledescriptor 
> config now ...
> 12/09/24 13:13:15 INFO coprocessor.CoprocessorHost: Class 
> com.mycompany.hbase.documents.DocumentObserverCoprocessor needs to be loaded 
> from a file - file:/path/to/my-coproc.jar.
> 12/09/24 13:13:16 ERROR handler.OpenRegionHandler: Failed open of 
> region=documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b., starting 
> to roll back the global memstore size.
> java.lang.IllegalStateException: Could not instantiate a region instance.
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:3595)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3733)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:332)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:108)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:9

[jira] [Commented] (HBASE-6873) Clean up Coprocessor loading failure handling

2014-01-14 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871473#comment-13871473
 ] 

Andrew Purtell commented on HBASE-6873:
---

Just to follow up on that, we handle the security implications of coprocessors 
being a way to inject arbitrary code into the runtime by restricting who can do 
it. If the AccessController is installed, only users with CREATE or ADMIN 
privilege can set up table coprocessors. Only the superuser, effectively, can 
install system coprocessors, she who edits the hbase-site.xml file. 

> Clean up Coprocessor loading failure handling
> -
>
> Key: HBASE-6873
> URL: https://issues.apache.org/jira/browse/HBASE-6873
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, regionserver
>Affects Versions: 0.98.0
>Reporter: David Arthur
>Assignee: Andrew Purtell
>Priority: Blocker
> Fix For: 0.98.0, 0.99.0
>
> Attachments: 6873.patch, 6873.patch, 6873.patch, 6873.patch, 
> 6873.patch, 6873.patch
>
>
> When registering a coprocessor with a missing dependency, the regionserver 
> gets stuck in an infinite fail loop. Restarting the regionserver and/or 
> master has no effect.
> E.g., 
> Load a coprocessor from my-coproc.jar, which uses an external dependency (kafka) 
> that is not included with HBase.
> {code}
> 12/09/24 13:13:15 INFO handler.OpenRegionHandler: Opening of region {NAME => 
> 'documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b.', STARTKEY => '', 
> ENDKEY => '', ENCODED => 6d1e1b7bb93486f096173bd401e8ef6b,} failed, marking 
> as FAILED_OPEN in ZK
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Attempting to transition node 
> 6d1e1b7bb93486f096173bd401e8ef6b from RS_ZK_REGION_OPENING to 
> RS_ZK_REGION_FAILED_OPEN
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Successfully transitioned node 
> 6d1e1b7bb93486f096173bd401e8ef6b from RS_ZK_REGION_OPENING to 
> RS_ZK_REGION_FAILED_OPEN
> 12/09/24 13:13:15 INFO regionserver.HRegionServer: Received request to open 
> region: documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b.
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Attempting to transition node 
> 6d1e1b7bb93486f096173bd401e8ef6b from M_ZK_REGION_OFFLINE to 
> RS_ZK_REGION_OPENING
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Successfully transitioned node 
> 6d1e1b7bb93486f096173bd401e8ef6b from M_ZK_REGION_OFFLINE to 
> RS_ZK_REGION_OPENING
> 12/09/24 13:13:15 DEBUG regionserver.HRegion: Opening region: {NAME => 
> 'documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b.', STARTKEY => '', 
> ENDKEY => '', ENCODED => 6d1e1b7bb93486f096173bd401e8ef6b,}
> 12/09/24 13:13:15 INFO regionserver.HRegion: Setting up tabledescriptor 
> config now ...
> 12/09/24 13:13:15 INFO coprocessor.CoprocessorHost: Class 
> com.mycompany.hbase.documents.DocumentObserverCoprocessor needs to be loaded 
> from a file - file:/path/to/my-coproc.jar.
> 12/09/24 13:13:16 ERROR handler.OpenRegionHandler: Failed open of 
> region=documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b., starting 
> to roll back the global memstore size.
> java.lang.IllegalStateException: Could not instantiate a region instance.
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:3595)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3733)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:332)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:108)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:680)
> Caused by: java.lang.reflect.InvocationTargetException
>   at sun.reflect.GeneratedConstructorAccessor15.newInstance(Unknown 
> Source)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:3592)
>   ... 7 more
> Caused by: java.lang.NoClassDefFoundError: 
> kafka/common/NoBrokersForPartitionException
>   at java.lang.Class.getDeclaredConstructors0(Native Method)
>   at java.lang.Class.privateGetDeclaredConstructors(Class.java:2389)
>   at java.lang.Class.getConstructor0(Clas

[jira] [Commented] (HBASE-6873) Clean up Coprocessor loading failure handling

2014-01-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871470#comment-13871470
 ] 

stack commented on HBASE-6873:
--

bq. Let's not forget from the beginning coprocessors install Java code into the 
RS without any process or address space isolation.

Smile.

> Clean up Coprocessor loading failure handling
> -
>
> Key: HBASE-6873
> URL: https://issues.apache.org/jira/browse/HBASE-6873
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, regionserver
>Affects Versions: 0.98.0
>Reporter: David Arthur
>Assignee: Andrew Purtell
>Priority: Blocker
> Fix For: 0.98.0, 0.99.0
>
> Attachments: 6873.patch, 6873.patch, 6873.patch, 6873.patch, 
> 6873.patch, 6873.patch
>
>
> When registering a coprocessor with a missing dependency, the regionserver 
> gets stuck in an infinite fail loop. Restarting the regionserver and/or 
> master has no effect.
> E.g., 
> Load a coprocessor from my-coproc.jar, which uses an external dependency (kafka) 
> that is not included with HBase.
> {code}
> 12/09/24 13:13:15 INFO handler.OpenRegionHandler: Opening of region {NAME => 
> 'documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b.', STARTKEY => '', 
> ENDKEY => '', ENCODED => 6d1e1b7bb93486f096173bd401e8ef6b,} failed, marking 
> as FAILED_OPEN in ZK
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Attempting to transition node 
> 6d1e1b7bb93486f096173bd401e8ef6b from RS_ZK_REGION_OPENING to 
> RS_ZK_REGION_FAILED_OPEN
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Successfully transitioned node 
> 6d1e1b7bb93486f096173bd401e8ef6b from RS_ZK_REGION_OPENING to 
> RS_ZK_REGION_FAILED_OPEN
> 12/09/24 13:13:15 INFO regionserver.HRegionServer: Received request to open 
> region: documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b.
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Attempting to transition node 
> 6d1e1b7bb93486f096173bd401e8ef6b from M_ZK_REGION_OFFLINE to 
> RS_ZK_REGION_OPENING
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Successfully transitioned node 
> 6d1e1b7bb93486f096173bd401e8ef6b from M_ZK_REGION_OFFLINE to 
> RS_ZK_REGION_OPENING
> 12/09/24 13:13:15 DEBUG regionserver.HRegion: Opening region: {NAME => 
> 'documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b.', STARTKEY => '', 
> ENDKEY => '', ENCODED => 6d1e1b7bb93486f096173bd401e8ef6b,}
> 12/09/24 13:13:15 INFO regionserver.HRegion: Setting up tabledescriptor 
> config now ...
> 12/09/24 13:13:15 INFO coprocessor.CoprocessorHost: Class 
> com.mycompany.hbase.documents.DocumentObserverCoprocessor needs to be loaded 
> from a file - file:/path/to/my-coproc.jar.
> 12/09/24 13:13:16 ERROR handler.OpenRegionHandler: Failed open of 
> region=documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b., starting 
> to roll back the global memstore size.
> java.lang.IllegalStateException: Could not instantiate a region instance.
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:3595)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3733)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:332)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:108)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:680)
> Caused by: java.lang.reflect.InvocationTargetException
>   at sun.reflect.GeneratedConstructorAccessor15.newInstance(Unknown 
> Source)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:3592)
>   ... 7 more
> Caused by: java.lang.NoClassDefFoundError: 
> kafka/common/NoBrokersForPartitionException
>   at java.lang.Class.getDeclaredConstructors0(Native Method)
>   at java.lang.Class.privateGetDeclaredConstructors(Class.java:2389)
>   at java.lang.Class.getConstructor0(Class.java:2699)
>   at java.lang.Class.newInstance0(Class.java:326)
>   at java.lang.Class.newInstance(Class.java:308)
>   at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.loadInstance(CoprocessorHost.java:254)
>   at 
> org.apache.hadoop.h

[jira] [Commented] (HBASE-10323) Auto detect data block encoding in HFileOutputFormat

2014-01-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871468#comment-13871468
 ] 

stack commented on HBASE-10323:
---

Looks good on a quick scan.  [~ndimiduk] You like this one?

> Auto detect data block encoding in HFileOutputFormat
> 
>
> Key: HBASE-10323
> URL: https://issues.apache.org/jira/browse/HBASE-10323
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ishan Chhabra
>Assignee: Ishan Chhabra
> Fix For: 0.99.0
>
> Attachments: HBASE_10323-0.94.15-v1.patch, 
> HBASE_10323-0.94.15-v2.patch, HBASE_10323-0.94.15-v3.patch, 
> HBASE_10323-trunk-v1.patch, HBASE_10323-trunk-v2.patch
>
>
> Currently, one has to specify the data block encoding of the table explicitly 
> using the config parameter 
> "hbase.mapreduce.hfileoutputformat.datablock.encoding" when doing a bulk 
> load. This option is easily missed, not documented, and also works differently 
> from compression, block size and bloom filter type, which are auto detected. 
> The solution would be to add support for auto detecting the data block encoding, 
> similar to the other parameters. 
> The current patch does the following:
> 1. Automatically detects data block encoding in HFileOutputFormat.
> 2. Keeps the legacy option of manually specifying the data block encoding
> around as a way to override auto detection.
> 3. Moves string conf parsing to the start of the program so that it fails
> fast during startup instead of failing during record writes. It also
> makes the internals of the program type safe.
> 4. Adds missing doc strings and unit tests for the code serializing and
> deserializing config parameters for bloom filter type, block size and
> data block encoding.
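
A rough sketch of the serialize/deserialize approach described above: record each family's encoding in the job configuration at setup time and parse it back, failing fast on bad input. The family-map key name below is assumed for illustration; only the override key is quoted from the issue text, and this is not the actual HFileOutputFormat code.

{code}
// Sketch only: per-family data block encoding carried through the job configuration.
import java.util.HashMap;
import java.util.Map;

public class EncodingConfSketch {

  static final String FAMILY_MAP_KEY = "hfileoutputformat.datablock.encoding.familymap"; // assumed name
  static final String OVERRIDE_KEY = "hbase.mapreduce.hfileoutputformat.datablock.encoding";

  /** Serialize family -> encoding pairs, e.g. "cf1=FAST_DIFF&cf2=PREFIX". */
  static String serialize(Map<String, String> familyToEncoding) {
    StringBuilder sb = new StringBuilder();
    for (Map.Entry<String, String> e : familyToEncoding.entrySet()) {
      if (sb.length() > 0) sb.append('&');
      sb.append(e.getKey()).append('=').append(e.getValue());
    }
    return sb.toString();
  }

  /** Parse the map back; fail fast at job setup rather than during record writes. */
  static Map<String, String> deserialize(String conf) {
    Map<String, String> out = new HashMap<>();
    if (conf == null || conf.isEmpty()) return out;
    for (String pair : conf.split("&")) {
      String[] kv = pair.split("=", 2);
      if (kv.length != 2) throw new IllegalArgumentException("Bad encoding entry: " + pair);
      out.put(kv[0], kv[1]);
    }
    return out;
  }

  public static void main(String[] args) {
    Map<String, String> m = new HashMap<>();
    m.put("cf1", "FAST_DIFF");
    m.put("cf2", "PREFIX");
    String serialized = serialize(m);
    System.out.println(serialized + " -> " + deserialize(serialized));
  }
}
{code}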



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10316) Canary#RegionServerMonitor#monitorRegionServers() should close the scanner returned by table.getScanner()

2014-01-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871460#comment-13871460
 ] 

Hudson commented on HBASE-10316:


SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-1.1 #53 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/53/])
HBASE-10316 Canary#RegionServerMonitor#monitorRegionServers() should close the 
scanner returned by table.getScanner() (Tedyu: rev 1558137)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/tool/Canary.java


> Canary#RegionServerMonitor#monitorRegionServers() should close the scanner 
> returned by table.getScanner()
> -
>
> Key: HBASE-10316
> URL: https://issues.apache.org/jira/browse/HBASE-10316
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: 10316.txt
>
>
> At line 624, in the else block, the ResultScanner returned by table.getScanner() 
> is not closed.
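
A minimal sketch of the fix (the Canary's actual probe logic is omitted): close the ResultScanner returned by table.getScanner() in a finally block.

{code}
// Sketch only: always release the scanner, even if the probe read fails.
import java.io.IOException;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;

public class ScannerCloseSketch {
  static void probeRegion(HTableInterface table, Scan scan) throws IOException {
    ResultScanner scanner = table.getScanner(scan);
    try {
      scanner.next(); // read one row as the probe
    } finally {
      scanner.close(); // the close that was missing in the else branch
    }
  }
}
{code}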



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10294) Some synchronization on ServerManager#onlineServers can be removed

2014-01-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871462#comment-13871462
 ] 

Hudson commented on HBASE-10294:


SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-1.1 #53 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/53/])
HBASE-10294 ServerManager#onlineServers synchronization (Tedyu: rev 1558174)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java


> Some synchronization on ServerManager#onlineServers can be removed
> --
>
> Key: HBASE-10294
> URL: https://issues.apache.org/jira/browse/HBASE-10294
> Project: HBase
>  Issue Type: Task
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 0.99.0
>
> Attachments: 10294-v1.txt
>
>
> ServerManager#onlineServers is a ConcurrentHashMap.
> Yet I found that some accesses to it are unnecessarily synchronized.
> Here is one example:
> {code}
>   public Map getOnlineServers() {
> // Presumption is that iterating the returned Map is OK.
> synchronized (this.onlineServers) {
>   return Collections.unmodifiableMap(this.onlineServers);
> {code}
> Note: not all accesses to ServerManager#onlineServers are synchronized.
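
A sketch of the simplification being described, with placeholder key/value types (the real map is keyed by server name): since onlineServers is a ConcurrentHashMap, the read-only view can be returned without the synchronized block.

{code}
// Sketch only: the synchronized block adds nothing when wrapping a ConcurrentHashMap.
import java.util.Collections;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class OnlineServersSketch {
  private final Map<String, Integer> onlineServers = new ConcurrentHashMap<>();

  /** Before: synchronized (this.onlineServers) { return Collections.unmodifiableMap(...); } */
  public Map<String, Integer> getOnlineServers() {
    // ConcurrentHashMap iteration is weakly consistent, so callers may iterate this view safely.
    return Collections.unmodifiableMap(this.onlineServers);
  }
}
{code}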



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10332) Missing .regioninfo file during daughter open processing

2014-01-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871461#comment-13871461
 ] 

Hudson commented on HBASE-10332:


SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-1.1 #53 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/53/])
HBASE-10332 Missing .regioninfo file during daughter open processing 
(mbertozzi: rev 1558033)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionFileSystem.java


> Missing .regioninfo file during daughter open processing
> 
>
> Key: HBASE-10332
> URL: https://issues.apache.org/jira/browse/HBASE-10332
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: Andrew Purtell
>Assignee: Matteo Bertozzi
> Fix For: 0.98.0, 0.96.2, 0.99.0
>
> Attachments: HBASE-10332-v0.patch
>
>
> Under cluster stress testing, there are a fair number of warnings like this:
> {noformat}
> 2014-01-12 04:52:29,183 WARN  
> [test-1,8120,1389467616661-daughterOpener=490a58c14b14a59e8d303d310684f0b0] 
> regionserver.HRegionFileSystem: .regioninfo file not found for region: 
> 490a58c14b14a59e8d303d310684f0b0
> {noformat}
> This is from HRegionFileSystem#checkRegionInfoOnFilesystem, which catches a 
> FileNotFoundException in this case and calls writeRegionInfoOnFilesystem to 
> fix up the issue.
> Is this a bug in splitting?
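
A simplified sketch of the recovery path described above; this is not the actual HRegionFileSystem code, and the RegionFs interface below is a stand-in. It only shows the catch-and-rewrite shape: if .regioninfo is missing, warn and write it back.

{code}
// Sketch only: fix up a missing .regioninfo file instead of failing the open.
import java.io.FileNotFoundException;
import java.io.IOException;

public class RegionInfoCheckSketch {

  interface RegionFs {
    void readRegionInfo() throws IOException;           // throws FileNotFoundException when absent
    void writeRegionInfoOnFilesystem() throws IOException;
  }

  static void checkRegionInfoOnFilesystem(RegionFs fs, String encodedName) throws IOException {
    try {
      fs.readRegionInfo();
    } catch (FileNotFoundException e) {
      System.err.println(".regioninfo file not found for region: " + encodedName);
      fs.writeRegionInfoOnFilesystem();                  // rewrite the missing file
    }
  }
}
{code}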



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10304) Running an hbase job jar: IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.LiteralByteString

2014-01-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871464#comment-13871464
 ] 

Hadoop QA commented on HBASE-10304:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12622998/HBASE-10304.docbook.patch
  against trunk revision .
  ATTACHMENT ID: 12622998

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop1.1{color}.  The patch compiles against the hadoop 
1.1 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.convertScanToString(TableMapReduceUtil.java:433)
+
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:186)
+
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:147)
+
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:270)
+
org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:100)
+$ HADOOP_CLASSPATH=/path/to/hbase-protocol.jar:/path/to/hbase/conf hadoop jar 
MyJob.jar MyJobMainClass
+$ HADOOP_CLASSPATH=$(hbase mapredcp):/etc/hbase/conf hadoop jar MyApp.jar 
MyJobMainClass -libjars $(hbase mapredcp | tr ':' ',') ...

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8427//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8427//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8427//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8427//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8427//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8427//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8427//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8427//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8427//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8427//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/8427//console

This message is automatically generated.

> Running an hbase job jar: IllegalAccessError: class 
> com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass 
> com.google.protobuf.LiteralByteString
> 
>
> Key: HBASE-10304
> URL: https://issues.apache.org/jira/browse/HBASE-10304
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Affects Versions: 0.98.0, 0.96.1.1
>Reporter: stack
>Priority: Blocker
> Fix For: 0.98.0
>
> Attachments: HBASE-10304.docbook.patch, hbase-10304_not_tested.patch, 
> jobjar.xml
>
>
> (Jimmy has been working on this one internally.  I'm just the messenger 
> raising this critical issue upstream).
> So, if you make job jar an

[jira] [Commented] (HBASE-9721) RegionServer should not accept regionOpen RPC intended for another(previous) server

2014-01-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871458#comment-13871458
 ] 

Hudson commented on HBASE-9721:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-1.1 #53 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/53/])
HBASE-9721 RegionServer should not accept regionOpen RPC intended for 
another(previous) server (enis: rev 1557914)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/ProtobufUtil.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/protobuf/RequestConverter.java
* 
/hbase/trunk/hbase-protocol/src/main/java/com/google/protobuf/ZeroCopyLiteralByteString.java
* 
/hbase/trunk/hbase-protocol/src/main/java/org/apache/hadoop/hbase/protobuf/generated/AdminProtos.java
* /hbase/trunk/hbase-protocol/src/main/protobuf/Admin.proto
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/util/HBaseFsckRepair.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestScannersFromClientSide.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentManager.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestAssignmentManagerOnCluster.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterFailover.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestZKBasedOpenCloseRegion.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestRegionServerNoMaster.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsck.java


> RegionServer should not accept regionOpen RPC intended for another(previous) 
> server
> ---
>
> Key: HBASE-9721
> URL: https://issues.apache.org/jira/browse/HBASE-9721
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0, 0.96.0
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 0.98.0, 0.99.0
>
> Attachments: hbase-9721_v0.patch, hbase-9721_v1.patch, 
> hbase-9721_v2.patch, hbase-9721_v3.patch
>
>
> On a test cluster, the following events happened with ITBLL and CM, leading 
> to meta being unavailable until the master was restarted. 
> An RS carrying meta died, and the master assigned the region to one of the RSs. 
> {code}
> 2013-10-03 23:30:06,611 INFO  
> [MASTER_META_SERVER_OPERATIONS-gs-hdp2-secure-1380781860-hbase-12:6-1] 
> master.AssignmentManager: Assigning hbase:meta,,1.1588230740 to 
> gs-hdp2-secure-1380781860-hbase-5.cs1cloud.internal,60020,1380842900820
> 2013-10-03 23:30:06,611 INFO  
> [MASTER_META_SERVER_OPERATIONS-gs-hdp2-secure-1380781860-hbase-12:6-1] 
> master.RegionStates: Transitioned {1588230740 state=OFFLINE, 
> ts=1380843006601, server=null} to {1588230740 state=PENDING_OPEN, 
> ts=1380843006611, 
> server=gs-hdp2-secure-1380781860-hbase-5.cs1cloud.internal,60020,1380842900820}
> 2013-10-03 23:30:06,611 DEBUG 
> [MASTER_META_SERVER_OPERATIONS-gs-hdp2-secure-1380781860-hbase-12:6-1] 
> master.ServerManager: New admin connection to 
> gs-hdp2-secure-1380781860-hbase-5.cs1cloud.internal,60020,1380842900820
> {code}
> At the same time, the RS that meta had recently been assigned to also died (due to 
> CM) and restarted: 
> {code}
> 2013-10-03 23:30:07,636 DEBUG [RpcServer.handler=17,port=6] 
> master.ServerManager: REPORT: Server 
> gs-hdp2-secure-1380781860-hbase-8.cs1cloud.internal,60020,1380843002494 came 
> back up, removed it from the dead servers list
> 2013-10-03 23:30:08,769 INFO  [RpcServer.handler=18,port=6] 
> master.ServerManager: Triggering server recovery; existingServer 
> gs-hdp2-secure-1380781860-hbase-5.cs1cloud.internal,60020,1380842900820 looks 
> stale, new 
> server:gs-hdp2-secure-1380781860-hbase-5.cs1cloud.internal,60020,1380843006362
> 2013-10-03 23:30:08,771 DEBUG [RpcServer.handler=18,port=6] 
> master.AssignmentManager: Checking region=hbase:meta,,1.1588230740, zk 
> server=gs-hdp2-secure-1380781860-hbase-5.cs1cloud.internal,60020,1380842900820
>  
> current=gs-hdp2-secure-1380781860-hbase-5.cs1cloud.internal,60020,1380842900820,
>  matches=true
> 2013-10-03 23:30:08,771 DEBUG [RpcServer.handler=18,port=6] 
> master.ServerManager: 
> Added=gs-hdp2-secure-1380781860-hbase-5.cs1cloud.internal,60020,1380842900820 
> to dead servers, submitted shutdown handler to be executed meta=true
> 2013-10-03 23:30:08,771 INFO  [RpcServer.handler=18,port=6] 
> maste

[jira] [Commented] (HBASE-10274) MiniZookeeperCluster should close ZKDatabase when shutdown ZooKeeperServers

2014-01-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871459#comment-13871459
 ] 

Hudson commented on HBASE-10274:


SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-1.1 #53 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/53/])
HBASE-10274 MiniZookeeperCluster should close ZKDatabase when shutdown 
ZooKeeperServers (chendihao via enis) (enis: rev 1557919)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/MiniZooKeeperCluster.java


> MiniZookeeperCluster should close ZKDatabase when shutdown ZooKeeperServers
> ---
>
> Key: HBASE-10274
> URL: https://issues.apache.org/jira/browse/HBASE-10274
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.3
>Reporter: chendihao
>Assignee: chendihao
>Priority: Minor
> Fix For: 0.98.0, 0.96.2, 0.99.0, 0.94.17
>
> Attachments: HBASE-10274-0.94-v1.patch, HBASE-10274-0.94-v2.patch, 
> HBASE-10274-truck-v1.patch, HBASE-10274-truck-v2.patch, 
> HBASE-10274-truck-v2.patch
>
>
> HBASE-6820 points out the problem but does not fix it completely.
> killCurrentActiveZooKeeperServer() and killOneBackupZooKeeperServer() shut down 
> the ZooKeeperServer and need to close the ZKDatabase as well.
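
A minimal sketch of the fix: after shutting down a ZooKeeperServer in the mini cluster, also close its ZKDatabase. The shutdown/getZKDatabase/close calls are from the ZooKeeper server API; the surrounding MiniZooKeeperCluster bookkeeping is omitted.

{code}
// Sketch only: release the ZKDatabase (file handles, transaction log) along with the server.
import java.io.IOException;
import org.apache.zookeeper.server.ZKDatabase;
import org.apache.zookeeper.server.ZooKeeperServer;

public class ZkShutdownSketch {
  static void shutdownServer(ZooKeeperServer zkServer) throws IOException {
    zkServer.shutdown();
    ZKDatabase db = zkServer.getZKDatabase();
    if (db != null) {
      db.close(); // the close that killCurrentActiveZooKeeperServer() was missing
    }
  }
}
{code}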



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10320) Avoid ArrayList.iterator() ExplicitColumnTracker

2014-01-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871457#comment-13871457
 ] 

Hudson commented on HBASE-10320:


SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-1.1 #53 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/53/])
HBASE-10320 Avoid ArrayList.iterator() ExplicitColumnTracker (larsh: rev 
1557948)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ExplicitColumnTracker.java


> Avoid ArrayList.iterator() ExplicitColumnTracker
> 
>
> Key: HBASE-10320
> URL: https://issues.apache.org/jira/browse/HBASE-10320
> Project: HBase
>  Issue Type: Bug
>  Components: Performance
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 0.98.0, 0.96.2, 0.99.0, 0.94.17
>
> Attachments: 10320-0.94-v2.txt, 10320-0.94-v3.txt, 10320-0.94-v4.txt, 
> 10320-0.94.txt, 10320-trunk-v4.txt
>
>
> I noticed that in a profiler (sampler) run ScanQueryMatcher.setRow(...) 
> showed up at all.
> It turns out that the expensive part is iterating over the columns in 
> ExplicitColumnTracker.reset(). I did some microbenchmarks and found that
> {code}
> private ArrayList<X> l;
> ...
> for (int i=0; i<l.size(); i++) {
>   X x = l.get(i);
>   ...
> }
> {code}
> Is twice as fast as:
> {code}
> private ArrayList<X> l;
> ...
> for (X x : l) {
>   ...
> }
> {code}
> The indexed version asymptotically approaches the iterator version, but even 
> at 1m entries it is still faster.
> In my tight loop scans this provides for a 5% performance improvement overall 
> when the ExplicitColumnTracker is used.
> Edit:
> {code}
> private X[] l;
> ...
> for (int i=0; iX = l[i];
>...
> }
> {code}
> Is even better. Apparently the JVM can even save the boundary check in each 
> iteration.
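
A self-contained sketch of the comparison discussed above (element type, sizes, 
and timing harness are illustrative; this is not the benchmark from the issue, 
and a real measurement should use JMH):
{code}
import java.util.ArrayList;

public class IterationBench {
  public static void main(String[] args) {
    final int n = 1_000_000;
    ArrayList<Integer> list = new ArrayList<>(n);
    Integer[] array = new Integer[n];
    for (int i = 0; i < n; i++) { list.add(i); array[i] = i; }

    long sum = 0;
    long t0 = System.nanoTime();
    for (int i = 0; i < list.size(); i++) { sum += list.get(i); }  // indexed get()
    long t1 = System.nanoTime();
    for (Integer x : list) { sum += x; }                           // iterator
    long t2 = System.nanoTime();
    for (int i = 0; i < array.length; i++) { sum += array[i]; }    // plain array
    long t3 = System.nanoTime();

    System.out.printf("indexed=%dus iterator=%dus array=%dus (sum=%d)%n",
        (t1 - t0) / 1000, (t2 - t1) / 1000, (t3 - t2) / 1000, sum);
  }
}
{code}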



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-10324) refactor deferred-log-flush/Durability related interface/code/naming to align with changed semantic of the new write thread model

2014-01-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871456#comment-13871456
 ] 

Hudson commented on HBASE-10324:


SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-1.1 #53 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-1.1/53/])
HBASE-10324 refactor deferred-log-flush/Durability related 
interface/code/naming to align with changed semantic of the new write thread 
model (Tedyu: rev 1557939)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/HTableDescriptor.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/FSHLog.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestFSErrorsExposed.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestDurability.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestHLogSplit.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/wal/TestLogRollAbort.java
* /hbase/trunk/hbase-shell/src/main/ruby/hbase/admin.rb


> refactor deferred-log-flush/Durability related interface/code/naming to align 
> with changed semantic of the new write thread model
> -
>
> Key: HBASE-10324
> URL: https://issues.apache.org/jira/browse/HBASE-10324
> Project: HBase
>  Issue Type: Improvement
>  Components: Client, regionserver
>Reporter: Feng Honghua
>Assignee: Feng Honghua
> Fix For: 0.98.0, 0.99.0
>
> Attachments: 10324-trunk_v3.patch, HBASE-10324-trunk_v0.patch, 
> HBASE-10324-trunk_v1.patch, HBASE-10324-trunk_v2.patch
>
>
> With the new write thread model introduced by 
> [HBASE-8755|https://issues.apache.org/jira/browse/HBASE-8755], some 
> deferred-log-flush/Durability APIs/code/names should be changed accordingly:
> 1. There is no timer-triggered deferred log flush since flushing is always 
> done by async threads, so the configuration 
> 'hbase.regionserver.optionallogflushinterval' is no longer needed.
> 2. The async writer-syncer-notifier threads are always triggered implicitly; 
> semantically it is as if 'hbase.regionserver.optionallogflushinterval' > 0 
> always holds, so deferredLogSyncDisabled in HRegion.java, which affects 
> durability behavior, should always be false.
> 3. What HTableDescriptor.isDeferredLogFlush really means is that the write can 
> return without waiting for the sync to finish, so the methods should be renamed 
> to isAsyncLogFlush/setAsyncLogFlush to reflect their real meaning.
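
As an illustration of point 3 above, a hedged sketch of how a table opts into 
asynchronous WAL sync: setDurability(Durability.ASYNC_WAL) is the existing 
0.96+ API carrying this semantic, while setAsyncLogFlush is only the rename 
proposed in this issue and may differ in the final patch:
{code}
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Durability;

public class AsyncWalExample {
  public static HTableDescriptor asyncWalTable() {
    HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("t1"));
    // Writes to this table return without waiting for the WAL sync to complete.
    htd.setDurability(Durability.ASYNC_WAL);
    // Proposed equivalent per this issue (name may change in the final patch):
    // htd.setAsyncLogFlush(true);
    return htd;
  }
}
{code}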



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HBASE-6873) Clean up Coprocessor loading failure handling

2014-01-14 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871452#comment-13871452
 ] 

Andrew Purtell commented on HBASE-6873:
---

I don't think that TestShell failure is related but I will look into it. Be 
back shortly.

> Clean up Coprocessor loading failure handling
> -
>
> Key: HBASE-6873
> URL: https://issues.apache.org/jira/browse/HBASE-6873
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, regionserver
>Affects Versions: 0.98.0
>Reporter: David Arthur
>Assignee: Andrew Purtell
>Priority: Blocker
> Fix For: 0.98.0, 0.99.0
>
> Attachments: 6873.patch, 6873.patch, 6873.patch, 6873.patch, 
> 6873.patch, 6873.patch
>
>
> When registering a coprocessor with a missing dependency, the regionserver 
> gets stuck in an infinite fail loop. Restarting the regionserver and/or 
> master has no effect.
> E.g., 
> load a coprocessor from my-coproc.jar that uses an external dependency (kafka) 
> that is not included with HBase.
> {code}
> 12/09/24 13:13:15 INFO handler.OpenRegionHandler: Opening of region {NAME => 
> 'documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b.', STARTKEY => '', 
> ENDKEY => '', ENCODED => 6d1e1b7bb93486f096173bd401e8ef6b,} failed, marking 
> as FAILED_OPEN in ZK
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Attempting to transition node 
> 6d1e1b7bb93486f096173bd401e8ef6b from RS_ZK_REGION_OPENING to 
> RS_ZK_REGION_FAILED_OPEN
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Successfully transitioned node 
> 6d1e1b7bb93486f096173bd401e8ef6b from RS_ZK_REGION_OPENING to 
> RS_ZK_REGION_FAILED_OPEN
> 12/09/24 13:13:15 INFO regionserver.HRegionServer: Received request to open 
> region: documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b.
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Attempting to transition node 
> 6d1e1b7bb93486f096173bd401e8ef6b from M_ZK_REGION_OFFLINE to 
> RS_ZK_REGION_OPENING
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Successfully transitioned node 
> 6d1e1b7bb93486f096173bd401e8ef6b from M_ZK_REGION_OFFLINE to 
> RS_ZK_REGION_OPENING
> 12/09/24 13:13:15 DEBUG regionserver.HRegion: Opening region: {NAME => 
> 'documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b.', STARTKEY => '', 
> ENDKEY => '', ENCODED => 6d1e1b7bb93486f096173bd401e8ef6b,}
> 12/09/24 13:13:15 INFO regionserver.HRegion: Setting up tabledescriptor 
> config now ...
> 12/09/24 13:13:15 INFO coprocessor.CoprocessorHost: Class 
> com.mycompany.hbase.documents.DocumentObserverCoprocessor needs to be loaded 
> from a file - file:/path/to/my-coproc.jar.
> 12/09/24 13:13:16 ERROR handler.OpenRegionHandler: Failed open of 
> region=documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b., starting 
> to roll back the global memstore size.
> java.lang.IllegalStateException: Could not instantiate a region instance.
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:3595)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3733)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:332)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:108)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:680)
> Caused by: java.lang.reflect.InvocationTargetException
>   at sun.reflect.GeneratedConstructorAccessor15.newInstance(Unknown 
> Source)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:3592)
>   ... 7 more
> Caused by: java.lang.NoClassDefFoundError: 
> kafka/common/NoBrokersForPartitionException
>   at java.lang.Class.getDeclaredConstructors0(Native Method)
>   at java.lang.Class.privateGetDeclaredConstructors(Class.java:2389)
>   at java.lang.Class.getConstructor0(Class.java:2699)
>   at java.lang.Class.newInstance0(Class.java:326)
>   at java.lang.Class.newInstance(Class.java:308)
>   at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.loadInstance(CoprocessorHost.java:254)
>   at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHos

[jira] [Commented] (HBASE-10322) Strip tags from KV while sending back to client on reads

2014-01-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871455#comment-13871455
 ] 

stack commented on HBASE-10322:
---

bq. 1. Send back tags in missing cases also. But sending back visibility 
expression/ cell ACL is not correct.

This is tough.  The Visibility tags are managed by CPs.  When they are not 
present, you'd like to not return them?  Are tags grouped?  Don't send back 
system tags?

bq. 2. Don't send back tags in any case. This will be a problem when a tool like 
ExportTool uses the scan to export the table data. We will miss exporting the 
cell visibility/ACL.

Can we check perms of the client doing the export?  If they have access to 
'system' tags, export them?  We'd have an ACLCheckingCodec?

bq. 3. Send back tags based on some condition. It has to be on a per-scan basis. 
The simplest way is to pass some kind of attribute in the Scan which says whether 
to send back tags or not. But trusting whatever the scan specifies might not be 
correct IMO. Then comes the way of checking the user who is doing the scan: only 
send back tags when an HBase super user is doing the scan. So for a case like the 
Export Tool's, the execution should happen as a super user.

Should be super user or some super user-like group if they want tags; else they 
don't get them?

> Strip tags from KV while sending back to client on reads
> 
>
> Key: HBASE-10322
> URL: https://issues.apache.org/jira/browse/HBASE-10322
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Blocker
> Fix For: 0.98.0, 0.99.0
>
> Attachments: HBASE-10322.patch
>
>
> Right now we have some inconsistency wrt sending back tags on read. We do 
> this in scan when using the Java client (Codec based cell block encoding). But 
> during a Get operation, or when a pure PB based Scan comes in, we are not 
> sending back the tags. So we have to do one of the fixes below:
> 1. Send back tags in missing cases also. But sending back visibility 
> expression/ cell ACL is not correct.
> 2. Don't send back tags in any case. This will be a problem when a tool like 
> ExportTool uses the scan to export the table data. We will miss exporting the 
> cell visibility/ACL.
> 3. Send back tags based on some condition. It has to be on a per-scan basis. 
> The simplest way is to pass some kind of attribute in the Scan which says 
> whether to send back tags or not. But trusting whatever the scan specifies 
> might not be correct IMO. Then comes the way of checking the user who is doing 
> the scan: only send back tags when an HBase super user is doing the scan. So 
> for a case like the Export Tool's, the execution should happen as a super 
> user.
> So IMO we should go with #3.
> Patch coming soon.
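
A hedged sketch of option 3's client side; the attribute key below is 
hypothetical, and the server-side super-user check that would actually gate the 
behavior is not shown:
{code}
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class TagScanExample {
  // Build a scan that asks the server to include tags, e.g. for an export job.
  // "_include_tags_" is an illustrative attribute name, not from the patch.
  public static Scan exportScan() {
    Scan scan = new Scan();
    scan.setAttribute("_include_tags_", Bytes.toBytes(Boolean.TRUE.toString()));
    return scan;
  }
}
{code}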



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Comment Edited] (HBASE-6873) Clean up Coprocessor loading failure handling

2014-01-14 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13871449#comment-13871449
 ] 

Andrew Purtell edited comment on HBASE-6873 at 1/15/14 12:59 AM:
-

bq. I think a CP taking down the cluster is something worthy of discussion, yes... 
another issue.

Hang on guys.

{code}
blah blah prePut (blah blah) {
   for (;;) { }
}
{code}

Now I've (eventually) taken the cluster down by jamming up all RPC workers. 
Let's not forget from the beginning coprocessors install Java code into the RS 
without any process or address space isolation. 

The fix for this issue is HBASE-4047


was (Author: apurtell):
bq. I think a CP taking down the cluster is something worthy of discussion, yes... 
another issue.

Hang on guys.

{code}
blah blah prePut (blah blah) {
   for (;;;) { }
}
{code}

Now I've (eventually) taken the cluster down by jamming up all RPC workers. 
Let's not forget from the beginning coprocessors install Java code into the RS 
without any process or address space isolation. 

The fix for this issue is HBASE-4047
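
A more concrete version of the hazard sketched above, written as a hedged example 
against the 0.98-era RegionObserver hook signature (the observer class itself is 
illustrative):
{code}
import java.io.IOException;
import org.apache.hadoop.hbase.client.Durability;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.wal.WALEdit;

public class HandlerJammingObserver extends BaseRegionObserver {
  @Override
  public void prePut(ObserverContext<RegionCoprocessorEnvironment> c, Put put,
      WALEdit edit, Durability durability) throws IOException {
    // Spins forever: the RPC handler running this Put never returns, and once
    // enough handlers are pinned the region server stops serving requests.
    for (;;) { }
  }
}
{code}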

> Clean up Coprocessor loading failure handling
> -
>
> Key: HBASE-6873
> URL: https://issues.apache.org/jira/browse/HBASE-6873
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors, regionserver
>Affects Versions: 0.98.0
>Reporter: David Arthur
>Assignee: Andrew Purtell
>Priority: Blocker
> Fix For: 0.98.0, 0.99.0
>
> Attachments: 6873.patch, 6873.patch, 6873.patch, 6873.patch, 
> 6873.patch, 6873.patch
>
>
> When registering a coprocessor with a missing dependency, the regionserver 
> gets stuck in an infinite fail loop. Restarting the regionserver and/or 
> master has no effect.
> E.g., 
> load a coprocessor from my-coproc.jar that uses an external dependency (kafka) 
> that is not included with HBase.
> {code}
> 12/09/24 13:13:15 INFO handler.OpenRegionHandler: Opening of region {NAME => 
> 'documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b.', STARTKEY => '', 
> ENDKEY => '', ENCODED => 6d1e1b7bb93486f096173bd401e8ef6b,} failed, marking 
> as FAILED_OPEN in ZK
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Attempting to transition node 
> 6d1e1b7bb93486f096173bd401e8ef6b from RS_ZK_REGION_OPENING to 
> RS_ZK_REGION_FAILED_OPEN
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Successfully transitioned node 
> 6d1e1b7bb93486f096173bd401e8ef6b from RS_ZK_REGION_OPENING to 
> RS_ZK_REGION_FAILED_OPEN
> 12/09/24 13:13:15 INFO regionserver.HRegionServer: Received request to open 
> region: documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b.
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Attempting to transition node 
> 6d1e1b7bb93486f096173bd401e8ef6b from M_ZK_REGION_OFFLINE to 
> RS_ZK_REGION_OPENING
> 12/09/24 13:13:15 DEBUG zookeeper.ZKAssign: 
> regionserver:60020-0x139f43af2a70043 Successfully transitioned node 
> 6d1e1b7bb93486f096173bd401e8ef6b from M_ZK_REGION_OFFLINE to 
> RS_ZK_REGION_OPENING
> 12/09/24 13:13:15 DEBUG regionserver.HRegion: Opening region: {NAME => 
> 'documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b.', STARTKEY => '', 
> ENDKEY => '', ENCODED => 6d1e1b7bb93486f096173bd401e8ef6b,}
> 12/09/24 13:13:15 INFO regionserver.HRegion: Setting up tabledescriptor 
> config now ...
> 12/09/24 13:13:15 INFO coprocessor.CoprocessorHost: Class 
> com.mycompany.hbase.documents.DocumentObserverCoprocessor needs to be loaded 
> from a file - file:/path/to/my-coproc.jar.
> 12/09/24 13:13:16 ERROR handler.OpenRegionHandler: Failed open of 
> region=documents,,1348505987177.6d1e1b7bb93486f096173bd401e8ef6b., starting 
> to roll back the global memstore size.
> java.lang.IllegalStateException: Could not instantiate a region instance.
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.newHRegion(HRegion.java:3595)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.openHRegion(HRegion.java:3733)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.openRegion(OpenRegionHandler.java:332)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler.process(OpenRegionHandler.java:108)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:169)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:680)
> Caused by: java.lang.reflect.InvocationTargetException
>   at sun.reflect.GeneratedConstructorAccessor15.newInstance(Unknown 
> Source)
>   at 
> sun.reflect.DelegatingConstruct
