[jira] [Commented] (HBASE-13088) HBase native API was not released

2015-02-25 Thread Aditya Kishore (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14336175#comment-14336175
 ] 

Aditya Kishore commented on HBASE-13088:


The patches are attached to HBASE-1015 and its sub-tasks.

These patches will require a refresh since I have added support for the IBM JDK 
since the last post, and I will also need to rebase them on the current HEADs.

Should I prepare them for both 1.x and 2.x/master or just master?

> HBase native API was not released
> -
>
> Key: HBASE-13088
> URL: https://issues.apache.org/jira/browse/HBASE-13088
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
> Fix For: 2.0.0, 1.1.0
>
>
> [~busbey] noticed that the module hbase-native-client was not part of the 
> release candidate 1.0.0RC5 in the src tarball (nor in the binary 
> artifacts). 
> I think we added that as part of the C API, but without an implementation it 
> just sits there. 
> We should decide: 
> 1. Remove it
> 2. Add it to the release artifacts (src tarball from maven)
> Does anybody have a plan around it? A reference implementation? I do not want 
> to release it as an official C API without anything to back it up. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-13091) Split ZK Quorum on Master WebUI

2015-02-25 Thread Lars George (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14336178#comment-14336178
 ] 

Lars George edited comment on HBASE-13091 at 2/25/15 8:06 AM:
--

+1 on one per line, since you usually do not have _that_ many. The only advanced 
stuff would be to show the first N lines and then show a "\+" (or similar) to 
expand to see them all if there are more. Just saying.


was (Author: larsgeorge):
+1 on one per line, since you usually do not have _that_ many. The only advanced 
stuff would be to show the first N lines and then show a "+" (or similar) to 
expand to see them all if there are more. Just saying.

> Split ZK Quorum on Master WebUI
> ---
>
> Key: HBASE-13091
> URL: https://issues.apache.org/jira/browse/HBASE-13091
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.1, 0.98.10.1
>Reporter: Jean-Marc Spaggiari
>Assignee: Jean-Marc Spaggiari
>Priority: Minor
> Attachments: HBASE-13091-v0-trunk.patch, screenshot.png
>
>
> When the quorum has many ZK servers, this creates a very large column on the 
> Master WebUI and so greatly shrinks the others, wrapping the lines and creating 
> tall cells.
> Splitting the ZK quorum with one server per line will make it nicer and easier 
> to read.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13091) Split ZK Quorum on Master WebUI

2015-02-25 Thread Lars George (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14336178#comment-14336178
 ] 

Lars George commented on HBASE-13091:
-

+1 on one per line, since you usually do not have _that_ many. The only advanced 
stuff would be to show the first N lines and then show a "+" (or similar) to 
expand to see them all if there are more. Just saying.

> Split ZK Quorum on Master WebUI
> ---
>
> Key: HBASE-13091
> URL: https://issues.apache.org/jira/browse/HBASE-13091
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.1, 0.98.10.1
>Reporter: Jean-Marc Spaggiari
>Assignee: Jean-Marc Spaggiari
>Priority: Minor
> Attachments: HBASE-13091-v0-trunk.patch, screenshot.png
>
>
> When the quorum has many ZK servers, this creates a very large column on the 
> Master WebUI and so greatly shrinks the others, wrapping the lines and creating 
> tall cells.
> Splitting the ZK quorum with one server per line will make it nicer and easier 
> to read.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HBASE-13084) Add labels to VisibilityLabelsCache asynchronously causes TestShell flakey

2015-02-25 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan reopened HBASE-13084:


> Add labels to VisibilityLabelsCache asynchronously causes TestShell flakey
> --
>
> Key: HBASE-13084
> URL: https://issues.apache.org/jira/browse/HBASE-13084
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: zhangduo
>Assignee: zhangduo
> Fix For: 2.0.0, 1.1.0
>
> Attachments: HBASE-13084.patch, HBASE-13084_1.patch, 
> HBASE-13084_2.patch, HBASE-13084_2.patch, HBASE-13084_2.patch, 
> HBASE-13084_2.patch
>
>
> As discussed in HBASE-12953, we found this error in PreCommit log
> https://builds.apache.org/job/PreCommit-HBASE-Build/12918/artifact/hbase-shell/target/surefire-reports/org.apache.hadoop.hbase.client.TestShell-output.txt
> {noformat}
>   1) Error:
> test_The_get/put_methods_should_work_for_data_written_with_Visibility(Hbase::VisibilityLabelsAdminMethodsTest):
> ArgumentError: org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.security.visibility.InvalidLabelException: Label 
> 'TEST_VISIBILITY' doesn't exists
>   at 
> org.apache.hadoop.hbase.security.visibility.VisibilityController.setAuths(VisibilityController.java:808)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.VisibilityLabelsProtos$VisibilityLabelsService$1.setAuths(VisibilityLabelsProtos.java:6036)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.VisibilityLabelsProtos$VisibilityLabelsService.callMethod(VisibilityLabelsProtos.java:6219)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:6867)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1707)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1689)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31309)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2038)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>   at java.lang.Thread.run(Thread.java:744)
> 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-shell/src/main/ruby/hbase/visibility_labels.rb:84:in
>  `set_auths'
> ./src/test/ruby/hbase/visibility_labels_admin_test.rb:77:in 
> `test_The_get/put_methods_should_work_for_data_written_with_Visibility'
> org/jruby/RubyProc.java:270:in `call'
> org/jruby/RubyKernel.java:2105:in `send'
> org/jruby/RubyArray.java:1620:in `each'
> org/jruby/RubyArray.java:1620:in `each'
>   2) Error:
> test_The_set/clear_methods_should_work_with_authorizations(Hbase::VisibilityLabelsAdminMethodsTest):
> ArgumentError: No authentication set for the given user jenkins
> 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-shell/src/main/ruby/hbase/visibility_labels.rb:97:in
>  `get_auths'
> ./src/test/ruby/hbase/visibility_labels_admin_test.rb:57:in 
> `test_The_set/clear_methods_should_work_with_authorizations'
> org/jruby/RubyProc.java:270:in `call'
> org/jruby/RubyKernel.java:2105:in `send'
> org/jruby/RubyArray.java:1620:in `each'
> org/jruby/RubyArray.java:1620:in `each'
> {noformat}
> This is the test code
> {code:title=visibility_labels_admin_test.rb}
>   label = 'TEST_VISIBILITY'
>   user = org.apache.hadoop.hbase.security.User.getCurrent().getName();
>   visibility_admin.add_labels(label)
>   visibility_admin.set_auths(user, label)
> {code}
> It says the label does not exist when calling set_auths.
> Then I added some ugly logs in DefaultVisibilityLabelServiceImpl and 
> VisibilityLabelsCache.
> {code:title=DefaultVisibilityLabelServiceImpl.java}
>   public OperationStatus[] addLabels(List labels) throws IOException {
> ...
> if (mutateLabelsRegion(puts, finalOpStatus)) {
>   updateZk(true);
> }
> for (byte[] label : labels) {
>   String labelStr = Bytes.toString(label);
>   LOG.info(labelStr + "=" + 
> this.labelsCache.getLabelOrdinal(labelStr));
> }
> ...
>   }
> {code}
> {code:title=VisibilityLabelsCache.java}
>   public void refreshLabelsCache(byte[] data) throws IOException {
> LOG.info("refresh", new Exception());
> ...
>   }
> {code}
> And I modified TestVisibilityLabelsWithCustomVisLabService to use 
> DefaultVisibilityLabelServiceImpl, then collected the logs of setupBeforeClass
> {noformat}
> 2015-02-21 20:39:16,362 INFO  
> [B.defaultRpcServer.handler=0,queue=0,port=42678

[jira] [Updated] (HBASE-13084) Add labels to VisibilityLabelsCache asynchronously causes TestShell flakey

2015-02-25 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-13084:
---
Attachment: HBASE-13084_2_disable_test.patch

Just disabled the test by commenting out the script related to the failing test 
cases. Need to look into it again later; I will probably get back to it in a 
couple of days.

> Add labels to VisibilityLabelsCache asynchronously causes TestShell flakey
> --
>
> Key: HBASE-13084
> URL: https://issues.apache.org/jira/browse/HBASE-13084
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: zhangduo
>Assignee: zhangduo
> Fix For: 2.0.0, 1.1.0
>
> Attachments: HBASE-13084.patch, HBASE-13084_1.patch, 
> HBASE-13084_2.patch, HBASE-13084_2.patch, HBASE-13084_2.patch, 
> HBASE-13084_2.patch, HBASE-13084_2_disable_test.patch
>
>
> As discussed in HBASE-12953, we found this error in PreCommit log
> https://builds.apache.org/job/PreCommit-HBASE-Build/12918/artifact/hbase-shell/target/surefire-reports/org.apache.hadoop.hbase.client.TestShell-output.txt
> {noformat}
>   1) Error:
> test_The_get/put_methods_should_work_for_data_written_with_Visibility(Hbase::VisibilityLabelsAdminMethodsTest):
> ArgumentError: org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.security.visibility.InvalidLabelException: Label 
> 'TEST_VISIBILITY' doesn't exists
>   at 
> org.apache.hadoop.hbase.security.visibility.VisibilityController.setAuths(VisibilityController.java:808)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.VisibilityLabelsProtos$VisibilityLabelsService$1.setAuths(VisibilityLabelsProtos.java:6036)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.VisibilityLabelsProtos$VisibilityLabelsService.callMethod(VisibilityLabelsProtos.java:6219)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:6867)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1707)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1689)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31309)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2038)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>   at java.lang.Thread.run(Thread.java:744)
> 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-shell/src/main/ruby/hbase/visibility_labels.rb:84:in
>  `set_auths'
> ./src/test/ruby/hbase/visibility_labels_admin_test.rb:77:in 
> `test_The_get/put_methods_should_work_for_data_written_with_Visibility'
> org/jruby/RubyProc.java:270:in `call'
> org/jruby/RubyKernel.java:2105:in `send'
> org/jruby/RubyArray.java:1620:in `each'
> org/jruby/RubyArray.java:1620:in `each'
>   2) Error:
> test_The_set/clear_methods_should_work_with_authorizations(Hbase::VisibilityLabelsAdminMethodsTest):
> ArgumentError: No authentication set for the given user jenkins
> 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-shell/src/main/ruby/hbase/visibility_labels.rb:97:in
>  `get_auths'
> ./src/test/ruby/hbase/visibility_labels_admin_test.rb:57:in 
> `test_The_set/clear_methods_should_work_with_authorizations'
> org/jruby/RubyProc.java:270:in `call'
> org/jruby/RubyKernel.java:2105:in `send'
> org/jruby/RubyArray.java:1620:in `each'
> org/jruby/RubyArray.java:1620:in `each'
> {noformat}
> This is the test code
> {code:title=visibility_labels_admin_test.rb}
>   label = 'TEST_VISIBILITY'
>   user = org.apache.hadoop.hbase.security.User.getCurrent().getName();
>   visibility_admin.add_labels(label)
>   visibility_admin.set_auths(user, label)
> {code}
> It says the label does not exist when calling set_auths.
> Then I added some ugly logs in DefaultVisibilityLabelServiceImpl and 
> VisibilityLabelsCache.
> {code:title=DefaultVisibilityLabelServiceImpl.java}
>   public OperationStatus[] addLabels(List labels) throws IOException {
> ...
> if (mutateLabelsRegion(puts, finalOpStatus)) {
>   updateZk(true);
> }
> for (byte[] label : labels) {
>   String labelStr = Bytes.toString(label);
>   LOG.info(labelStr + "=" + 
> this.labelsCache.getLabelOrdinal(labelStr));
> }
> ...
>   }
> {code}
> {code:title=VisibilityLabelsCache.java}
>   public void refreshLabelsCache(byte[] data) throws IOException {
> LOG.info("refresh", new Exception());
> ...
>   }
> {co

[jira] [Created] (HBASE-13098) HBase Connection Control

2015-02-25 Thread Ashish Singhi (JIRA)
Ashish Singhi created HBASE-13098:
-

 Summary: HBase Connection Control
 Key: HBASE-13098
 URL: https://issues.apache.org/jira/browse/HBASE-13098
 Project: HBase
  Issue Type: New Feature
  Components: security
Affects Versions: 0.98.10
Reporter: Ashish Singhi
Assignee: Ashish Singhi


It is desirable to limit the number of client connections permitted to the HBase 
server, controlled by certain system variables/parameters. Too many connections to 
the HBase server imply too many queries and MR jobs running on HBase. This can slow 
down the performance of the system and lead to denial of service, so such 
connections need to be controlled. Using too many connections may just cause 
thrashing rather than getting more useful work done.
This is kind of inspired by 
http://www.ebaytechblog.com/2014/08/21/quality-of-service-in-hadoop/#.VO2JXXyUe9y
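
As a rough illustration of the idea only (the class name, config key, and method 
names below are hypothetical and are not taken from the attached design or patch), 
a per-user connection cap could look like this:

{code:title=ConnectionLimiter.java|borderStyle=solid}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

/** Hypothetical sketch: caps the number of concurrent connections per user. */
public class ConnectionLimiter {
  /** Illustrative config key, not an actual HBase property. */
  public static final String MAX_CONN_KEY = "hbase.server.max.connections.per.user";

  private final int maxPerUser;
  private final ConcurrentMap<String, AtomicInteger> counts =
      new ConcurrentHashMap<String, AtomicInteger>();

  public ConnectionLimiter(int maxPerUser) {
    this.maxPerUser = maxPerUser;
  }

  /** Called when a client connects; returns false if the user is over the limit. */
  public boolean connectionOpened(String user) {
    AtomicInteger count = counts.get(user);
    if (count == null) {
      AtomicInteger fresh = new AtomicInteger();
      count = counts.putIfAbsent(user, fresh);
      if (count == null) {
        count = fresh;
      }
    }
    if (count.incrementAndGet() > maxPerUser) {
      count.decrementAndGet();  // over the limit: roll back and reject
      return false;
    }
    return true;
  }

  /** Called when a client disconnects. */
  public void connectionClosed(String user) {
    AtomicInteger count = counts.get(user);
    if (count != null) {
      count.decrementAndGet();
    }
  }
}
{code}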



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13099) Scans as in DynamoDB

2015-02-25 Thread Nicolas Liochon (JIRA)
Nicolas Liochon created HBASE-13099:
---

 Summary: Scans as in DynamoDB
 Key: HBASE-13099
 URL: https://issues.apache.org/jira/browse/HBASE-13099
 Project: HBase
  Issue Type: Brainstorming
  Components: Client, regionserver
Reporter: Nicolas Liochon


cc: [~saint@gmail.com] - as discussed offline.

DynamoDB has a very simple way to manage scans server side:
??citation??
The data returned from a Query or Scan operation is limited to 1 MB; this means 
that if you scan a table that has more than 1 MB of data, you'll need to 
perform another Scan operation to continue to the next 1 MB of data in the 
table.

If you query or scan for specific attributes that match values that amount to 
more than 1 MB of data, you'll need to perform another Query or Scan request 
for the next 1 MB of data. To do this, take the LastEvaluatedKey value from the 
previous request, and use that value as the ExclusiveStartKey in the next 
request. This will let you progressively query or scan for new data in 1 MB 
increments.

When the entire result set from a Query or Scan has been processed, the 
LastEvaluatedKey is null. This indicates that the result set is complete (i.e. 
the operation processed the “last page” of data).
??citation??

This means that there is no state server side: the work is done client side.
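
For contrast with how HBase scans work today, here is a minimal sketch of that 
client-driven paging model; ScanClient and ScanPage are invented stand-ins, not 
DynamoDB or HBase types.

{code:title=StatelessPagingSketch.java|borderStyle=solid}
/** Invented stand-in for a 1 MB-limited scan API (not a real DynamoDB or HBase type). */
interface ScanClient {
  /** Returns at most ~1 MB of rows starting after exclusiveStartKey (null = from the beginning). */
  ScanPage scan(byte[] exclusiveStartKey);
}

/** Invented stand-in for one page of results. */
class ScanPage {
  byte[][] rows;            // the rows in this page
  byte[] lastEvaluatedKey;  // null once the result set is complete
}

public class StatelessPagingSketch {
  static void scanAll(ScanClient client) {
    byte[] startKey = null;
    do {
      ScanPage page = client.scan(startKey);
      for (byte[] row : page.rows) {
        process(row);
      }
      // All scan state lives here on the client; the server keeps nothing between calls.
      startKey = page.lastEvaluatedKey;
    } while (startKey != null);
  }

  static void process(byte[] row) {
    // consume the row
  }
}
{code}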



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13098) HBase Connection Control

2015-02-25 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-13098:
--
Attachment: HBase Connection Control.pdf

Attached a simple design document.

> HBase Connection Control
> 
>
> Key: HBASE-13098
> URL: https://issues.apache.org/jira/browse/HBASE-13098
> Project: HBase
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 0.98.10
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Attachments: HBase Connection Control.pdf
>
>
> It is desirable to set the limit on the number of client connections 
> permitted to the HBase server by controlling with certain system 
> variables/parameters. Too many connections to the HBase server imply too many 
> queries and MR jobs running on HBase. This can slow down the performance of 
> the system and lead to denial of service. Hence such connections need to be 
> controlled. Using too many connections may just cause thrashing rather than 
> get more useful work done.
> This is kind of inspired by 
> http://www.ebaytechblog.com/2014/08/21/quality-of-service-in-hadoop/#.VO2JXXyUe9y



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13084) Add labels to VisibilityLabelsCache asynchronously causes TestShell flakey

2015-02-25 Thread zhangduo (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14336326#comment-14336326
 ] 

zhangduo commented on HBASE-13084:
--

[~ram_krish] I think the problem is what you have posted above. The actual ZK 
update event is not fired since the ZooKeeper EventThread is stuck in 
replication-related operations.
{noformat}
2015-02-25 17:51:40,714 WARN  [main-EventThread] 
zookeeper.RecoverableZooKeeper(144): Unable to create ZooKeeper Connection
java.net.UnknownHostException: zk2
at java.net.InetAddress.getAllByName0(InetAddress.java:1250)
at java.net.InetAddress.getAllByName(InetAddress.java:1162)
at java.net.InetAddress.getAllByName(InetAddress.java:1098)
at 
org.apache.zookeeper.client.StaticHostProvider.(StaticHostProvider.java:61)
at org.apache.zookeeper.ZooKeeper.(ZooKeeper.java:445)
at org.apache.zookeeper.ZooKeeper.(ZooKeeper.java:380)
at 
org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.checkZk(RecoverableZooKeeper.java:142)
at 
org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:222)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:479)
at 
org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:65)
at 
org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:102)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:884)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.(ConnectionManager.java:644)
at sun.reflect.GeneratedConstructorAccessor25.newInstance(Unknown 
Source)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at 
org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
at 
org.apache.hadoop.hbase.client.ConnectionManager.createConnection(ConnectionManager.java:424)
at 
org.apache.hadoop.hbase.client.ConnectionManager.createConnectionInternal(ConnectionManager.java:327)
at 
org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:147)
at 
org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint.init(HBaseInterClusterReplicationEndpoint.java:85)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.getReplicationSource(ReplicationSourceManager.java:422)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.addSource(ReplicationSourceManager.java:248)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.peerListChanged(ReplicationSourceManager.java:515)
at 
org.apache.hadoop.hbase.replication.ReplicationTrackerZKImpl$PeersWatcher.nodeChildrenChanged(ReplicationTrackerZKImpl.java:187)
at 
org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:419)
at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
{noformat}

So if it fails fast enough (we can see this):
{noformat}
2015-02-25 17:52:05,580 ERROR [main-EventThread] 
zookeeper.ZooKeeperWatcher(521): hconnection-0x78acc5540x0, 
quorum=zk2:2182,zk1:2182,zk3:2182, baseZNode=/hbase-prod Received unexpected 
KeeperException, re-throwing exception
org.apache.zookeeper.KeeperException$OperationTimeoutException: KeeperErrorCode 
= OperationTimeout
at 
org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.checkZk(RecoverableZooKeeper.java:145)
at 
org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:222)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:479)
at 
org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:65)
at 
org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:102)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:884)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.(ConnectionManager.java:644)
at sun.reflect.GeneratedConstructorAccessor25.newInstance(Unknown 
Source)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at 
org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
at 
org.apache.hadoop.hbase.client.Connec

[jira] [Updated] (HBASE-12990) MetaScanner should be replaced by MetaTableAccessor

2015-02-25 Thread Andrey Stepachev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Stepachev updated HBASE-12990:
-
Attachment: HBASE-12990.v5.patch

retry

> MetaScanner should be replaced by MetaTableAccessor
> ---
>
> Key: HBASE-12990
> URL: https://issues.apache.org/jira/browse/HBASE-12990
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Andrey Stepachev
>Assignee: Andrey Stepachev
> Attachments: HBASE-12990.patch, HBASE-12990.v2.patch, 
> HBASE-12990.v3.patch, HBASE-12990.v4.patch, HBASE-12990.v5.patch, 
> HBASE-12990.v5.patch
>
>
> MetaScanner and MetaTableAccessor do similar things, but it seems they tend to 
> diverge. Let's have only one thing to query META.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12990) MetaScanner should be replaced by MetaTableAccessor

2015-02-25 Thread Andrey Stepachev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Stepachev updated HBASE-12990:
-
Status: Open  (was: Patch Available)

> MetaScanner should be replaced by MetaTableAccessor
> ---
>
> Key: HBASE-12990
> URL: https://issues.apache.org/jira/browse/HBASE-12990
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Andrey Stepachev
>Assignee: Andrey Stepachev
> Attachments: HBASE-12990.patch, HBASE-12990.v2.patch, 
> HBASE-12990.v3.patch, HBASE-12990.v4.patch, HBASE-12990.v5.patch, 
> HBASE-12990.v5.patch
>
>
> MetaScanner and MetaTableAccessor do similar things, but it seems they tend to 
> diverge. Let's have only one thing to query META.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12990) MetaScanner should be replaced by MetaTableAccessor

2015-02-25 Thread Andrey Stepachev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Stepachev updated HBASE-12990:
-
Status: Patch Available  (was: Open)

> MetaScanner should be replaced by MetaTableAccessor
> ---
>
> Key: HBASE-12990
> URL: https://issues.apache.org/jira/browse/HBASE-12990
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Andrey Stepachev
>Assignee: Andrey Stepachev
> Attachments: HBASE-12990.patch, HBASE-12990.v2.patch, 
> HBASE-12990.v3.patch, HBASE-12990.v4.patch, HBASE-12990.v5.patch, 
> HBASE-12990.v5.patch
>
>
> MetaScanner and MetaTableAccessor do similar things, but it seems they tend to 
> diverge. Let's have only one thing to query META.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13084) Add labels to VisibilityLabelsCache asynchronously causes TestShell flakey

2015-02-25 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14336422#comment-14336422
 ] 

ramkrishna.s.vasudevan commented on HBASE-13084:


What I found is that even a sleep of 10 secs was not enough at times. The 
reason should be the ZK error that we got in the replication-related test 
cases. That is why I went ahead with commenting this test case out, to look at 
what is happening with the replication test cases in TestShell.
Maybe a better idea would be to create a TestReplicationShell and move those 
test cases into a separate class so that the other test cases keep running fine?

> Add labels to VisibilityLabelsCache asynchronously causes TestShell flakey
> --
>
> Key: HBASE-13084
> URL: https://issues.apache.org/jira/browse/HBASE-13084
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: zhangduo
>Assignee: zhangduo
> Fix For: 2.0.0, 1.1.0
>
> Attachments: HBASE-13084.patch, HBASE-13084_1.patch, 
> HBASE-13084_2.patch, HBASE-13084_2.patch, HBASE-13084_2.patch, 
> HBASE-13084_2.patch, HBASE-13084_2_disable_test.patch
>
>
> As discussed in HBASE-12953, we found this error in PreCommit log
> https://builds.apache.org/job/PreCommit-HBASE-Build/12918/artifact/hbase-shell/target/surefire-reports/org.apache.hadoop.hbase.client.TestShell-output.txt
> {noformat}
>   1) Error:
> test_The_get/put_methods_should_work_for_data_written_with_Visibility(Hbase::VisibilityLabelsAdminMethodsTest):
> ArgumentError: org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.security.visibility.InvalidLabelException: Label 
> 'TEST_VISIBILITY' doesn't exists
>   at 
> org.apache.hadoop.hbase.security.visibility.VisibilityController.setAuths(VisibilityController.java:808)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.VisibilityLabelsProtos$VisibilityLabelsService$1.setAuths(VisibilityLabelsProtos.java:6036)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.VisibilityLabelsProtos$VisibilityLabelsService.callMethod(VisibilityLabelsProtos.java:6219)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:6867)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:1707)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:1689)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31309)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2038)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>   at java.lang.Thread.run(Thread.java:744)
> 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-shell/src/main/ruby/hbase/visibility_labels.rb:84:in
>  `set_auths'
> ./src/test/ruby/hbase/visibility_labels_admin_test.rb:77:in 
> `test_The_get/put_methods_should_work_for_data_written_with_Visibility'
> org/jruby/RubyProc.java:270:in `call'
> org/jruby/RubyKernel.java:2105:in `send'
> org/jruby/RubyArray.java:1620:in `each'
> org/jruby/RubyArray.java:1620:in `each'
>   2) Error:
> test_The_set/clear_methods_should_work_with_authorizations(Hbase::VisibilityLabelsAdminMethodsTest):
> ArgumentError: No authentication set for the given user jenkins
> 
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-shell/src/main/ruby/hbase/visibility_labels.rb:97:in
>  `get_auths'
> ./src/test/ruby/hbase/visibility_labels_admin_test.rb:57:in 
> `test_The_set/clear_methods_should_work_with_authorizations'
> org/jruby/RubyProc.java:270:in `call'
> org/jruby/RubyKernel.java:2105:in `send'
> org/jruby/RubyArray.java:1620:in `each'
> org/jruby/RubyArray.java:1620:in `each'
> {noformat}
> This is the test code
> {code:title=visibility_labels_admin_test.rb}
>   label = 'TEST_VISIBILITY'
>   user = org.apache.hadoop.hbase.security.User.getCurrent().getName();
>   visibility_admin.add_labels(label)
>   visibility_admin.set_auths(user, label)
> {code}
> It says the label does not exist when calling set_auths.
> Then I added some ugly logs in DefaultVisibilityLabelServiceImpl and 
> VisibilityLabelsCache.
> {code:title=DefaultVisibilityLabelServiceImpl.java}
>   public OperationStatus[] addLabels(List labels) throws IOException {
> ...
> if (mutateLabelsRegion(puts, finalOpStatus)) {
>   updateZk(true);
> }
> for (byte[] label : labels) {
>   String labelStr = Bytes.toString(label)

[jira] [Commented] (HBASE-12990) MetaScanner should be replaced by MetaTableAccessor

2015-02-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14336457#comment-14336457
 ] 

Hadoop QA commented on HBASE-12990:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12700718/HBASE-12990.v5.patch
  against master branch at commit c651271f5759f39f28209a50ab88a62d86b7.
  ATTACHMENT ID: 12700718

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 35 new 
or modified tests.
{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.regionserver.TestEndToEndSplitTransaction

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at org.apache.hadoop.hbase.client.Result.getColumnLatestCell(Result.java:322)
at 
org.apache.hadoop.hbase.namespace.TestNamespaceAuditor.testRegionMerge(TestNamespaceAuditor.java:307)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12959//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12959//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12959//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12959//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12959//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12959//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12959//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12959//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12959//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12959//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12959//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12959//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12959//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12959//console

This message is automatically generated.

> MetaScanner should be replaced by MetaTableAccessor
> ---
>
> Key: HBASE-12990
> URL: https://issues.apache.org/jira/browse/HBASE-12990
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Andrey Stepachev
>Assignee: Andrey Stepachev
> Attachments: HBASE-12990.patch, HBASE-12990.v2.patch, 
> HBASE-12990.v3.patch, HBASE-12990.v4.patch, HBASE-12990.v5.patch, 
> HBASE-12990.v5.patch
>
>
> MetaScanner and MetaTableAccessor do similar things, but it seems they tend to 
> diverge. Let's have only one thing to query META.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13084) Add labels to VisibilityLabelsCache asynchronously causes TestShell flakey

2015-02-25 Thread zhangduo (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14336458#comment-14336458
 ] 

zhangduo commented on HBASE-13084:
--

I tried to set a small retry interval but met other problems...
{noformat}
14598 2015-02-25 20:54:40,797 WARN  [main-EventThread] zookeeper.ZKUtil(482): 
hconnection-0x6b387ba30x0, quorum=server1.cie.com:2181, baseZNode=/hbase Unable 
to set watcher on znode (/hbase/hbaseid)
14599 org.apache.zookeeper.KeeperException$ConnectionLossException: 
KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
14600 at 
org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
14601 at 
org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
14602 at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045)
14603 at 
org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:222)
14604 at 
org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:479)
14605 at 
org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:65)
14606 at 
org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:102)
14607 at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:884)
14608 at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.(ConnectionManager.java:644)
14609 at sun.reflect.GeneratedConstructorAccessor26.newInstance(Unknown 
Source)
14610 at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
14611 at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
14612 at 
org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
14613 at 
org.apache.hadoop.hbase.client.ConnectionManager.createConnection(ConnectionManager.java:424)
14614 at 
org.apache.hadoop.hbase.client.ConnectionManager.createConnectionInternal(ConnectionManager.java:327)
14615 at 
org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:147)
14616 at 
org.apache.hadoop.hbase.replication.regionserver.HBaseInterClusterReplicationEndpoint.init(HBaseInterClusterReplicationEndpoint.java:85)
14617 at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.getReplicationSource(ReplicationSourceManager.java:422)
14618 at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.addSource(ReplicationSourceManager.java:248)
14619 at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager.peerListChanged(ReplicationSourceManager.java:515)
14620 at 
org.apache.hadoop.hbase.replication.ReplicationTrackerZKImpl$PeersWatcher.nodeChildrenChanged(ReplicationTrackerZKImpl.java:187)
14621 at 
org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.process(ZooKeeperWatcher.java:426)
14622 at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
14623 at 
org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
{noformat}

Now I agree to separate the replication shell tests into another unit test. Shall 
we close this issue and open a new one to do it?
Thanks. [~ram_krish]

> Add labels to VisibilityLabelsCache asynchronously causes TestShell flakey
> --
>
> Key: HBASE-13084
> URL: https://issues.apache.org/jira/browse/HBASE-13084
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: zhangduo
>Assignee: zhangduo
> Fix For: 2.0.0, 1.1.0
>
> Attachments: HBASE-13084.patch, HBASE-13084_1.patch, 
> HBASE-13084_2.patch, HBASE-13084_2.patch, HBASE-13084_2.patch, 
> HBASE-13084_2.patch, HBASE-13084_2_disable_test.patch
>
>
> As discussed in HBASE-12953, we found this error in PreCommit log
> https://builds.apache.org/job/PreCommit-HBASE-Build/12918/artifact/hbase-shell/target/surefire-reports/org.apache.hadoop.hbase.client.TestShell-output.txt
> {noformat}
>   1) Error:
> test_The_get/put_methods_should_work_for_data_written_with_Visibility(Hbase::VisibilityLabelsAdminMethodsTest):
> ArgumentError: org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.security.visibility.InvalidLabelException: Label 
> 'TEST_VISIBILITY' doesn't exists
>   at 
> org.apache.hadoop.hbase.security.visibility.VisibilityController.setAuths(VisibilityController.java:808)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.VisibilityLabelsProtos$VisibilityLabelsService$1.setAuths(VisibilityLabelsProtos.java:6036)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.Vis

[jira] [Updated] (HBASE-13098) HBase Connection Control

2015-02-25 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-13098:
--
Attachment: HBASE-13098.patch

> HBase Connection Control
> 
>
> Key: HBASE-13098
> URL: https://issues.apache.org/jira/browse/HBASE-13098
> Project: HBase
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 0.98.10
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Attachments: HBASE-13098.patch, HBase Connection Control.pdf
>
>
> It is desirable to set the limit on the number of client connections 
> permitted to the HBase server by controlling with certain system 
> variables/parameters. Too many connections to the HBase server imply too many 
> queries and MR jobs running on HBase. This can slow down the performance of 
> the system and lead to denial of service. Hence such connections need to be 
> controlled. Using too many connections may just cause thrashing rather than 
> get more useful work done.
> This is kind of inspired by 
> http://www.ebaytechblog.com/2014/08/21/quality-of-service-in-hadoop/#.VO2JXXyUe9y



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13098) HBase Connection Control

2015-02-25 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-13098:
--
Fix Version/s: 0.98.11
   1.1.0
   1.0.1
   2.0.0
   Status: Patch Available  (was: Open)

> HBase Connection Control
> 
>
> Key: HBASE-13098
> URL: https://issues.apache.org/jira/browse/HBASE-13098
> Project: HBase
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 0.98.10
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11
>
> Attachments: HBASE-13098.patch, HBase Connection Control.pdf
>
>
> It is desirable to set the limit on the number of client connections 
> permitted to the HBase server by controlling with certain system 
> variables/parameters. Too many connections to the HBase server imply too many 
> queries and MR jobs running on HBase. This can slow down the performance of 
> the system and lead to denial of service. Hence such connections need to be 
> controlled. Using too many connections may just cause thrashing rather than 
> get more useful work done.
> This is kind of inspired by 
> http://www.ebaytechblog.com/2014/08/21/quality-of-service-in-hadoop/#.VO2JXXyUe9y



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12990) MetaScanner should be replaced by MetaTableAccessor

2015-02-25 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-12990:
--
Attachment: HBASE-12990.v5.patch

> MetaScanner should be replaced by MetaTableAccessor
> ---
>
> Key: HBASE-12990
> URL: https://issues.apache.org/jira/browse/HBASE-12990
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Andrey Stepachev
>Assignee: Andrey Stepachev
> Attachments: HBASE-12990.patch, HBASE-12990.v2.patch, 
> HBASE-12990.v3.patch, HBASE-12990.v4.patch, HBASE-12990.v5.patch, 
> HBASE-12990.v5.patch, HBASE-12990.v5.patch
>
>
> MetaScanner and MetaTableAccessor do similar things, but it seems they tend to 
> diverge. Let's have only one thing to query META.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13098) HBase Connection Control

2015-02-25 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14336481#comment-14336481
 ] 

Ashish Singhi commented on HBASE-13098:
---

Attached patch.
Please review and share your thoughts or suggestions if you have any.
Thanks

> HBase Connection Control
> 
>
> Key: HBASE-13098
> URL: https://issues.apache.org/jira/browse/HBASE-13098
> Project: HBase
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 0.98.10
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11
>
> Attachments: HBASE-13098.patch, HBase Connection Control.pdf
>
>
> It is desirable to set the limit on the number of client connections 
> permitted to the HBase server by controlling with certain system 
> variables/parameters. Too many connections to the HBase server imply too many 
> queries and MR jobs running on HBase. This can slow down the performance of 
> the system and lead to denial of service. Hence such connections need to be 
> controlled. Using too many connections may just cause thrashing rather than 
> get more useful work done.
> This is kind of inspired by 
> http://www.ebaytechblog.com/2014/08/21/quality-of-service-in-hadoop/#.VO2JXXyUe9y



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13099) Scans as in DynamoDB

2015-02-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14336521#comment-14336521
 ] 

stack commented on HBASE-13099:
---

We use the state of the Result (null, empty) to flag the state of the scan on the 
client side. [~jonathan.lawlor] is adding a 'partial' flag on Result now to do 
'chunking', to indicate the Result is a partial of the row, which a client probably 
doesn't care about but the running Scan does (this flag is overloaded).

Where would we tag on the LastEvaluatedKey? Would it just be the last KV in 
the Result? Could the client-side scan read this and use it when going back to the 
server?

It would be good to disconnect client and server.

On the server side, when a lease expires, we do this to clean up outstanding 
region scanners:

{code}
@Override
public synchronized void close() {
  if (storeHeap != null) {
    storeHeap.close();
    storeHeap = null;
  }
  if (joinedHeap != null) {
    joinedHeap.close();
    joinedHeap = null;
  }
  // no need to synchronize here.
  scannerReadPoints.remove(this);
  this.filterClosed = true;
}
{code}

We probably need to keep the above, or at least revisit it too. A timer on the 
server-side scanner that returns after we've done "10 seconds" or "1MB" is coming 
up in issues elsewhere. The server-side lease-checking facility might be the place 
to do this -- it already tries to clean up expired server-side scanners. It could 
periodically check outstanding scans for where they are. Probably better to just 
rip out this lease-checking thing and move the checks into the region scanner 
itself; it will know where it is, so rather than having a foreign thread interrupt 
it, it could interrupt itself (works unless the scanner gets stuck -- but I'd guess 
a Lease interrupting a running scanner probably doesn't work either).
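
For what it's worth, a rough client-side sketch of the LastEvaluatedKey idea on 
top of the existing client API, using the last row of the previous batch as the 
restart point; this is only an illustration, not a proposal for the final 
interface, and the page size here stands in for DynamoDB's 1 MB limit.

{code:title=ClientDrivenScanSketch.java|borderStyle=solid}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ClientDrivenScanSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table table = connection.getTable(TableName.valueOf("t"))) {
      int pageSize = 100;     // stand-in for the 1 MB limit
      byte[] startRow = null; // plays the role of ExclusiveStartKey
      while (true) {
        Scan scan = new Scan();
        if (startRow != null) {
          // Resume just past the last row we saw; all scan state lives on the client.
          scan.setStartRow(Bytes.add(startRow, new byte[] { 0 }));
        }
        Result[] page;
        try (ResultScanner scanner = table.getScanner(scan)) {
          page = scanner.next(pageSize);
        }
        if (page.length == 0) {
          break;              // "LastEvaluatedKey" is null: result set complete
        }
        for (Result r : page) {
          // process the row ...
        }
        startRow = page[page.length - 1].getRow();
      }
    }
  }
}
{code}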

> Scans as in DynamoDB
> 
>
> Key: HBASE-13099
> URL: https://issues.apache.org/jira/browse/HBASE-13099
> Project: HBase
>  Issue Type: Brainstorming
>  Components: Client, regionserver
>Reporter: Nicolas Liochon
>
> cc: [~saint@gmail.com] - as discussed offline.
> DynamoDB has a very simple way to manage scans server side:
> ??citation??
> The data returned from a Query or Scan operation is limited to 1 MB; this 
> means that if you scan a table that has more than 1 MB of data, you'll need 
> to perform another Scan operation to continue to the next 1 MB of data in the 
> table.
> If you query or scan for specific attributes that match values that amount to 
> more than 1 MB of data, you'll need to perform another Query or Scan request 
> for the next 1 MB of data. To do this, take the LastEvaluatedKey value from 
> the previous request, and use that value as the ExclusiveStartKey in the next 
> request. This will let you progressively query or scan for new data in 1 MB 
> increments.
> When the entire result set from a Query or Scan has been processed, the 
> LastEvaluatedKey is null. This indicates that the result set is complete 
> (i.e. the operation processed the “last page” of data).
> ??citation??
> This means that there is no state server side: the work is done client side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13098) HBase Connection Control

2015-02-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14336546#comment-14336546
 ] 

stack commented on HBASE-13098:
---

Please review current state of RPC and fold into your design what is missing. 
We already bound inbound traffic.  Point out what is wrong w/ our approach.  
Thanks.

> HBase Connection Control
> 
>
> Key: HBASE-13098
> URL: https://issues.apache.org/jira/browse/HBASE-13098
> Project: HBase
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 0.98.10
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11
>
> Attachments: HBASE-13098.patch, HBase Connection Control.pdf
>
>
> It is desirable to set the limit on the number of client connections 
> permitted to the HBase server by controlling with certain system 
> variables/parameters. Too many connections to the HBase server imply too many 
> queries and MR jobs running on HBase. This can slow down the performance of 
> the system and lead to denial of service. Hence such connections need to be 
> controlled. Using too many connections may just cause thrashing rather than 
> get more useful work done.
> This is kind of inspired by 
> http://www.ebaytechblog.com/2014/08/21/quality-of-service-in-hadoop/#.VO2JXXyUe9y



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13098) HBase Connection Control

2015-02-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14336564#comment-14336564
 ] 

Hadoop QA commented on HBASE-13098:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12700759/HBASE-13098.patch
  against master branch at commit c651271f5759f39f28209a50ab88a62d86b7.
  ATTACHMENT ID: 12700759

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.
{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 6 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
1939 checkstyle errors (more than the master's current 1938 errors).

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+LOG.warn("Total connections number is: " + totalConnectionsNum + 
", it is a invalid value.");
+fail("Master username is assumed system call. ConnectionControl 
connectionFinished should succeed.");
+fail("RegionServer username is assumed system call. ConnectionControl 
connectionFinished should return true");
+  fail("Master username is assumed system call. ConnectionControl 
connectionFinished should return true");
+  fail("RegionServer username is assumed system call. ConnectionControl 
connectionFinished should return true");

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.TestCheckTestClasses

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12960//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12960//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12960//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12960//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12960//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12960//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12960//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12960//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12960//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12960//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12960//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12960//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12960//artifact/patchprocess/checkstyle-aggregate.html

Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12960//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12960//console

This message is automatically generated.

> HBase Connection Control
> 
>
> Key: HBASE-13098
> URL: https://issues.apache.org/jira/browse/HBASE-13098
> Project: HBase
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 0.98.10
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11
>
> Attachments: HBASE-13098.patch, HBase Connection Control.

[jira] [Updated] (HBASE-12990) MetaScanner should be replaced by MetaTableAccessor

2015-02-25 Thread Andrey Stepachev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Stepachev updated HBASE-12990:
-
Status: Open  (was: Patch Available)

> MetaScanner should be replaced by MetaTableAccessor
> ---
>
> Key: HBASE-12990
> URL: https://issues.apache.org/jira/browse/HBASE-12990
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Andrey Stepachev
>Assignee: Andrey Stepachev
> Attachments: HBASE-12990.patch, HBASE-12990.v2.patch, 
> HBASE-12990.v3.patch, HBASE-12990.v4.patch, HBASE-12990.v5.patch, 
> HBASE-12990.v5.patch, HBASE-12990.v5.patch
>
>
> MetaScanner and MetaTableAccessor do similar things, but it seems they tend to 
> diverge. Let's have only one thing to query META.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12990) MetaScanner should be replaced by MetaTableAccessor

2015-02-25 Thread Andrey Stepachev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Stepachev updated HBASE-12990:
-
Attachment: HBASE-12990.v6.patch

MetaTableAccessor.getTableRegions has different defaults: by default it returns 
offline regions too, but its counterpart in MetaScanner has the opposite 
behaviour and does not return offline/split regions by default. That broke a test.
Fixed that and validated that all places use the correct defaults (or explicitly 
specify inclusion of offline/split regions).

> MetaScanner should be replaced by MetaTableAccessor
> ---
>
> Key: HBASE-12990
> URL: https://issues.apache.org/jira/browse/HBASE-12990
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Andrey Stepachev
>Assignee: Andrey Stepachev
> Attachments: HBASE-12990.patch, HBASE-12990.v2.patch, 
> HBASE-12990.v3.patch, HBASE-12990.v4.patch, HBASE-12990.v5.patch, 
> HBASE-12990.v5.patch, HBASE-12990.v5.patch, HBASE-12990.v6.patch
>
>
> MetaScanner and MetaTableAccessor do similar things, but it seems they tend to 
> diverge. Let's have only one thing to query META.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13097) Netty PooledByteBufAllocator cause OOM in some unit test

2015-02-25 Thread Jurriaan Mous (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14336635#comment-14336635
 ] 

Jurriaan Mous commented on HBASE-13097:
---

I was certainly aware that having multiple connections is heavy. That's why I 
cleaned up a lot of Connections in tests in the past in HBASE-12796. Netty 
recommends recycling EventLoopGroups across all bootstrap creation, but 
configurations could differ between connections, so sharing them is not easy. 
Maybe we could detect the usage of the same config options and recycle 
RpcClients? The best option seems to be to limit the number of AsyncRpcClient 
(and thus Connection) creations.

[~Apache9] Are you sure each bootstrap has its own PooledByteBufAllocator? 
The bootstrap creation links to the default static PooledByteBufAllocator 
instance, so it should be reused. I think you meant abundant EventLoopGroup 
creation, where each group has its own thread pool. 

{code:title=AsyncRpcClient.java|borderStyle=solid}
bootstrap.group(eventLoopGroup).channel(socketChannelClass)
.option(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT)
{code}
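
For illustration, a minimal standalone Netty sketch (not HBase code; class 
names are placeholders) of the reuse being discussed: several Bootstraps 
sharing one EventLoopGroup and the static default pooled allocator, so building 
more clients does not create more thread pools or allocators.

{code:title=SharedEventLoopGroupExample.java|borderStyle=solid}
import io.netty.bootstrap.Bootstrap;
import io.netty.buffer.PooledByteBufAllocator;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;

public class SharedEventLoopGroupExample {
  // One group (and thus one thread pool) shared by every bootstrap.
  private static final EventLoopGroup SHARED_GROUP = new NioEventLoopGroup(1);

  static Bootstrap newBootstrap() {
    return new Bootstrap()
        .group(SHARED_GROUP)   // reuse the group instead of creating one per client
        .channel(NioSocketChannel.class)
        .option(ChannelOption.ALLOCATOR, PooledByteBufAllocator.DEFAULT) // single static allocator
        .handler(new ChannelInitializer<SocketChannel>() {
          @Override
          protected void initChannel(SocketChannel ch) {
            // pipeline setup would go here
          }
        });
  }

  public static void main(String[] args) {
    Bootstrap a = newBootstrap();
    Bootstrap b = newBootstrap();
    // Both a and b are wired to SHARED_GROUP and PooledByteBufAllocator.DEFAULT,
    // so creating more clients does not create more threads or allocators.
    SHARED_GROUP.shutdownGracefully();
  }
}
{code}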

> Netty PooledByteBufAllocator cause OOM in some unit test
> 
>
> Key: HBASE-13097
> URL: https://issues.apache.org/jira/browse/HBASE-13097
> Project: HBase
>  Issue Type: Bug
>  Components: IPC/RPC, test
>Affects Versions: 2.0.0, 1.1.0
>Reporter: zhangduo
>
> In some unit tests (such as TestAcidGuarantees) we create multiple Connection 
> instances. If we use AsyncRpcClient, then there will be multiple netty 
> Bootstraps and every Bootstrap has its own PooledByteBufAllocator.
> I haven't read the code closely, but it uses some thread-local techniques, and 
> jmap shows io.netty.buffer.PoolThreadCache$MemoryRegionCache$Entry is the 
> biggest thing on the heap.
> See 
> https://builds.apache.org/job/HBase-TRUNK/6168/artifact/hbase-server/target/surefire-reports/org.apache.hadoop.hbase.TestAcidGuarantees-output.txt
> {noformat}
> 2015-02-24 23:50:29,704 WARN  [JvmPauseMonitor] 
> util.JvmPauseMonitor$Monitor(167): Detected pause in JVM or host machine (eg 
> GC): pause of approximately 20133ms
> GC pool 'PS MarkSweep' had collection(s): count=15 time=55525ms
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12990) MetaScanner should be replaced by MetaTableAccessor

2015-02-25 Thread Andrey Stepachev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Stepachev updated HBASE-12990:
-
Status: Patch Available  (was: Open)

> MetaScanner should be replaced by MetaTableAccessor
> ---
>
> Key: HBASE-12990
> URL: https://issues.apache.org/jira/browse/HBASE-12990
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Andrey Stepachev
>Assignee: Andrey Stepachev
> Attachments: HBASE-12990.patch, HBASE-12990.v2.patch, 
> HBASE-12990.v3.patch, HBASE-12990.v4.patch, HBASE-12990.v5.patch, 
> HBASE-12990.v5.patch, HBASE-12990.v5.patch, HBASE-12990.v6.patch
>
>
> MetaScanner and MetaTableAccessor do similar things, but seems they tend to 
> diverge. Let's have only one thing to enquiry META.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13093) Local mode HBase instance doesn't shut down.

2015-02-25 Thread Jurriaan Mous (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14336651#comment-14336651
 ] 

Jurriaan Mous commented on HBASE-13093:
---

Patch looks good to me.

The HashedWheelTimer documentation recommends using only one instance per 
application, which is why it was static.
http://netty.io/4.0/api/io/netty/util/HashedWheelTimer.html
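
For reference, a minimal standalone sketch of the "one timer per application" 
pattern the javadoc recommends: a single static HashedWheelTimer shared by all 
callers and stopped once at shutdown (names are illustrative, not the patch's 
actual code).

{code:title=SharedTimerExample.java|borderStyle=solid}
import java.util.concurrent.TimeUnit;

import io.netty.util.HashedWheelTimer;
import io.netty.util.Timeout;
import io.netty.util.TimerTask;

public class SharedTimerExample {
  // One HashedWheelTimer per JVM, as the Netty javadoc recommends; every
  // component schedules its timeouts on this shared instance.
  private static final HashedWheelTimer TIMER = new HashedWheelTimer();

  public static void main(String[] args) throws InterruptedException {
    Timeout t = TIMER.newTimeout(new TimerTask() {
      @Override
      public void run(Timeout timeout) {
        System.out.println("timeout fired");
      }
    }, 100, TimeUnit.MILLISECONDS);

    Thread.sleep(300);            // let the task fire
    System.out.println("cancelled: " + t.isCancelled());
    TIMER.stop();                 // stop exactly once, on application shutdown
  }
}
{code}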

> Local mode HBase instance doesn't shut down.
> 
>
> Key: HBASE-13093
> URL: https://issues.apache.org/jira/browse/HBASE-13093
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Elliott Clark
>Assignee: Andrey Stepachev
> Attachments: HBASE-13093.patch, HBASE-13093.v2.patch
>
>
> {code}bin/start-hbase.sh{code}
> {code}bin/stop-hbase.sh{code}
> That hangs forever. Here's the jstacks:
> {code}2015-02-24 16:37:55
> Full thread dump Java HotSpot(TM) 64-Bit Server VM (24.60-b09 mixed mode):
> "Attach Listener" daemon prio=5 tid=0x7fb130813800 nid=0xfd07 waiting on 
> condition [0x]
>java.lang.Thread.State: RUNNABLE
> "DestroyJavaVM" prio=5 tid=0x7fb12ba7c800 nid=0x1303 waiting on condition 
> [0x]
>java.lang.Thread.State: RUNNABLE
> "pool-5-thread-1" prio=5 tid=0x7fb12bb88800 nid=0x19903 waiting on 
> condition [0x000121a1b000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
>   at java.lang.Thread.sleep(Native Method)
>   at 
> io.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:461)
>   at io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:360)
>   at java.lang.Thread.run(Thread.java:745)
> "HBase-Metrics2-1" daemon prio=5 tid=0x7fb12c04 nid=0x19703 waiting 
> on condition [0x000121918000]
>java.lang.Thread.State: TIMED_WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x000724cc9780> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
>   at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
>   at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> "snapshot-hfile-cleaner-cache-refresher" daemon prio=5 tid=0x7fb12bc91000 
> nid=0x18703 in Object.wait() [0x00012160f000]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   - waiting on <0x000724caa588> (a java.util.TaskQueue)
>   at java.util.TimerThread.mainLoop(Timer.java:552)
>   - locked <0x000724caa588> (a java.util.TaskQueue)
>   at java.util.TimerThread.run(Timer.java:505)
> "snapshot-log-cleaner-cache-refresher" daemon prio=5 tid=0x7fb12bbc8000 
> nid=0x18503 in Object.wait() [0x00012150c000]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   - waiting on <0x000724deb178> (a java.util.TaskQueue)
>   at java.util.TimerThread.mainLoop(Timer.java:552)
>   - locked <0x000724deb178> (a java.util.TaskQueue)
>   at java.util.TimerThread.run(Timer.java:505)
> "localhost:57343.activeMasterManager-EventThread" daemon prio=5 
> tid=0x7fb12c072000 nid=0x18303 waiting on condition [0x000121409000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x000724f10150> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
>   at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>   at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
> "localhost:57343.activeMasterManager-SendThread(fe80:0:0:0:0:0:0:1%1:2181)" 
> daemon prio=5 tid=0x7fb12c053000 nid=0x18103 waiting on condition 
> [0x000121306000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.zookeeper.client.StaticHostProvid

[jira] [Commented] (HBASE-13093) Local mode HBase instance doesn't shut down.

2015-02-25 Thread Andrey Stepachev (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14336660#comment-14336660
 ] 

Andrey Stepachev commented on HBASE-13093:
--

[~jurmous] thanks for reviewing. It seems that yes, HWT should be static.

> Local mode HBase instance doesn't shut down.
> 
>
> Key: HBASE-13093
> URL: https://issues.apache.org/jira/browse/HBASE-13093
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Elliott Clark
>Assignee: Andrey Stepachev
> Attachments: HBASE-13093.patch, HBASE-13093.v2.patch
>
>
> {code}bin/start-hbase.sh{code}
> {code}bin/stop-hbase.sh{code}
> That hangs forever. Here's the jstacks:
> {code}2015-02-24 16:37:55
> Full thread dump Java HotSpot(TM) 64-Bit Server VM (24.60-b09 mixed mode):
> "Attach Listener" daemon prio=5 tid=0x7fb130813800 nid=0xfd07 waiting on 
> condition [0x]
>java.lang.Thread.State: RUNNABLE
> "DestroyJavaVM" prio=5 tid=0x7fb12ba7c800 nid=0x1303 waiting on condition 
> [0x]
>java.lang.Thread.State: RUNNABLE
> "pool-5-thread-1" prio=5 tid=0x7fb12bb88800 nid=0x19903 waiting on 
> condition [0x000121a1b000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
>   at java.lang.Thread.sleep(Native Method)
>   at 
> io.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:461)
>   at io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:360)
>   at java.lang.Thread.run(Thread.java:745)
> "HBase-Metrics2-1" daemon prio=5 tid=0x7fb12c04 nid=0x19703 waiting 
> on condition [0x000121918000]
>java.lang.Thread.State: TIMED_WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x000724cc9780> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
>   at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
>   at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> "snapshot-hfile-cleaner-cache-refresher" daemon prio=5 tid=0x7fb12bc91000 
> nid=0x18703 in Object.wait() [0x00012160f000]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   - waiting on <0x000724caa588> (a java.util.TaskQueue)
>   at java.util.TimerThread.mainLoop(Timer.java:552)
>   - locked <0x000724caa588> (a java.util.TaskQueue)
>   at java.util.TimerThread.run(Timer.java:505)
> "snapshot-log-cleaner-cache-refresher" daemon prio=5 tid=0x7fb12bbc8000 
> nid=0x18503 in Object.wait() [0x00012150c000]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   - waiting on <0x000724deb178> (a java.util.TaskQueue)
>   at java.util.TimerThread.mainLoop(Timer.java:552)
>   - locked <0x000724deb178> (a java.util.TaskQueue)
>   at java.util.TimerThread.run(Timer.java:505)
> "localhost:57343.activeMasterManager-EventThread" daemon prio=5 
> tid=0x7fb12c072000 nid=0x18303 waiting on condition [0x000121409000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x000724f10150> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
>   at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>   at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
> "localhost:57343.activeMasterManager-SendThread(fe80:0:0:0:0:0:0:1%1:2181)" 
> daemon prio=5 tid=0x7fb12c053000 nid=0x18103 waiting on condition 
> [0x000121306000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:101)
>   at 
> org.apache.zookeeper.ClientCnxn$SendThread.startConnect(

[jira] [Commented] (HBASE-12990) MetaScanner should be replaced by MetaTableAccessor

2015-02-25 Thread Andrey Stepachev (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14336667#comment-14336667
 ] 

Andrey Stepachev commented on HBASE-12990:
--

That's expected; let's wait for the v6 test results.

> MetaScanner should be replaced by MetaTableAccessor
> ---
>
> Key: HBASE-12990
> URL: https://issues.apache.org/jira/browse/HBASE-12990
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Andrey Stepachev
>Assignee: Andrey Stepachev
> Attachments: HBASE-12990.patch, HBASE-12990.v2.patch, 
> HBASE-12990.v3.patch, HBASE-12990.v4.patch, HBASE-12990.v5.patch, 
> HBASE-12990.v5.patch, HBASE-12990.v5.patch, HBASE-12990.v6.patch
>
>
> MetaScanner and MetaTableAccessor do similar things, but seems they tend to 
> diverge. Let's have only one thing to enquiry META.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13093) Local mode HBase instance doesn't shut down.

2015-02-25 Thread Andrey Stepachev (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14336669#comment-14336669
 ] 

Andrey Stepachev commented on HBASE-13093:
--

Unable to reproduce the TestClientNoCluster failure; it passes locally.

> Local mode HBase instance doesn't shut down.
> 
>
> Key: HBASE-13093
> URL: https://issues.apache.org/jira/browse/HBASE-13093
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Elliott Clark
>Assignee: Andrey Stepachev
> Attachments: HBASE-13093.patch, HBASE-13093.v2.patch
>
>
> {code}bin/start-hbase.sh{code}
> {code}bin/stop-hbase.sh{code}
> That hangs forever. Here's the jstacks:
> {code}2015-02-24 16:37:55
> Full thread dump Java HotSpot(TM) 64-Bit Server VM (24.60-b09 mixed mode):
> "Attach Listener" daemon prio=5 tid=0x7fb130813800 nid=0xfd07 waiting on 
> condition [0x]
>java.lang.Thread.State: RUNNABLE
> "DestroyJavaVM" prio=5 tid=0x7fb12ba7c800 nid=0x1303 waiting on condition 
> [0x]
>java.lang.Thread.State: RUNNABLE
> "pool-5-thread-1" prio=5 tid=0x7fb12bb88800 nid=0x19903 waiting on 
> condition [0x000121a1b000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
>   at java.lang.Thread.sleep(Native Method)
>   at 
> io.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:461)
>   at io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:360)
>   at java.lang.Thread.run(Thread.java:745)
> "HBase-Metrics2-1" daemon prio=5 tid=0x7fb12c04 nid=0x19703 waiting 
> on condition [0x000121918000]
>java.lang.Thread.State: TIMED_WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x000724cc9780> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
>   at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
>   at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> "snapshot-hfile-cleaner-cache-refresher" daemon prio=5 tid=0x7fb12bc91000 
> nid=0x18703 in Object.wait() [0x00012160f000]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   - waiting on <0x000724caa588> (a java.util.TaskQueue)
>   at java.util.TimerThread.mainLoop(Timer.java:552)
>   - locked <0x000724caa588> (a java.util.TaskQueue)
>   at java.util.TimerThread.run(Timer.java:505)
> "snapshot-log-cleaner-cache-refresher" daemon prio=5 tid=0x7fb12bbc8000 
> nid=0x18503 in Object.wait() [0x00012150c000]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   - waiting on <0x000724deb178> (a java.util.TaskQueue)
>   at java.util.TimerThread.mainLoop(Timer.java:552)
>   - locked <0x000724deb178> (a java.util.TaskQueue)
>   at java.util.TimerThread.run(Timer.java:505)
> "localhost:57343.activeMasterManager-EventThread" daemon prio=5 
> tid=0x7fb12c072000 nid=0x18303 waiting on condition [0x000121409000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x000724f10150> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
>   at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>   at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
> "localhost:57343.activeMasterManager-SendThread(fe80:0:0:0:0:0:0:1%1:2181)" 
> daemon prio=5 tid=0x7fb12c053000 nid=0x18103 waiting on condition 
> [0x000121306000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:101)
>   at 
> org.apache.zookeeper.ClientCnxn$SendThread.startConnect(Client

[jira] [Commented] (HBASE-12990) MetaScanner should be replaced by MetaTableAccessor

2015-02-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14336662#comment-14336662
 ] 

Hadoop QA commented on HBASE-12990:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12700760/HBASE-12990.v5.patch
  against master branch at commit c651271f5759f39f28209a50ab88a62d86b7.
  ATTACHMENT ID: 12700760

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 35 new 
or modified tests.
{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.regionserver.TestEndToEndSplitTransaction

 {color:red}-1 core zombie tests{color}.  There are 2 zombie test(s):   
at 
org.apache.hadoop.hbase.coprocessor.TestMasterObserver.testRegionTransitionOperations(TestMasterObserver.java:1604)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12961//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12961//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12961//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12961//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12961//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12961//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12961//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12961//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12961//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12961//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12961//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12961//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12961//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12961//console

This message is automatically generated.

> MetaScanner should be replaced by MetaTableAccessor
> ---
>
> Key: HBASE-12990
> URL: https://issues.apache.org/jira/browse/HBASE-12990
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Andrey Stepachev
>Assignee: Andrey Stepachev
> Attachments: HBASE-12990.patch, HBASE-12990.v2.patch, 
> HBASE-12990.v3.patch, HBASE-12990.v4.patch, HBASE-12990.v5.patch, 
> HBASE-12990.v5.patch, HBASE-12990.v5.patch, HBASE-12990.v6.patch
>
>
> MetaScanner and MetaTableAccessor do similar things, but seems they tend to 
> diverge. Let's have only one thing to enquiry META.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13093) Local mode HBase instance doesn't shut down.

2015-02-25 Thread Andrey Stepachev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Stepachev updated HBASE-13093:
-
Attachment: HBASE-13093.v2.patch

retry

> Local mode HBase instance doesn't shut down.
> 
>
> Key: HBASE-13093
> URL: https://issues.apache.org/jira/browse/HBASE-13093
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Elliott Clark
>Assignee: Andrey Stepachev
> Attachments: HBASE-13093.patch, HBASE-13093.v2.patch, 
> HBASE-13093.v2.patch
>
>
> {code}bin/start-hbase.sh{code}
> {code}bin/stop-hbase.sh{code}
> That hangs forever. Here's the jstacks:
> {code}2015-02-24 16:37:55
> Full thread dump Java HotSpot(TM) 64-Bit Server VM (24.60-b09 mixed mode):
> "Attach Listener" daemon prio=5 tid=0x7fb130813800 nid=0xfd07 waiting on 
> condition [0x]
>java.lang.Thread.State: RUNNABLE
> "DestroyJavaVM" prio=5 tid=0x7fb12ba7c800 nid=0x1303 waiting on condition 
> [0x]
>java.lang.Thread.State: RUNNABLE
> "pool-5-thread-1" prio=5 tid=0x7fb12bb88800 nid=0x19903 waiting on 
> condition [0x000121a1b000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
>   at java.lang.Thread.sleep(Native Method)
>   at 
> io.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:461)
>   at io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:360)
>   at java.lang.Thread.run(Thread.java:745)
> "HBase-Metrics2-1" daemon prio=5 tid=0x7fb12c04 nid=0x19703 waiting 
> on condition [0x000121918000]
>java.lang.Thread.State: TIMED_WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x000724cc9780> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
>   at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
>   at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> "snapshot-hfile-cleaner-cache-refresher" daemon prio=5 tid=0x7fb12bc91000 
> nid=0x18703 in Object.wait() [0x00012160f000]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   - waiting on <0x000724caa588> (a java.util.TaskQueue)
>   at java.util.TimerThread.mainLoop(Timer.java:552)
>   - locked <0x000724caa588> (a java.util.TaskQueue)
>   at java.util.TimerThread.run(Timer.java:505)
> "snapshot-log-cleaner-cache-refresher" daemon prio=5 tid=0x7fb12bbc8000 
> nid=0x18503 in Object.wait() [0x00012150c000]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   - waiting on <0x000724deb178> (a java.util.TaskQueue)
>   at java.util.TimerThread.mainLoop(Timer.java:552)
>   - locked <0x000724deb178> (a java.util.TaskQueue)
>   at java.util.TimerThread.run(Timer.java:505)
> "localhost:57343.activeMasterManager-EventThread" daemon prio=5 
> tid=0x7fb12c072000 nid=0x18303 waiting on condition [0x000121409000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x000724f10150> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
>   at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>   at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
> "localhost:57343.activeMasterManager-SendThread(fe80:0:0:0:0:0:0:1%1:2181)" 
> daemon prio=5 tid=0x7fb12c053000 nid=0x18103 waiting on condition 
> [0x000121306000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:101)
>   at 
> org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:940)
>   at org.apache.zookeeper.ClientCnxn

[jira] [Updated] (HBASE-13093) Local mode HBase instance doesn't shut down.

2015-02-25 Thread Andrey Stepachev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Stepachev updated HBASE-13093:
-
Status: Patch Available  (was: Open)

> Local mode HBase instance doesn't shut down.
> 
>
> Key: HBASE-13093
> URL: https://issues.apache.org/jira/browse/HBASE-13093
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Elliott Clark
>Assignee: Andrey Stepachev
> Attachments: HBASE-13093.patch, HBASE-13093.v2.patch, 
> HBASE-13093.v2.patch
>
>
> {code}bin/start-hbase.sh{code}
> {code}bin/stop-hbase.sh{code}
> That hangs forever. Here's the jstacks:
> {code}2015-02-24 16:37:55
> Full thread dump Java HotSpot(TM) 64-Bit Server VM (24.60-b09 mixed mode):
> "Attach Listener" daemon prio=5 tid=0x7fb130813800 nid=0xfd07 waiting on 
> condition [0x]
>java.lang.Thread.State: RUNNABLE
> "DestroyJavaVM" prio=5 tid=0x7fb12ba7c800 nid=0x1303 waiting on condition 
> [0x]
>java.lang.Thread.State: RUNNABLE
> "pool-5-thread-1" prio=5 tid=0x7fb12bb88800 nid=0x19903 waiting on 
> condition [0x000121a1b000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
>   at java.lang.Thread.sleep(Native Method)
>   at 
> io.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:461)
>   at io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:360)
>   at java.lang.Thread.run(Thread.java:745)
> "HBase-Metrics2-1" daemon prio=5 tid=0x7fb12c04 nid=0x19703 waiting 
> on condition [0x000121918000]
>java.lang.Thread.State: TIMED_WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x000724cc9780> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
>   at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
>   at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> "snapshot-hfile-cleaner-cache-refresher" daemon prio=5 tid=0x7fb12bc91000 
> nid=0x18703 in Object.wait() [0x00012160f000]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   - waiting on <0x000724caa588> (a java.util.TaskQueue)
>   at java.util.TimerThread.mainLoop(Timer.java:552)
>   - locked <0x000724caa588> (a java.util.TaskQueue)
>   at java.util.TimerThread.run(Timer.java:505)
> "snapshot-log-cleaner-cache-refresher" daemon prio=5 tid=0x7fb12bbc8000 
> nid=0x18503 in Object.wait() [0x00012150c000]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   - waiting on <0x000724deb178> (a java.util.TaskQueue)
>   at java.util.TimerThread.mainLoop(Timer.java:552)
>   - locked <0x000724deb178> (a java.util.TaskQueue)
>   at java.util.TimerThread.run(Timer.java:505)
> "localhost:57343.activeMasterManager-EventThread" daemon prio=5 
> tid=0x7fb12c072000 nid=0x18303 waiting on condition [0x000121409000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x000724f10150> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
>   at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>   at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
> "localhost:57343.activeMasterManager-SendThread(fe80:0:0:0:0:0:0:1%1:2181)" 
> daemon prio=5 tid=0x7fb12c053000 nid=0x18103 waiting on condition 
> [0x000121306000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:101)
>   at 
> org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:940)
>   at org.apache.zookeeper.ClientCnxn$Se

[jira] [Created] (HBASE-13100) Shell command to retrieve table splits

2015-02-25 Thread Sean Busbey (JIRA)
Sean Busbey created HBASE-13100:
---

 Summary: Shell command to retrieve table splits
 Key: HBASE-13100
 URL: https://issues.apache.org/jira/browse/HBASE-13100
 Project: HBase
  Issue Type: Improvement
  Components: shell
Reporter: Sean Busbey
Priority: Minor
 Fix For: 1.1.0


Add a shell command that returns the splits for a table.

Doing this yourself is currently possible, but it involves going outside of the 
public API.

{code}
jruby-1.7.3 :012 > create 'example_table', 'f1', SPLITS => ["10", "20", "30", 
"40"]
0 row(s) in 0.5500 seconds

 => Hbase::Table - example_table 
jruby-1.7.3 :013 > 
get_table('example_table').table.get_all_region_locations.map do |location| 
org.apache.hadoop.hbase.util.Bytes::toStringBinary(location.get_region_info.get_start_key)
 end
0 row(s) in 0.0130 seconds

 => ["", "10", "20", "30", "40"] 
jruby-1.7.3 :014 > 
{code}
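
Until such a shell command exists, the same information is reachable through 
the public Java client API; a minimal sketch using RegionLocator (the table 
name is just an example):

{code:title=PrintTableSplits.java|borderStyle=solid}
import java.io.IOException;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.util.Bytes;

public class PrintTableSplits {
  public static void main(String[] args) throws IOException {
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         RegionLocator locator = conn.getRegionLocator(TableName.valueOf("example_table"))) {
      // Each region's start key; the first region's start key is the empty byte[].
      for (byte[] startKey : locator.getStartKeys()) {
        System.out.println(Bytes.toStringBinary(startKey));
      }
    }
  }
}
{code}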



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13093) Local mode HBase instance doesn't shut down.

2015-02-25 Thread Andrey Stepachev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Stepachev updated HBASE-13093:
-
Status: Open  (was: Patch Available)

> Local mode HBase instance doesn't shut down.
> 
>
> Key: HBASE-13093
> URL: https://issues.apache.org/jira/browse/HBASE-13093
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Elliott Clark
>Assignee: Andrey Stepachev
> Attachments: HBASE-13093.patch, HBASE-13093.v2.patch, 
> HBASE-13093.v2.patch
>
>
> {code}bin/start-hbase.sh{code}
> {code}bin/stop-hbase.sh{code}
> That hangs forever. Here's the jstacks:
> {code}2015-02-24 16:37:55
> Full thread dump Java HotSpot(TM) 64-Bit Server VM (24.60-b09 mixed mode):
> "Attach Listener" daemon prio=5 tid=0x7fb130813800 nid=0xfd07 waiting on 
> condition [0x]
>java.lang.Thread.State: RUNNABLE
> "DestroyJavaVM" prio=5 tid=0x7fb12ba7c800 nid=0x1303 waiting on condition 
> [0x]
>java.lang.Thread.State: RUNNABLE
> "pool-5-thread-1" prio=5 tid=0x7fb12bb88800 nid=0x19903 waiting on 
> condition [0x000121a1b000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
>   at java.lang.Thread.sleep(Native Method)
>   at 
> io.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:461)
>   at io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:360)
>   at java.lang.Thread.run(Thread.java:745)
> "HBase-Metrics2-1" daemon prio=5 tid=0x7fb12c04 nid=0x19703 waiting 
> on condition [0x000121918000]
>java.lang.Thread.State: TIMED_WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x000724cc9780> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
>   at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1090)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:807)
>   at 
> java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> "snapshot-hfile-cleaner-cache-refresher" daemon prio=5 tid=0x7fb12bc91000 
> nid=0x18703 in Object.wait() [0x00012160f000]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   - waiting on <0x000724caa588> (a java.util.TaskQueue)
>   at java.util.TimerThread.mainLoop(Timer.java:552)
>   - locked <0x000724caa588> (a java.util.TaskQueue)
>   at java.util.TimerThread.run(Timer.java:505)
> "snapshot-log-cleaner-cache-refresher" daemon prio=5 tid=0x7fb12bbc8000 
> nid=0x18503 in Object.wait() [0x00012150c000]
>java.lang.Thread.State: TIMED_WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   - waiting on <0x000724deb178> (a java.util.TaskQueue)
>   at java.util.TimerThread.mainLoop(Timer.java:552)
>   - locked <0x000724deb178> (a java.util.TaskQueue)
>   at java.util.TimerThread.run(Timer.java:505)
> "localhost:57343.activeMasterManager-EventThread" daemon prio=5 
> tid=0x7fb12c072000 nid=0x18303 waiting on condition [0x000121409000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x000724f10150> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
>   at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
>   at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
> "localhost:57343.activeMasterManager-SendThread(fe80:0:0:0:0:0:0:1%1:2181)" 
> daemon prio=5 tid=0x7fb12c053000 nid=0x18103 waiting on condition 
> [0x000121306000]
>java.lang.Thread.State: TIMED_WAITING (sleeping)
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:101)
>   at 
> org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:940)
>   at org.apache.zookeeper.ClientCnxn$Se

[jira] [Updated] (HBASE-12990) MetaScanner should be replaced by MetaTableAccessor

2015-02-25 Thread Andrey Stepachev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrey Stepachev updated HBASE-12990:
-
Attachment: HBASE-12990.v7.patch

Small addendum: region_status.rb still used MetaScanner; fixed that.

> MetaScanner should be replaced by MetaTableAccessor
> ---
>
> Key: HBASE-12990
> URL: https://issues.apache.org/jira/browse/HBASE-12990
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Andrey Stepachev
>Assignee: Andrey Stepachev
> Attachments: HBASE-12990.patch, HBASE-12990.v2.patch, 
> HBASE-12990.v3.patch, HBASE-12990.v4.patch, HBASE-12990.v5.patch, 
> HBASE-12990.v5.patch, HBASE-12990.v5.patch, HBASE-12990.v6.patch, 
> HBASE-12990.v7.patch
>
>
> MetaScanner and MetaTableAccessor do similar things, but seems they tend to 
> diverge. Let's have only one thing to enquiry META.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13100) Shell command to retrieve table splits

2015-02-25 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-13100:

Labels: beginner  (was: )

> Shell command to retrieve table splits
> --
>
> Key: HBASE-13100
> URL: https://issues.apache.org/jira/browse/HBASE-13100
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Reporter: Sean Busbey
>Priority: Minor
>  Labels: beginner
> Fix For: 1.1.0
>
>
> Add a shell command that returns the splits for a table.
> Doing this yourself is currently possible, but involves going outside of the 
> public api.
> {code}
> jruby-1.7.3 :012 > create 'example_table', 'f1', SPLITS => ["10", "20", "30", 
> "40"]
> 0 row(s) in 0.5500 seconds
>  => Hbase::Table - example_table 
> jruby-1.7.3 :013 > 
> get_table('example_table').table.get_all_region_locations.map do |location| 
> org.apache.hadoop.hbase.util.Bytes::toStringBinary(location.get_region_info.get_start_key)
>  end
> 0 row(s) in 0.0130 seconds
>  => ["", "10", "20", "30", "40"] 
> jruby-1.7.3 :014 > 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-12244) Upgrade to Surefire 2.18 as soon as it's released

2015-02-25 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark resolved HBASE-12244.
---
Resolution: Not a Problem

Seems like this got done somewhere else

> Upgrade to Surefire 2.18 as soon as it's released
> -
>
> Key: HBASE-12244
> URL: https://issues.apache.org/jira/browse/HBASE-12244
> Project: HBase
>  Issue Type: Bug
>Reporter: Elliott Clark
>
> In our testing we keep running into 
> https://jira.codehaus.org/browse/SUREFIRE-1091
> My guess is that bug is also somewhat responsible fore our zombie tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13091) Split ZK Quorum on Master WebUI

2015-02-25 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13091:
---
Status: Open  (was: Patch Available)

OK, cancelling the current patch.

> Split ZK Quorum on Master WebUI
> ---
>
> Key: HBASE-13091
> URL: https://issues.apache.org/jira/browse/HBASE-13091
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.10.1, 1.0.1
>Reporter: Jean-Marc Spaggiari
>Assignee: Jean-Marc Spaggiari
>Priority: Minor
> Attachments: HBASE-13091-v0-trunk.patch, screenshot.png
>
>
> When using ZK servers or more, on the Master WebUI, this create a very large 
> column and so reduce a lot the others, splitting all the lines and creating 
> tall cells
> Splitting the ZK quorum with one per line will make it nicer and easier to 
> read.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12990) MetaScanner should be replaced by MetaTableAccessor

2015-02-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14336830#comment-14336830
 ] 

Hadoop QA commented on HBASE-12990:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12700785/HBASE-12990.v6.patch
  against master branch at commit c651271f5759f39f28209a50ab88a62d86b7.
  ATTACHMENT ID: 12700785

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 35 new 
or modified tests.
{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.TestAcidGuarantees.testScanAtomicity(TestAcidGuarantees.java:354)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12962//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12962//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12962//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12962//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12962//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12962//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12962//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12962//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12962//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12962//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12962//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12962//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12962//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12962//console

This message is automatically generated.

> MetaScanner should be replaced by MetaTableAccessor
> ---
>
> Key: HBASE-12990
> URL: https://issues.apache.org/jira/browse/HBASE-12990
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Andrey Stepachev
>Assignee: Andrey Stepachev
> Attachments: HBASE-12990.patch, HBASE-12990.v2.patch, 
> HBASE-12990.v3.patch, HBASE-12990.v4.patch, HBASE-12990.v5.patch, 
> HBASE-12990.v5.patch, HBASE-12990.v5.patch, HBASE-12990.v6.patch, 
> HBASE-12990.v7.patch
>
>
> MetaScanner and MetaTableAccessor do similar things, but seems they tend to 
> diverge. Let's have only one thing to enquiry META.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13101) RPC throttling to protect against malicious clients

2015-02-25 Thread Nick Dimiduk (JIRA)
Nick Dimiduk created HBASE-13101:


 Summary: RPC throttling to protect against malicious clients
 Key: HBASE-13101
 URL: https://issues.apache.org/jira/browse/HBASE-13101
 Project: HBase
  Issue Type: Brainstorming
  Components: regionserver
Reporter: Nick Dimiduk


We should protect a region server from poorly designed/implemented 
clients/schemas that result in a "hotspot" which overwhelms a single machine. A 
client that creates a new connection for each request is an example of this 
case: META gets completely flooded and kills the RS. The Master diligently 
brings META up on another host, which sends the traffic along to the next 
victim and slowly brings down the whole cluster.

My suggestion is rate limiting per client, implemented at the RPC level, but 
I'm looking for other suggestions.
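
As a purely illustrative sketch of what per-client rate limiting at the RPC 
layer could look like (using Guava's RateLimiter keyed by a client id; not tied 
to any actual HBase RPC classes):

{code:title=PerClientRpcThrottle.java|borderStyle=solid}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import com.google.common.util.concurrent.RateLimiter;

public class PerClientRpcThrottle {
  private final double permitsPerSecond;
  private final ConcurrentMap<String, RateLimiter> limiters =
      new ConcurrentHashMap<String, RateLimiter>();

  public PerClientRpcThrottle(double permitsPerSecond) {
    this.permitsPerSecond = permitsPerSecond;
  }

  /** Returns true if the client may proceed, false if the call should be rejected or queued. */
  public boolean admit(String clientId) {
    RateLimiter limiter = limiters.get(clientId);
    if (limiter == null) {
      RateLimiter fresh = RateLimiter.create(permitsPerSecond);
      RateLimiter existing = limiters.putIfAbsent(clientId, fresh);
      limiter = existing != null ? existing : fresh;
    }
    // Non-blocking: a hot client that exhausts its budget gets throttled
    // instead of flooding META / the region server.
    return limiter.tryAcquire();
  }

  public static void main(String[] args) {
    PerClientRpcThrottle throttle = new PerClientRpcThrottle(100.0); // 100 requests/sec per client
    System.out.println("admitted: " + throttle.admit("10.0.0.1"));
  }
}
{code}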



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HBASE-13086) Show ZK root node on Master WebUI

2015-02-25 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell reopened HBASE-13086:


This change may have introduced a regression in the 0.98 builds. See 
https://builds.apache.org/job/HBase-0.98/871/ and 
https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/829/

{noformat}
java.lang.NullPointerException: null
at 
org.apache.hadoop.hbase.tmpl.master.MasterStatusTmplImpl.renderNoFlush(MasterStatusTmplImpl.java:360)
at 
org.apache.hadoop.hbase.tmpl.master.MasterStatusTmpl.renderNoFlush(MasterStatusTmpl.java:390)
at 
org.apache.hadoop.hbase.tmpl.master.MasterStatusTmpl.render(MasterStatusTmpl.java:380)
at 
org.apache.hadoop.hbase.master.TestMasterStatusServlet.testStatusTemplateWithServers(TestMasterStatusServlet.java:146)
{noformat}

We can fix it with an addendum, or revert and try again.

> Show ZK root node on Master WebUI
> -
>
> Key: HBASE-13086
> URL: https://issues.apache.org/jira/browse/HBASE-13086
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11
>
> Attachments: 13068.jpg, HBASE-13068.00.patch
>
>
> Currently we show a well-formed ZK quorum on the master webUI but not the 
> root node. Root node can be changed based on deployment, so we should list it 
> here explicitly. This information is helpful for folks playing around with 
> phoenix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13091) Split ZK Quorum on Master WebUI

2015-02-25 Thread Jean-Marc Spaggiari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Marc Spaggiari updated HBASE-13091:

Attachment: HBASE-13091-v1-trunk.patch

> Split ZK Quorum on Master WebUI
> ---
>
> Key: HBASE-13091
> URL: https://issues.apache.org/jira/browse/HBASE-13091
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.1, 0.98.10.1
>Reporter: Jean-Marc Spaggiari
>Assignee: Jean-Marc Spaggiari
>Priority: Minor
> Attachments: HBASE-13091-v0-trunk.patch, HBASE-13091-v1-trunk.patch, 
> screenshot.png
>
>
> When using ZK servers or more, on the Master WebUI, this create a very large 
> column and so reduce a lot the others, splitting all the lines and creating 
> tall cells
> Splitting the ZK quorum with one per line will make it nicer and easier to 
> read.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13091) Split ZK Quorum on Master WebUI

2015-02-25 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14336834#comment-14336834
 ] 

Jean-Marc Spaggiari commented on HBASE-13091:
-

One per line patch coming in a minute...

> Split ZK Quorum on Master WebUI
> ---
>
> Key: HBASE-13091
> URL: https://issues.apache.org/jira/browse/HBASE-13091
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.1, 0.98.10.1
>Reporter: Jean-Marc Spaggiari
>Assignee: Jean-Marc Spaggiari
>Priority: Minor
> Attachments: HBASE-13091-v0-trunk.patch, screenshot.png
>
>
> When using ZK servers or more, on the Master WebUI, this create a very large 
> column and so reduce a lot the others, splitting all the lines and creating 
> tall cells
> Splitting the ZK quorum with one per line will make it nicer and easier to 
> read.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13091) Split ZK Quorum on Master WebUI

2015-02-25 Thread Jean-Marc Spaggiari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Marc Spaggiari updated HBASE-13091:

Attachment: HBASE-13091-v1-trunk.patch

> Split ZK Quorum on Master WebUI
> ---
>
> Key: HBASE-13091
> URL: https://issues.apache.org/jira/browse/HBASE-13091
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.1, 0.98.10.1
>Reporter: Jean-Marc Spaggiari
>Assignee: Jean-Marc Spaggiari
>Priority: Minor
> Attachments: HBASE-13091-v0-trunk.patch, HBASE-13091-v1-trunk.patch, 
> screenshot.png
>
>
> When using ZK servers or more, on the Master WebUI, this create a very large 
> column and so reduce a lot the others, splitting all the lines and creating 
> tall cells
> Splitting the ZK quorum with one per line will make it nicer and easier to 
> read.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13086) Show ZK root node on Master WebUI

2015-02-25 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14336844#comment-14336844
 ] 

Nick Dimiduk commented on HBASE-13086:
--

Sorry about that Andrew. I verified the master page locally... let me check 
again.

> Show ZK root node on Master WebUI
> -
>
> Key: HBASE-13086
> URL: https://issues.apache.org/jira/browse/HBASE-13086
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11
>
> Attachments: 13068.jpg, HBASE-13068.00.patch
>
>
> Currently we show a well-formed ZK quorum on the master webUI but not the 
> root node. Root node can be changed based on deployment, so we should list it 
> here explicitly. This information is helpful for folks playing around with 
> phoenix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13091) Split ZK Quorum on Master WebUI

2015-02-25 Thread Jean-Marc Spaggiari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Marc Spaggiari updated HBASE-13091:

Release Note:   (was: Here we go. Without the "+" option to fold or expand 
if more than X ZK servers...)

> Split ZK Quorum on Master WebUI
> ---
>
> Key: HBASE-13091
> URL: https://issues.apache.org/jira/browse/HBASE-13091
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.1, 0.98.10.1
>Reporter: Jean-Marc Spaggiari
>Assignee: Jean-Marc Spaggiari
>Priority: Minor
> Attachments: HBASE-13091-v0-trunk.patch, HBASE-13091-v1-trunk.patch, 
> screenshot.png
>
>
> When using ZK servers or more, on the Master WebUI, this create a very large 
> column and so reduce a lot the others, splitting all the lines and creating 
> tall cells
> Splitting the ZK quorum with one per line will make it nicer and easier to 
> read.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13091) Split ZK Quorum on Master WebUI

2015-02-25 Thread Jean-Marc Spaggiari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Marc Spaggiari updated HBASE-13091:

Attachment: (was: HBASE-13091-v1-trunk.patch)

> Split ZK Quorum on Master WebUI
> ---
>
> Key: HBASE-13091
> URL: https://issues.apache.org/jira/browse/HBASE-13091
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.1, 0.98.10.1
>Reporter: Jean-Marc Spaggiari
>Assignee: Jean-Marc Spaggiari
>Priority: Minor
> Attachments: HBASE-13091-v0-trunk.patch, screenshot.png
>
>
> When using ZK servers or more, on the Master WebUI, this create a very large 
> column and so reduce a lot the others, splitting all the lines and creating 
> tall cells
> Splitting the ZK quorum with one per line will make it nicer and easier to 
> read.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13091) Split ZK Quorum on Master WebUI

2015-02-25 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14336871#comment-14336871
 ] 

Jean-Marc Spaggiari commented on HBASE-13091:
-

New version attached, without the "+" option to fold or expand if more than X 
ZK servers...

> Split ZK Quorum on Master WebUI
> ---
>
> Key: HBASE-13091
> URL: https://issues.apache.org/jira/browse/HBASE-13091
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.1, 0.98.10.1
>Reporter: Jean-Marc Spaggiari
>Assignee: Jean-Marc Spaggiari
>Priority: Minor
> Attachments: HBASE-13091-v0-trunk.patch, HBASE-13091-v1-trunk.patch, 
> screenshot.png
>
>
> When using ZK servers or more, on the Master WebUI, this create a very large 
> column and so reduce a lot the others, splitting all the lines and creating 
> tall cells
> Splitting the ZK quorum with one per line will make it nicer and easier to 
> read.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13091) Split ZK Quorum on Master WebUI

2015-02-25 Thread Jean-Marc Spaggiari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Marc Spaggiari updated HBASE-13091:

Release Note: Here we go. Without the "+" option to fold or expand if more 
than X ZK servers...
  Status: Patch Available  (was: Open)

> Split ZK Quorum on Master WebUI
> ---
>
> Key: HBASE-13091
> URL: https://issues.apache.org/jira/browse/HBASE-13091
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.10.1, 1.0.1
>Reporter: Jean-Marc Spaggiari
>Assignee: Jean-Marc Spaggiari
>Priority: Minor
> Attachments: HBASE-13091-v0-trunk.patch, HBASE-13091-v1-trunk.patch, 
> screenshot.png
>
>
> When using several ZK servers, the Master WebUI creates a very large 
> column for the quorum and so shrinks the others a lot, wrapping all the lines and 
> creating tall cells.
> Splitting the ZK quorum with one server per line will make it nicer and easier 
> to read.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12990) MetaScanner should be replaced by MetaTableAccessor

2015-02-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14336900#comment-14336900
 ] 

Hadoop QA commented on HBASE-12990:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12700790/HBASE-12990.v7.patch
  against master branch at commit c651271f5759f39f28209a50ab88a62d86b7.
  ATTACHMENT ID: 12700790

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 35 new 
or modified tests.
{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12964//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12964//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12964//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12964//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12964//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12964//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12964//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12964//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12964//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12964//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12964//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12964//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12964//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12964//console

This message is automatically generated.

> MetaScanner should be replaced by MetaTableAccessor
> ---
>
> Key: HBASE-12990
> URL: https://issues.apache.org/jira/browse/HBASE-12990
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Andrey Stepachev
>Assignee: Andrey Stepachev
> Attachments: HBASE-12990.patch, HBASE-12990.v2.patch, 
> HBASE-12990.v3.patch, HBASE-12990.v4.patch, HBASE-12990.v5.patch, 
> HBASE-12990.v5.patch, HBASE-12990.v5.patch, HBASE-12990.v6.patch, 
> HBASE-12990.v7.patch
>
>
> MetaScanner and MetaTableAccessor do similar things, but it seems they tend to 
> diverge. Let's have only one way to query META.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12990) MetaScanner should be replaced by MetaTableAccessor

2015-02-25 Thread Andrey Stepachev (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14336902#comment-14336902
 ] 

Andrey Stepachev commented on HBASE-12990:
--

yeah, v7 seems to pass. 

> MetaScanner should be replaced by MetaTableAccessor
> ---
>
> Key: HBASE-12990
> URL: https://issues.apache.org/jira/browse/HBASE-12990
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Andrey Stepachev
>Assignee: Andrey Stepachev
> Attachments: HBASE-12990.patch, HBASE-12990.v2.patch, 
> HBASE-12990.v3.patch, HBASE-12990.v4.patch, HBASE-12990.v5.patch, 
> HBASE-12990.v5.patch, HBASE-12990.v5.patch, HBASE-12990.v6.patch, 
> HBASE-12990.v7.patch
>
>
> MetaScanner and MetaTableAccessor do similar things, but it seems they tend to 
> diverge. Let's have only one way to query META.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13093) Local mode HBase instance doesn't shut down.

2015-02-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14336901#comment-14336901
 ] 

Hadoop QA commented on HBASE-13093:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12700788/HBASE-13093.v2.patch
  against master branch at commit c651271f5759f39f28209a50ab88a62d86b7.
  ATTACHMENT ID: 12700788

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.
{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hbase.namespace.TestNamespaceAuditor.testRegionMerge(TestNamespaceAuditor.java:308)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12963//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12963//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12963//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12963//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12963//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12963//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12963//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12963//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12963//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12963//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12963//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12963//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12963//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12963//console

This message is automatically generated.

> Local mode HBase instance doesn't shut down.
> 
>
> Key: HBASE-13093
> URL: https://issues.apache.org/jira/browse/HBASE-13093
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Elliott Clark
>Assignee: Andrey Stepachev
> Attachments: HBASE-13093.patch, HBASE-13093.v2.patch, 
> HBASE-13093.v2.patch
>
>
> {code}bin/start-hbase.sh{code}
> {code}bin/stop-hbase.sh{code}
> That hangs forever. Here's the jstacks:
> {code}2015-02-24 16:37:55
> Full thread dump Java HotSpot(TM) 64-Bit Server VM (24.60-b09 mixed mode):
> "Attach Listener" daemon prio=5 tid=0x7fb130813800 nid=0xfd07 waiting on 
> condition [0x]
>java.lang.Thread.State: RUNNABLE
> "DestroyJavaVM" prio=5 tid=0x7fb12ba7c800 nid=0x1303 waiting on condition 
> [0x000

[jira] [Commented] (HBASE-13101) RPC throttling to protect against malicious clients

2015-02-25 Thread Dave Latham (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14336905#comment-14336905
 ] 

Dave Latham commented on HBASE-13101:
-

Related to HBASE-11598 ?

> RPC throttling to protect against malicious clients
> ---
>
> Key: HBASE-13101
> URL: https://issues.apache.org/jira/browse/HBASE-13101
> Project: HBase
>  Issue Type: Brainstorming
>  Components: regionserver
>Reporter: Nick Dimiduk
>
> We should protect a region server from poorly designed/implemented 
> clients/schemas that result in a "hotspot" which overwhelms a single machine. 
> A client that creates a new connection for each request is an example of this 
> case, where META gets completely flooded and kills the RS. Master diligently 
> brings it up on another host, which sends the traffic along to the next 
> victim, and will slowly bring down the whole cluster.
> My suggestion is rate-limiting per client, implemented at the RPC level, but 
> I'm looking for other suggestions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12972) Region, a supportable public/evolving subset of HRegion

2015-02-25 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12972?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-12972:
---
Attachment: HBASE-12972-0.98.patch

Clean up new interfaces per [~stack]'s suggestion

> Region, a supportable public/evolving subset of HRegion
> ---
>
> Key: HBASE-12972
> URL: https://issues.apache.org/jira/browse/HBASE-12972
> Project: HBase
>  Issue Type: New Feature
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11
>
> Attachments: HBASE-12972-0.98.patch, HBASE-12972-0.98.patch, 
> HBASE-12972-0.98.patch
>
>
> On HBASE-12566, [~lhofhansl] proposed:
> {quote}
> Maybe we can have a {{Region}} interface that is to {{HRegion}} what 
> {{Store}} is to {{HStore}}. Store is marked with {{@InterfaceAudience.Private}} 
> but used in some coprocessor hooks.
> {quote}
> For example, coprocessors currently have to reach into HRegion in order to 
> participate in row and region locking protocols; this is one area where the 
> functionality is legitimate for coprocessors but not for users, so an 
> in-between interface makes sense.
> In addition we should promote {{Store}}'s interface audience to 
> LimitedPrivate(COPROC).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13099) Scans as in DynamoDB

2015-02-25 Thread Jonathan Lawlor (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14336928#comment-14336928
 ] 

Jonathan Lawlor commented on HBASE-13099:
-

Interesting idea. This seems like it would make the client-server interaction 
during Scans much cleaner. Instead of assuming that the server understands the 
state that the Client thinks it is in, it would be much more explicit, along 
the lines of "I am in this state, give me these Results".

We would probably want the LastEvaluatedKey to be an extra parameter in the RPC 
response, rather than assumed to be the last KV in the Result. I think this 
would be preferable because it is possible that keys further down in the table 
were evaluated but filtered out. If we assume it to be the last KV in the 
Result we may find that we are constantly rescanning KVs that were previously 
excluded, only to find out that they will still be excluded.

Moving the state from the server to the client would require adding more 
parameters into the RPC response. As mentioned above, LastEvaluatedKey would 
likely be one of the parameters. Another parameter would likely be the MVCC 
read point that is currently maintained within the RegionScanner.

While this would make the interactions cleaner, I wonder how this would affect 
the performance of Scans. The way I am currently imagining this (correct me if I'm 
wrong), it seems like we would incur extra overhead on each scan due to the 
extra initialization required server side. On each scan RPC we would need to 
create a new RegionScanner, setup the key value heaps, seek to the correct row, 
and then potentially filter out the key values that we have already evaluated. 
This overhead is currently avoided by sending along the open scanner id from 
the client to the server so that the already setup scanner just continues where 
it left off.

If the move to client-side-state could be done without incurring any 
performance loss, I think this would be a great improvement that would make 
scans easier to understand.
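
Just to make the shape of that concrete - the names below are invented, this is a sketch 
of the idea and not a proposal for the actual protobuf/RPC layout:

{code}
// Hypothetical client-side view of what each scan RPC would need to return if the server
// kept no scanner state: the rows, the last key the server actually evaluated (which can
// be past the last returned KV because of filters), and the MVCC read point used.
public final class StatelessScanPage {
  private final java.util.List<org.apache.hadoop.hbase.client.Result> results;
  private final byte[] lastEvaluatedKey; // null => the scan is complete
  private final long mvccReadPoint;      // echoed back by the client on the next request

  public StatelessScanPage(java.util.List<org.apache.hadoop.hbase.client.Result> results,
      byte[] lastEvaluatedKey, long mvccReadPoint) {
    this.results = results;
    this.lastEvaluatedKey = lastEvaluatedKey;
    this.mvccReadPoint = mvccReadPoint;
  }

  public boolean isComplete() { return lastEvaluatedKey == null; }
  public java.util.List<org.apache.hadoop.hbase.client.Result> getResults() { return results; }
  public byte[] getLastEvaluatedKey() { return lastEvaluatedKey; }
  public long getMvccReadPoint() { return mvccReadPoint; }
}
{code}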

> Scans as in DynamoDB
> 
>
> Key: HBASE-13099
> URL: https://issues.apache.org/jira/browse/HBASE-13099
> Project: HBase
>  Issue Type: Brainstorming
>  Components: Client, regionserver
>Reporter: Nicolas Liochon
>
> cc: [~saint@gmail.com] - as discussed offline.
> DynamoDB has a very simple way to manage scans server side:
> ??citation??
> The data returned from a Query or Scan operation is limited to 1 MB; this 
> means that if you scan a table that has more than 1 MB of data, you'll need 
> to perform another Scan operation to continue to the next 1 MB of data in the 
> table.
> If you query or scan for specific attributes that match values that amount to 
> more than 1 MB of data, you'll need to perform another Query or Scan request 
> for the next 1 MB of data. To do this, take the LastEvaluatedKey value from 
> the previous request, and use that value as the ExclusiveStartKey in the next 
> request. This will let you progressively query or scan for new data in 1 MB 
> increments.
> When the entire result set from a Query or Scan has been processed, the 
> LastEvaluatedKey is null. This indicates that the result set is complete 
> (i.e. the operation processed the “last page” of data).
> ??citation??
> This means that there is no state server side: the work is done client side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13102) Pseudo-distributed Mode is broken in 1.0.0

2015-02-25 Thread Elliott Clark (JIRA)
Elliott Clark created HBASE-13102:
-

 Summary: Pseudo-distributed Mode is broken in 1.0.0
 Key: HBASE-13102
 URL: https://issues.apache.org/jira/browse/HBASE-13102
 Project: HBase
  Issue Type: Bug
Reporter: Elliott Clark


{code}
2015-02-25 10:42:17,686 FATAL [main] regionserver.RSRpcServices: The hostname 
of regionserver cannot be set to localhost in a fully-distributed setup because 
it won't be reachable. See "Getting Started" for more information.
2015-02-25 10:42:17,687 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: Failed construction of Master: class 
org.apache.hadoop.hbase.master.HMaster
at 
org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2051)
at 
org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:198)
at 
org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at 
org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2065)
Caused by: java.io.IOException: The hostname of regionserver cannot be set to 
localhost in a fully-distributed setup because it won't be reachable. See 
"Getting Started" for more information.
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.<init>(RSRpcServices.java:793)
at 
org.apache.hadoop.hbase.master.MasterRpcServices.<init>(MasterRpcServices.java:198)
at 
org.apache.hadoop.hbase.master.HMaster.createRpcServices(HMaster.java:486)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:500)
at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:337)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at 
org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2046)
... 5 more

{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13102) Pseudo-distributed Mode is broken in 1.0.0

2015-02-25 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-13102:
--
Affects Version/s: 1.1.0
   1.0.0

> Pseudo-distributed Mode is broken in 1.0.0
> --
>
> Key: HBASE-13102
> URL: https://issues.apache.org/jira/browse/HBASE-13102
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0, 1.1.0
>Reporter: Elliott Clark
>
> {code}
> 2015-02-25 10:42:17,686 FATAL [main] regionserver.RSRpcServices: The hostname 
> of regionserver cannot be set to localhost in a fully-distributed setup 
> because it won't be reachable. See "Getting Started" for more information.
> 2015-02-25 10:42:17,687 ERROR [main] master.HMasterCommandLine: Master exiting
> java.lang.RuntimeException: Failed construction of Master: class 
> org.apache.hadoop.hbase.master.HMaster
>   at 
> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2051)
>   at 
> org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:198)
>   at 
> org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at 
> org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
>   at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2065)
> Caused by: java.io.IOException: The hostname of regionserver cannot be set to 
> localhost in a fully-distributed setup because it won't be reachable. See 
> "Getting Started" for more information.
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.<init>(RSRpcServices.java:793)
>   at 
> org.apache.hadoop.hbase.master.MasterRpcServices.<init>(MasterRpcServices.java:198)
>   at 
> org.apache.hadoop.hbase.master.HMaster.createRpcServices(HMaster.java:486)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:500)
>   at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:337)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>   at 
> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2046)
>   ... 5 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13087) branch-1 isn't rolling upgradable from 0.98

2015-02-25 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14336978#comment-14336978
 ] 

Elliott Clark commented on HBASE-13087:
---

So this doesn't seem to happen in local only mode. Most likely this requires a 
server to be holding the meta. So here are some repro steps that worked for me.

{code}
git checkout 0.98
git pull
mvn clean package -DskipTests
{code}

Set this into hbase-site.xml
{code}

<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
{code}

Now on tab one:
{code}
bin/hbase zookeeper
{code}

On tab two:
{code}
bin/hbase regionserver start
{code}

On Tab three:
{code}
bin/hbase master start
{code}

Wait for the master to fully come up and assign all regions ( I checked the web 
ui too ).

Ctrl-c the master.

On Tab three:
{code}
git checkout branch-1
git pull
{code}

Now here I had to remove some code in RSRpcServices.java (see HBASE-13102),
so I removed lines 787-794:
{code}
if (mode == HConstants.CLUSTER_IS_DISTRIBUTED && 
hostname.equals(HConstants.LOCALHOST)) {
  String msg =
  "The hostname of regionserver cannot be set to localhost "
  + "in a fully-distributed setup because it won't be reachable. "
  + "See \"Getting Started\" for more information.";
  LOG.fatal(msg);
  throw new IOException(msg);
}
{code}

Still in tab three:

{code}
mvn clean package -DskipTests
bin/hbase master start
{code}

That produced the error for me.

> branch-1 isn't rolling upgradable from 0.98
> ---
>
> Key: HBASE-13087
> URL: https://issues.apache.org/jira/browse/HBASE-13087
> Project: HBase
>  Issue Type: Bug
>Reporter: Elliott Clark
>Assignee: Rajesh Nishtala
>Priority: Blocker
> Fix For: 2.0.0, 1.1.0
>
>
> {code}org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: 
> Failed 1 action: 
> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column 
> family table does not exist in region hbase:meta,,1.1588230740 in table 
> 'hbase:meta', {TABLE_ATTRIBUTES => {IS_META => 'true', coprocessor$1 => 
> '|org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint|536870911|'}, 
> {NAME => 'info', BLOOMFILTER => 'NONE', VERSIONS => '10', IN_MEMORY => 
> 'true', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 
> 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', 
> BLOCKSIZE => '8192', REPLICATION_SCOPE => '0'}
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.doBatchOp(HRegionServer.java:4513)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.doNonAtomicRegionMutation(HRegionServer.java:3687)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3576)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:30816)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2029)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>   at java.lang.Thread.run(Thread.java:745)
> : 1 time, 
>   at 
> org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:228)
>   at 
> org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$1700(AsyncProcess.java:208)
>   at 
> org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndReset(AsyncProcess.java:1689)
>   at 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:208)
>   at 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:183)
>   at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1404)
>   at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1017)
>   at 
> org.apache.hadoop.hbase.MetaTableAccessor.put(MetaTableAccessor.java:1123)
>   at 
> org.apache.hadoop.hbase.MetaTableAccessor.putToMetaTable(MetaTableAccessor.java:1113)
>   at 
> org.apache.hadoop.hbase.MetaTableAccessor.updateTableState(MetaTableAccessor.java:1436)
>   at 
> org.apache.hadoop.hbase.MetaTableAccessor.updateTableState(MetaTableAccessor.java:948)
>   at 
> org.apache.hadoop.hbase.master.TableStateManager.writeMetaState(TableStateManager.java:195)
>   at 
> org.apache.hadoop.hbase.master.TableStateManager.setTableState(TableStateManager.java:69)
>   at 
> org.apache.hadoop.hbase.master.AssignmentManager.setEnabledTable(AssignmentManager.java:3427)
>   at org.apache.hadoop.hbase.master.HMaster.assignMeta(HMaster.java:903)
>   at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterI

[jira] [Commented] (HBASE-12990) MetaScanner should be replaced by MetaTableAccessor

2015-02-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14336992#comment-14336992
 ] 

stack commented on HBASE-12990:
---

@zhangduo fingered the TestAcidGuarantees failure as OOME in another issue. Let 
me scan what you posted on RB [~octo47]  Nice one.

> MetaScanner should be replaced by MetaTableAccessor
> ---
>
> Key: HBASE-12990
> URL: https://issues.apache.org/jira/browse/HBASE-12990
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Andrey Stepachev
>Assignee: Andrey Stepachev
> Attachments: HBASE-12990.patch, HBASE-12990.v2.patch, 
> HBASE-12990.v3.patch, HBASE-12990.v4.patch, HBASE-12990.v5.patch, 
> HBASE-12990.v5.patch, HBASE-12990.v5.patch, HBASE-12990.v6.patch, 
> HBASE-12990.v7.patch
>
>
> MetaScanner and MetaTableAccessor do similar things, but it seems they tend to 
> diverge. Let's have only one way to query META.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13086) Show ZK root node on Master WebUI

2015-02-25 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-13086:
-
Status: Patch Available  (was: Reopened)

> Show ZK root node on Master WebUI
> -
>
> Key: HBASE-13086
> URL: https://issues.apache.org/jira/browse/HBASE-13086
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11
>
> Attachments: 13068.jpg, HBASE-13068.00.patch, 
> HBASE-13086-0.98.addendum0.patch
>
>
> Currently we show a well-formed ZK quorum on the master webUI but not the 
> root node. Root node can be changed based on deployment, so we should list it 
> here explicitly. This information is helpful for folks playing around with 
> phoenix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13086) Show ZK root node on Master WebUI

2015-02-25 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-13086:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

This was the only related test that failed the 0.98 runs, so I've pushed. 
Thanks [~apurtell].

> Show ZK root node on Master WebUI
> -
>
> Key: HBASE-13086
> URL: https://issues.apache.org/jira/browse/HBASE-13086
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11
>
> Attachments: 13068.jpg, HBASE-13068.00.patch, 
> HBASE-13086-0.98.addendum0.patch
>
>
> Currently we show a well-formed ZK quorum on the master webUI but not the 
> root node. Root node can be changed based on deployment, so we should list it 
> here explicitly. This information is helpful for folks playing around with 
> phoenix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13086) Show ZK root node on Master WebUI

2015-02-25 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-13086:
-
Attachment: HBASE-13086-0.98.addendum0.patch

Patch for 0.98 that wires the mock object for both access paths to the 
ZooKeeperWatcher object.

> Show ZK root node on Master WebUI
> -
>
> Key: HBASE-13086
> URL: https://issues.apache.org/jira/browse/HBASE-13086
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11
>
> Attachments: 13068.jpg, HBASE-13068.00.patch, 
> HBASE-13086-0.98.addendum0.patch
>
>
> Currently we show a well-formed ZK quorum on the master webUI but not the 
> root node. Root node can be changed based on deployment, so we should list it 
> here explicitly. This information is helpful for folks playing around with 
> phoenix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13099) Scans as in DynamoDB

2015-02-25 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337017#comment-14337017
 ] 

Enis Soztutar commented on HBASE-13099:
---

I think we may have to keep at least some state in the server, even if we do a 
cell-based scanner. Our contract is per-row atomicity, so we have to keep track 
of: 
1. read point while scanning inside a row. 
2. low watermark for the read points across all "open" scanners for the region. 

(1) can even be extended to be a region-based contract if we consider atomic 
cross-row updates using the MultiRowMutationEndpoint. (2) is needed for 
effectively getting rid of seqIds of cells in hfiles. 

We keep (1) in the server side right now, and we use the row-based scanner 
contract for (1). The client either gets the whole row, or not. The scanner can 
be restarted across rows, which changes the scanner read point, but it is fine 
since there is no guarantees across rows for visibility (excluding single 
region multi-row transactions). 

From a semantics point of view, (1) can be achieved by sending the read 
point to the client every time a scan is started within a region. The client 
will keep track of 1 read point per region. Any subsequent scans performed 
from the client in the region will also send this read point to the server so 
that the scan does not see partial data. (2) can be solved by either not 
deleting seqIds of cells in hfiles (which we do to optimize disk usage), or 
keeping track of all open scanners' read points, which still requires some 
state (even though very small) in the server. 
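
A minimal sketch of the client-side half of (1), with invented names (not a real API):

{code}
// Sketch only: the client remembers the read point the server handed back for each region
// and echoes it on every subsequent scan RPC to that region, so a restarted scan never
// sees a partial row. Dropping the entry lets the server's low watermark advance.
public class RegionReadPointTracker {
  private final java.util.Map<String, Long> readPointByRegion =
      new java.util.concurrent.ConcurrentHashMap<>();

  /** Called when a scan response from a region carries the server's read point. */
  public void remember(String encodedRegionName, long serverReadPoint) {
    readPointByRegion.putIfAbsent(encodedRegionName, serverReadPoint);
  }

  /** Read point to send on the next scan request; -1 means "server picks a fresh one". */
  public long forNextRequest(String encodedRegionName) {
    return readPointByRegion.getOrDefault(encodedRegionName, -1L);
  }

  /** Called once the client is done scanning the region. */
  public void forget(String encodedRegionName) {
    readPointByRegion.remove(encodedRegionName);
  }
}
{code}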

> Scans as in DynamoDB
> 
>
> Key: HBASE-13099
> URL: https://issues.apache.org/jira/browse/HBASE-13099
> Project: HBase
>  Issue Type: Brainstorming
>  Components: Client, regionserver
>Reporter: Nicolas Liochon
>
> cc: [~saint@gmail.com] - as discussed offline.
> DynamoDB has a very simple way to manage scans server side:
> ??citation??
> The data returned from a Query or Scan operation is limited to 1 MB; this 
> means that if you scan a table that has more than 1 MB of data, you'll need 
> to perform another Scan operation to continue to the next 1 MB of data in the 
> table.
> If you query or scan for specific attributes that match values that amount to 
> more than 1 MB of data, you'll need to perform another Query or Scan request 
> for the next 1 MB of data. To do this, take the LastEvaluatedKey value from 
> the previous request, and use that value as the ExclusiveStartKey in the next 
> request. This will let you progressively query or scan for new data in 1 MB 
> increments.
> When the entire result set from a Query or Scan has been processed, the 
> LastEvaluatedKey is null. This indicates that the result set is complete 
> (i.e. the operation processed the “last page” of data).
> ??citation??
> This means that there is no state server side: the work is done client side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13103) [ergonomics] add shell,API to "reshape" a table

2015-02-25 Thread Nick Dimiduk (JIRA)
Nick Dimiduk created HBASE-13103:


 Summary: [ergonomics] add shell,API to "reshape" a table
 Key: HBASE-13103
 URL: https://issues.apache.org/jira/browse/HBASE-13103
 Project: HBase
  Issue Type: Brainstorming
  Components: Usability
Reporter: Nick Dimiduk


Often enough, folks misjudge split points or otherwise end up with a 
suboptimal number of regions. We should have an automated, reliable way to 
"reshape" or "balance" a table's region boundaries. This would be for tables 
that contain existing data. This might look like:

{noformat}
Admin#reshapeTable(TableName, int numSplits);
{noformat}

or from the shell:

{noformat}
> reshape TABLE, numSplits
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13098) HBase Connection Control

2015-02-25 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-13098:
--
Fix Version/s: (was: 1.0.1)

> HBase Connection Control
> 
>
> Key: HBASE-13098
> URL: https://issues.apache.org/jira/browse/HBASE-13098
> Project: HBase
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 0.98.10
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.1.0, 0.98.11
>
> Attachments: HBASE-13098.patch, HBase Connection Control.pdf
>
>
> It is desirable to set the limit on the number of client connections 
> permitted to the HBase server by controlling with certain system 
> variables/parameters. Too many connections to the HBase server imply too many 
> queries and MR jobs running on HBase. This can slow down the performance of 
> the system and lead to denial of service. Hence such connections need to be 
> controlled. Using too many connections may just cause thrashing rather than 
> get more useful work done.
> This is kind off inspired from 
> http://www.ebaytechblog.com/2014/08/21/quality-of-service-in-hadoop/#.VO2JXXyUe9y



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13102) Pseudo-distributed Mode is broken in 1.0.0

2015-02-25 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-13102:
--
Fix Version/s: 1.1.0
   1.0.1
   2.0.0

> Pseudo-distributed Mode is broken in 1.0.0
> --
>
> Key: HBASE-13102
> URL: https://issues.apache.org/jira/browse/HBASE-13102
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0, 1.1.0
>Reporter: Elliott Clark
> Fix For: 2.0.0, 1.0.1, 1.1.0
>
>
> {code}
> 2015-02-25 10:42:17,686 FATAL [main] regionserver.RSRpcServices: The hostname 
> of regionserver cannot be set to localhost in a fully-distributed setup 
> because it won't be reachable. See "Getting Started" for more information.
> 2015-02-25 10:42:17,687 ERROR [main] master.HMasterCommandLine: Master exiting
> java.lang.RuntimeException: Failed construction of Master: class 
> org.apache.hadoop.hbase.master.HMaster
>   at 
> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2051)
>   at 
> org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:198)
>   at 
> org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at 
> org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
>   at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2065)
> Caused by: java.io.IOException: The hostname of regionserver cannot be set to 
> localhost in a fully-distributed setup because it won't be reachable. See 
> "Getting Started" for more information.
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.<init>(RSRpcServices.java:793)
>   at 
> org.apache.hadoop.hbase.master.MasterRpcServices.<init>(MasterRpcServices.java:198)
>   at 
> org.apache.hadoop.hbase.master.HMaster.createRpcServices(HMaster.java:486)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:500)
>   at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:337)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>   at 
> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2046)
>   ... 5 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13102) Pseudo-distributed Mode is broken in 1.0.0

2015-02-25 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337037#comment-14337037
 ] 

Enis Soztutar commented on HBASE-13102:
---

Undo HBASE-12263 ? 

> Pseudo-distributed Mode is broken in 1.0.0
> --
>
> Key: HBASE-13102
> URL: https://issues.apache.org/jira/browse/HBASE-13102
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0, 1.1.0
>Reporter: Elliott Clark
> Fix For: 2.0.0, 1.0.1, 1.1.0
>
>
> {code}
> 2015-02-25 10:42:17,686 FATAL [main] regionserver.RSRpcServices: The hostname 
> of regionserver cannot be set to localhost in a fully-distributed setup 
> because it won't be reachable. See "Getting Started" for more information.
> 2015-02-25 10:42:17,687 ERROR [main] master.HMasterCommandLine: Master exiting
> java.lang.RuntimeException: Failed construction of Master: class 
> org.apache.hadoop.hbase.master.HMaster
>   at 
> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2051)
>   at 
> org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:198)
>   at 
> org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at 
> org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
>   at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2065)
> Caused by: java.io.IOException: The hostname of regionserver cannot be set to 
> localhost in a fully-distributed setup because it won't be reachable. See 
> "Getting Started" for more information.
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.<init>(RSRpcServices.java:793)
>   at 
> org.apache.hadoop.hbase.master.MasterRpcServices.<init>(MasterRpcServices.java:198)
>   at 
> org.apache.hadoop.hbase.master.HMaster.createRpcServices(HMaster.java:486)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:500)
>   at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:337)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>   at 
> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2046)
>   ... 5 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13102) Pseudo-distributed Mode is broken in 1.0.0

2015-02-25 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337040#comment-14337040
 ] 

Enis Soztutar commented on HBASE-13102:
---

[~liushaohui] any input ? 

> Pseudo-distributed Mode is broken in 1.0.0
> --
>
> Key: HBASE-13102
> URL: https://issues.apache.org/jira/browse/HBASE-13102
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0, 1.1.0
>Reporter: Elliott Clark
> Fix For: 2.0.0, 1.0.1, 1.1.0
>
>
> {code}
> 2015-02-25 10:42:17,686 FATAL [main] regionserver.RSRpcServices: The hostname 
> of regionserver cannot be set to localhost in a fully-distributed setup 
> because it won't be reachable. See "Getting Started" for more information.
> 2015-02-25 10:42:17,687 ERROR [main] master.HMasterCommandLine: Master exiting
> java.lang.RuntimeException: Failed construction of Master: class 
> org.apache.hadoop.hbase.master.HMaster
>   at 
> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2051)
>   at 
> org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:198)
>   at 
> org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at 
> org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
>   at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2065)
> Caused by: java.io.IOException: The hostname of regionserver cannot be set to 
> localhost in a fully-distributed setup because it won't be reachable. See 
> "Getting Started" for more information.
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.<init>(RSRpcServices.java:793)
>   at 
> org.apache.hadoop.hbase.master.MasterRpcServices.<init>(MasterRpcServices.java:198)
>   at 
> org.apache.hadoop.hbase.master.HMaster.createRpcServices(HMaster.java:486)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:500)
>   at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:337)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>   at 
> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2046)
>   ... 5 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13103) [ergonomics] add shell,API to "reshape" a table

2015-02-25 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337062#comment-14337062
 ] 

Jean-Marc Spaggiari commented on HBASE-13103:
-

I love the idea, but do you see any way to do that online? 

Using merges and splits we might be able to do that.

Do you have any design in mind?

> [ergonomics] add shell,API to "reshape" a table
> ---
>
> Key: HBASE-13103
> URL: https://issues.apache.org/jira/browse/HBASE-13103
> Project: HBase
>  Issue Type: Brainstorming
>  Components: Usability
>Reporter: Nick Dimiduk
>
> Often enough, folks misjudge split points or otherwise end up with a 
> suboptimal number of regions. We should have an automated, reliable way to 
> "reshape" or "balance" a table's region boundaries. This would be for tables 
> that contain existing data. This might look like:
> {noformat}
> Admin#reshapeTable(TableName, int numSplits);
> {noformat}
> or from the shell:
> {noformat}
> > reshape TABLE, numSplits
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13103) [ergonomics] add shell,API to "reshape" a table

2015-02-25 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337084#comment-14337084
 ] 

Mikhail Antonov commented on HBASE-13103:
-

Online - you mean table-wise, or region-wise? I would think this might work in 
2 steps:

 - the command runs and generates a "reshaping plan", which is a series of 
recommended split/merge commands. They could then be either confirmed by the admin 
(and pushed to execution), or exported as a script for further review?
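
Purely to illustrate the two-step idea - the command and output below are invented, 
nothing like this exists yet:

{noformat}
hbase> reshape 'usertable', 16, DRY_RUN => true
current regions: 41, target regions: 16
plan:
  merge 'usertable,\x00,...', 'usertable,\x08,...'
  merge 'usertable,\x10,...', 'usertable,\x18,...'
  ...
plan exported to /tmp/usertable_reshape.rb; review it, then re-run without DRY_RUN to execute
{noformat}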

> [ergonomics] add shell,API to "reshape" a table
> ---
>
> Key: HBASE-13103
> URL: https://issues.apache.org/jira/browse/HBASE-13103
> Project: HBase
>  Issue Type: Brainstorming
>  Components: Usability
>Reporter: Nick Dimiduk
>
> Often enough, folks misjudge split points or otherwise end up with a 
> suboptimal number of regions. We should have an automated, reliable way to 
> "reshape" or "balance" a table's region boundaries. This would be for tables 
> that contain existing data. This might look like:
> {noformat}
> Admin#reshapeTable(TableName, int numSplits);
> {noformat}
> or from the shell:
> {noformat}
> > reshape TABLE, numSplits
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13103) [ergonomics] add shell,API to "reshape" a table

2015-02-25 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337086#comment-14337086
 ] 

Mikhail Antonov commented on HBASE-13103:
-

My guess would be that if a command executes a number of splits and merges 
behind the scenes, production admins would want to review and approve them first?

> [ergonomics] add shell,API to "reshape" a table
> ---
>
> Key: HBASE-13103
> URL: https://issues.apache.org/jira/browse/HBASE-13103
> Project: HBase
>  Issue Type: Brainstorming
>  Components: Usability
>Reporter: Nick Dimiduk
>
> Often enough, folks misjudge split points or otherwise end up with a 
> suboptimal number of regions. We should have an automated, reliable way to 
> "reshape" or "balance" a table's region boundaries. This would be for tables 
> that contain existing data. This might look like:
> {noformat}
> Admin#reshapeTable(TableName, int numSplits);
> {noformat}
> or from the shell:
> {noformat}
> > reshape TABLE, numSplits
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13091) Split ZK Quorum on Master WebUI

2015-02-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337088#comment-14337088
 ] 

Hadoop QA commented on HBASE-13091:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12700816/HBASE-13091-v1-trunk.patch
  against master branch at commit c651271f5759f39f28209a50ab88a62d86b7.
  ATTACHMENT ID: 12700816

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.
{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12965//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12965//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12965//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12965//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12965//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12965//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12965//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12965//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12965//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12965//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12965//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12965//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12965//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12965//console

This message is automatically generated.

> Split ZK Quorum on Master WebUI
> ---
>
> Key: HBASE-13091
> URL: https://issues.apache.org/jira/browse/HBASE-13091
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.1, 0.98.10.1
>Reporter: Jean-Marc Spaggiari
>Assignee: Jean-Marc Spaggiari
>Priority: Minor
> Attachments: HBASE-13091-v0-trunk.patch, HBASE-13091-v1-trunk.patch, 
> screenshot.png
>
>
> When using several ZK servers, the Master WebUI creates a very large 
> column for the quorum and so shrinks the others a lot, wrapping all the lines and 
> creating tall cells.
> Splitting the ZK quorum with one server per line will make it nicer and easier 
> to read.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12949) Scanner can be stuck in infinite loop if the HFile is corrupted

2015-02-25 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337221#comment-14337221
 ] 

Jerry He commented on HBASE-12949:
--

Hi, [~stack]

Thanks for getting back to this.
BufferUnderflowException and IllegalStateException are both subclasses of 
RuntimeException. Unchecked, no better, no worse. At least we are using 
IllegalStateException currently.

Also, BufferUnderflowException is where we cannot even read the length. The 
other is where we can read the length, but it is bad. 
Here is the case. 
Two KVs:  

With the patch, we fail at the first bad kv.
Without the patch: 
1. we can get stuck in the first bad kv. 
2. BufferUnderflowException in the next kv. This is what happened in the 
HFilePrettyPrinter, which uses an ad hoc scanner.
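
The shape of the check being discussed is roughly the following - a sketch only, not the 
actual patch:

{code}
// Sketch: validate the key/value lengths read from the block buffer before trusting them,
// and fail fast with an exception naming the offset, instead of looping forever on the
// current kv or underflowing while reading the next one.
public class KeyValueLenCheckSketch {
  static void checkKeyValueLen(int keyLen, int valueLen, long offset, String hfileName) {
    if (keyLen <= 0 || valueLen < 0) {
      throw new IllegalStateException("Invalid key/value length (" + keyLen + "/" + valueLen
          + ") at offset " + offset + " in " + hfileName + "; file is likely corrupt");
    }
  }
}
{code}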

> Scanner can be stuck in infinite loop if the HFile is corrupted
> ---
>
> Key: HBASE-12949
> URL: https://issues.apache.org/jira/browse/HBASE-12949
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.3, 0.98.10
>Reporter: Jerry He
> Attachments: HBASE-12949-master-v2 (1).patch, 
> HBASE-12949-master-v2.patch, HBASE-12949-master-v2.patch, 
> HBASE-12949-master.patch
>
>
> We've encountered a problem where compaction hangs and never completes.
> After looking into it further, we found that the compaction scanner was stuck 
> in an infinite loop. See stack below.
> {noformat}
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:296)
> org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:257)
> org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:697)
> org.apache.hadoop.hbase.regionserver.StoreScanner.seekToNextRow(StoreScanner.java:672)
> org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:529)
> org.apache.hadoop.hbase.regionserver.compactions.Compactor.performCompaction(Compactor.java:223)
> {noformat}
> We identified the hfile that seems to be corrupted.  Using HFile tool shows 
> the following:
> {noformat}
> [biadmin@hdtest009 bin]$ hbase org.apache.hadoop.hbase.io.hfile.HFile -v -k 
> -m -f 
> /user/biadmin/CUMMINS_INSITE_V1/7106432d294dd844be15996ccbf2ba84/attributes/f1a7e3113c2c4047ac1fc8fbcb41d8b7
> 15/01/23 11:53:17 INFO Configuration.deprecation: hadoop.native.lib is 
> deprecated. Instead, use io.native.lib.available
> 15/01/23 11:53:18 INFO util.ChecksumType: Checksum using 
> org.apache.hadoop.util.PureJavaCrc32
> 15/01/23 11:53:18 INFO util.ChecksumType: Checksum can use 
> org.apache.hadoop.util.PureJavaCrc32C
> 15/01/23 11:53:18 INFO Configuration.deprecation: fs.default.name is 
> deprecated. Instead, use fs.defaultFS
> Scanning -> 
> /user/biadmin/CUMMINS_INSITE_V1/7106432d294dd844be15996ccbf2ba84/attributes/f1a7e3113c2c4047ac1fc8fbcb41d8b7
> WARNING, previous row is greater then current row
> filename -> 
> /user/biadmin/CUMMINS_INSITE_V1/7106432d294dd844be15996ccbf2ba84/attributes/f1a7e3113c2c4047ac1fc8fbcb41d8b7
> previous -> 
> \x00/20110203-094231205-79442793-1410161293068203000\x0Aattributes16794406\x00\x00\x01\x00\x00\x00\x00\x00\x00
> current  ->
> Exception in thread "main" java.nio.BufferUnderflowException
> at java.nio.Buffer.nextGetIndex(Buffer.java:489)
> at java.nio.HeapByteBuffer.getInt(HeapByteBuffer.java:347)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.readKeyValueLen(HFileReaderV2.java:856)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:768)
> at 
> org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.scanKeysValues(HFilePrettyPrinter.java:362)
> at 
> org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.processFile(HFilePrettyPrinter.java:262)
> at 
> org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.run(HFilePrettyPrinter.java:220)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> org.apache.hadoop.hbase.io.hfile.HFilePrettyPrinter.main(HFilePrettyPrinter.java:539)
> at org.apache.hadoop.hbase.io.hfile.HFile.main(HFile.java:802)
> {noformat}
> Turning on Java Assert shows the following:
> {noformat}
> Exception in thread "main" java.lang.AssertionError: Key 
> 20110203-094231205-79442793-1410161293068203000/attributes:16794406/1099511627776/Minimum/vlen=15/mvcc=0
>  followed by a smaller key //0/Minimum/vlen=0/mvcc=0 in cf attributes
> at 
> org.apache.hadoop.hbase.regionserver.StoreScanner.checkScanOrder(StoreScanner.java:672)
> {noformat}
> It shows that the hfile seems to be corrupted -- the keys don't seem to be 
> right.
> But the scanner is not able to give a meaningful error; instead it gets stuck 
> in an infinite loop here:
> {code}
> KeyValueHeap.generalizedSeek()
> while ((scanner = heap.poll(

[jira] [Comment Edited] (HBASE-13102) Pseudo-distributed Mode is broken in 1.0.0

2015-02-25 Thread Esteban Gutierrez (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337238#comment-14337238
 ] 

Esteban Gutierrez edited comment on HBASE-13102 at 2/25/15 9:26 PM:


[~enis], we have been discussing exactly the same issue internally; the 
quickest thing to do is to revert and come up with a better solution for 
HBASE-12263. HBASE-12954 should do the trick, I think.


was (Author: esteban):
[~enis] we have been discussing exactly the same issue, the quickest thing to 
do is to revert and come with a better solution for HBASE-12263. HBASE-12954 
should do the trick I think.

> Pseudo-distributed Mode is broken in 1.0.0
> --
>
> Key: HBASE-13102
> URL: https://issues.apache.org/jira/browse/HBASE-13102
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0, 1.1.0
>Reporter: Elliott Clark
> Fix For: 2.0.0, 1.0.1, 1.1.0
>
>
> {code}
> 2015-02-25 10:42:17,686 FATAL [main] regionserver.RSRpcServices: The hostname 
> of regionserver cannot be set to localhost in a fully-distributed setup 
> because it won't be reachable. See "Getting Started" for more information.
> 2015-02-25 10:42:17,687 ERROR [main] master.HMasterCommandLine: Master exiting
> java.lang.RuntimeException: Failed construction of Master: class 
> org.apache.hadoop.hbase.master.HMaster
>   at 
> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2051)
>   at 
> org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:198)
>   at 
> org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at 
> org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
>   at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2065)
> Caused by: java.io.IOException: The hostname of regionserver cannot be set to 
> localhost in a fully-distributed setup because it won't be reachable. See 
> "Getting Started" for more information.
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.<init>(RSRpcServices.java:793)
>   at 
> org.apache.hadoop.hbase.master.MasterRpcServices.<init>(MasterRpcServices.java:198)
>   at 
> org.apache.hadoop.hbase.master.HMaster.createRpcServices(HMaster.java:486)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:500)
>   at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:337)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>   at 
> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2046)
>   ... 5 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13102) Pseudo-distributed Mode is broken in 1.0.0

2015-02-25 Thread Esteban Gutierrez (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337238#comment-14337238
 ] 

Esteban Gutierrez commented on HBASE-13102:
---

[~enis] we have been discussing exactly the same issue; the quickest thing to 
do is to revert and come up with a better solution for HBASE-12263. HBASE-12954 
should do the trick, I think.

> Pseudo-distributed Mode is broken in 1.0.0
> --
>
> Key: HBASE-13102
> URL: https://issues.apache.org/jira/browse/HBASE-13102
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0, 1.1.0
>Reporter: Elliott Clark
> Fix For: 2.0.0, 1.0.1, 1.1.0
>
>
> {code}
> 2015-02-25 10:42:17,686 FATAL [main] regionserver.RSRpcServices: The hostname 
> of regionserver cannot be set to localhost in a fully-distributed setup 
> because it won't be reachable. See "Getting Started" for more information.
> 2015-02-25 10:42:17,687 ERROR [main] master.HMasterCommandLine: Master exiting
> java.lang.RuntimeException: Failed construction of Master: class 
> org.apache.hadoop.hbase.master.HMaster
>   at 
> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2051)
>   at 
> org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:198)
>   at 
> org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at 
> org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
>   at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2065)
> Caused by: java.io.IOException: The hostname of regionserver cannot be set to 
> localhost in a fully-distributed setup because it won't be reachable. See 
> "Getting Started" for more information.
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.<init>(RSRpcServices.java:793)
>   at 
> org.apache.hadoop.hbase.master.MasterRpcServices.<init>(MasterRpcServices.java:198)
>   at 
> org.apache.hadoop.hbase.master.HMaster.createRpcServices(HMaster.java:486)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:500)
>   at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:337)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>   at 
> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2046)
>   ... 5 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13102) Fix Pseudo-distributed Mode which was broken in 1.0.0

2015-02-25 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-13102:
--
Summary: Fix Pseudo-distributed Mode which was broken in 1.0.0  (was: 
Pseudo-distributed Mode is broken in 1.0.0)

> Fix Pseudo-distributed Mode which was broken in 1.0.0
> -
>
> Key: HBASE-13102
> URL: https://issues.apache.org/jira/browse/HBASE-13102
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0, 1.1.0
>Reporter: Elliott Clark
> Fix For: 2.0.0, 1.0.1, 1.1.0
>
>
> {code}
> 2015-02-25 10:42:17,686 FATAL [main] regionserver.RSRpcServices: The hostname 
> of regionserver cannot be set to localhost in a fully-distributed setup 
> because it won't be reachable. See "Getting Started" for more information.
> 2015-02-25 10:42:17,687 ERROR [main] master.HMasterCommandLine: Master exiting
> java.lang.RuntimeException: Failed construction of Master: class 
> org.apache.hadoop.hbase.master.HMaster
>   at 
> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2051)
>   at 
> org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:198)
>   at 
> org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at 
> org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
>   at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2065)
> Caused by: java.io.IOException: The hostname of regionserver cannot be set to 
> localhost in a fully-distributed setup because it won't be reachable. See 
> "Getting Started" for more information.
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.<init>(RSRpcServices.java:793)
>   at 
> org.apache.hadoop.hbase.master.MasterRpcServices.<init>(MasterRpcServices.java:198)
>   at 
> org.apache.hadoop.hbase.master.HMaster.createRpcServices(HMaster.java:486)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:500)
>   at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:337)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>   at 
> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2046)
>   ... 5 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13103) [ergonomics] add shell,API to "reshape" a table

2015-02-25 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337249#comment-14337249
 ] 

Enis Soztutar commented on HBASE-13103:
---

Related: Accumulo has a merge command which merges a range into a single 
tablet. We can do this and the range merge together for maximum flexibility.
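
Purely as a rough sketch of the boundary computation a "reshape" like the one 
proposed below would need (the reshapeTable call quoted below is itself only a 
proposal, not an existing API), and assuming the existing 
org.apache.hadoop.hbase.util.Bytes.split helper:

{code}
import org.apache.hadoop.hbase.util.Bytes;

// Illustrative sketch only -- there is no reshape API today. Given a table's
// overall key range, compute a set of evenly spaced candidate boundaries that
// a reshape could then split/merge regions toward.
public class ReshapeSketch {
  public static byte[][] targetBoundaries(byte[] firstStartKey, byte[] lastEndKey,
      int numSplits) {
    // Assumption: Bytes.split returns the two endpoints plus the requested
    // number of evenly spaced intermediate split points.
    return Bytes.split(firstStartKey, lastEndKey, numSplits);
  }
}
{code}

The hard part is not picking the boundaries but doing the splits/merges safely 
against a table with existing data, which is what this brainstorming is about.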

> [ergonomics] add shell,API to "reshape" a table
> ---
>
> Key: HBASE-13103
> URL: https://issues.apache.org/jira/browse/HBASE-13103
> Project: HBase
>  Issue Type: Brainstorming
>  Components: Usability
>Reporter: Nick Dimiduk
>
> Often enough, folks misjudge split points or otherwise end up with a 
> suboptimal number of regions. We should have an automated, reliable way to 
> "reshape" or "balance" a table's region boundaries. This would be for tables 
> that contain existing data. This might look like:
> {noformat}
> Admin#reshapeTable(TableName, int numSplits);
> {noformat}
> or from the shell:
> {noformat}
> > reshape TABLE, numSplits
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-13102) Fix Pseudo-distributed Mode which was broken in 1.0.0

2015-02-25 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark reassigned HBASE-13102:
-

Assignee: Elliott Clark

> Fix Pseudo-distributed Mode which was broken in 1.0.0
> -
>
> Key: HBASE-13102
> URL: https://issues.apache.org/jira/browse/HBASE-13102
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0, 1.1.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.0.1, 1.1.0
>
> Attachments: HBASE-13102.patch
>
>
> {code}
> 2015-02-25 10:42:17,686 FATAL [main] regionserver.RSRpcServices: The hostname 
> of regionserver cannot be set to localhost in a fully-distributed setup 
> because it won't be reachable. See "Getting Started" for more information.
> 2015-02-25 10:42:17,687 ERROR [main] master.HMasterCommandLine: Master exiting
> java.lang.RuntimeException: Failed construction of Master: class 
> org.apache.hadoop.hbase.master.HMaster
>   at 
> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2051)
>   at 
> org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:198)
>   at 
> org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at 
> org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
>   at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2065)
> Caused by: java.io.IOException: The hostname of regionserver cannot be set to 
> localhost in a fully-distributed setup because it won't be reachable. See 
> "Getting Started" for more information.
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.<init>(RSRpcServices.java:793)
>   at 
> org.apache.hadoop.hbase.master.MasterRpcServices.<init>(MasterRpcServices.java:198)
>   at 
> org.apache.hadoop.hbase.master.HMaster.createRpcServices(HMaster.java:486)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:500)
>   at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:337)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>   at 
> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2046)
>   ... 5 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13102) Fix Pseudo-distributed Mode which was broken in 1.0.0

2015-02-25 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-13102:
--
Status: Patch Available  (was: Open)

> Fix Pseudo-distributed Mode which was broken in 1.0.0
> -
>
> Key: HBASE-13102
> URL: https://issues.apache.org/jira/browse/HBASE-13102
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0, 1.1.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.0.1, 1.1.0
>
> Attachments: HBASE-13102.patch
>
>
> {code}
> 2015-02-25 10:42:17,686 FATAL [main] regionserver.RSRpcServices: The hostname 
> of regionserver cannot be set to localhost in a fully-distributed setup 
> because it won't be reachable. See "Getting Started" for more information.
> 2015-02-25 10:42:17,687 ERROR [main] master.HMasterCommandLine: Master exiting
> java.lang.RuntimeException: Failed construction of Master: class 
> org.apache.hadoop.hbase.master.HMaster
>   at 
> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2051)
>   at 
> org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:198)
>   at 
> org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at 
> org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
>   at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2065)
> Caused by: java.io.IOException: The hostname of regionserver cannot be set to 
> localhost in a fully-distributed setup because it won't be reachable. See 
> "Getting Started" for more information.
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.<init>(RSRpcServices.java:793)
>   at 
> org.apache.hadoop.hbase.master.MasterRpcServices.<init>(MasterRpcServices.java:198)
>   at 
> org.apache.hadoop.hbase.master.HMaster.createRpcServices(HMaster.java:486)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:500)
>   at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:337)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>   at 
> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2046)
>   ... 5 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13102) Fix Pseudo-distributed Mode which was broken in 1.0.0

2015-02-25 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-13102:
--
Attachment: HBASE-13102.patch

This is what worked for me while debugging a different issue.
However, it means that a better solution still needs to be found, as 
[~esteban] suggests.

> Fix Pseudo-distributed Mode which was broken in 1.0.0
> -
>
> Key: HBASE-13102
> URL: https://issues.apache.org/jira/browse/HBASE-13102
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0, 1.1.0
>Reporter: Elliott Clark
> Fix For: 2.0.0, 1.0.1, 1.1.0
>
> Attachments: HBASE-13102.patch
>
>
> {code}
> 2015-02-25 10:42:17,686 FATAL [main] regionserver.RSRpcServices: The hostname 
> of regionserver cannot be set to localhost in a fully-distributed setup 
> because it won't be reachable. See "Getting Started" for more information.
> 2015-02-25 10:42:17,687 ERROR [main] master.HMasterCommandLine: Master exiting
> java.lang.RuntimeException: Failed construction of Master: class 
> org.apache.hadoop.hbase.master.HMaster
>   at 
> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2051)
>   at 
> org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:198)
>   at 
> org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at 
> org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
>   at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2065)
> Caused by: java.io.IOException: The hostname of regionserver cannot be set to 
> localhost in a fully-distributed setup because it won't be reachable. See 
> "Getting Started" for more information.
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.<init>(RSRpcServices.java:793)
>   at 
> org.apache.hadoop.hbase.master.MasterRpcServices.<init>(MasterRpcServices.java:198)
>   at 
> org.apache.hadoop.hbase.master.HMaster.createRpcServices(HMaster.java:486)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:500)
>   at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:337)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>   at 
> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2046)
>   ... 5 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13102) Fix Pseudo-distributed Mode which was broken in 1.0.0

2015-02-25 Thread Esteban Gutierrez (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337252#comment-14337252
 ] 

Esteban Gutierrez commented on HBASE-13102:
---

+1

> Fix Pseudo-distributed Mode which was broken in 1.0.0
> -
>
> Key: HBASE-13102
> URL: https://issues.apache.org/jira/browse/HBASE-13102
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0, 1.1.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.0.1, 1.1.0
>
> Attachments: HBASE-13102.patch
>
>
> {code}
> 2015-02-25 10:42:17,686 FATAL [main] regionserver.RSRpcServices: The hostname 
> of regionserver cannot be set to localhost in a fully-distributed setup 
> because it won't be reachable. See "Getting Started" for more information.
> 2015-02-25 10:42:17,687 ERROR [main] master.HMasterCommandLine: Master exiting
> java.lang.RuntimeException: Failed construction of Master: class 
> org.apache.hadoop.hbase.master.HMaster
>   at 
> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2051)
>   at 
> org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:198)
>   at 
> org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at 
> org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
>   at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2065)
> Caused by: java.io.IOException: The hostname of regionserver cannot be set to 
> localhost in a fully-distributed setup because it won't be reachable. See 
> "Getting Started" for more information.
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.<init>(RSRpcServices.java:793)
>   at 
> org.apache.hadoop.hbase.master.MasterRpcServices.<init>(MasterRpcServices.java:198)
>   at 
> org.apache.hadoop.hbase.master.HMaster.createRpcServices(HMaster.java:486)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:500)
>   at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:337)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>   at 
> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2046)
>   ... 5 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13086) Show ZK root node on Master WebUI

2015-02-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337272#comment-14337272
 ] 

Hadoop QA commented on HBASE-13086:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12700839/HBASE-13086-0.98.addendum0.patch
  against 0.98 branch at commit c651271f5759f39f28209a50ab88a62d86b7.
  ATTACHMENT ID: 12700839

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.
{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
25 warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:red}-1 findbugs{color}.  The patch appears to introduce 4 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12966//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12966//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12966//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12966//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12966//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12966//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12966//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12966//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12966//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12966//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12966//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12966//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12966//artifact/patchprocess/checkstyle-aggregate.html

  Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12966//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/12966//console

This message is automatically generated.

> Show ZK root node on Master WebUI
> -
>
> Key: HBASE-13086
> URL: https://issues.apache.org/jira/browse/HBASE-13086
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11
>
> Attachments: 13068.jpg, HBASE-13068.00.patch, 
> HBASE-13086-0.98.addendum0.patch
>
>
> Currently we show a well-formed ZK quorum on the master webUI but not the 
> root node. Root node can be changed based on deployment, so we should list it 
> here explicitly. This information is helpful for folks playing around with 
> phoenix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13104) ZooKeeper session timeout cannot be changed for standalone HBase

2015-02-25 Thread Alex Araujo (JIRA)
Alex Araujo created HBASE-13104:
---

 Summary: ZooKeeper session timeout cannot be changed for 
standalone HBase
 Key: HBASE-13104
 URL: https://issues.apache.org/jira/browse/HBASE-13104
 Project: HBase
  Issue Type: Bug
  Components: master
Affects Versions: 0.98.10.1
Reporter: Alex Araujo


It's not possible to increase the ZooKeeper session timeout in standalone HBase 
due to a hardcoded 10s timeout in HMasterCommandLine:

https://github.com/apache/hbase/blob/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMasterCommandLine.java#L176

In trunk you can append .localHBaseCluster to the ZK session timeout property 
name to change the timeout:

https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMasterCommandLine.java#L169-171

We should allow changing the timeout in 0.98 and other versions where it's not 
possible to do so.
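
As a minimal sketch of the trunk workaround described above, assuming the ZK 
session timeout property is zookeeper.session.timeout (the suffixed key is only 
consulted by HMasterCommandLine on trunk, which is exactly why 0.98 needs a fix):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class StandaloneZkTimeoutExample {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Assumption: on trunk, HMasterCommandLine reads this ".localHBaseCluster"
    // suffixed key for the standalone/local cluster; on 0.98 the value is
    // ignored and the 10s timeout stays hardcoded.
    conf.setInt("zookeeper.session.timeout.localHBaseCluster", 90 * 1000);
  }
}
{code}

The same key could just as well go into hbase-site.xml; the point is that 
nothing reads it on 0.98 today.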



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13104) ZooKeeper session timeout cannot be changed for standalone HBase

2015-02-25 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13104:
---
Fix Version/s: 0.98.11
 Assignee: Alex Araujo

> ZooKeeper session timeout cannot be changed for standalone HBase
> 
>
> Key: HBASE-13104
> URL: https://issues.apache.org/jira/browse/HBASE-13104
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 0.98.10.1
>Reporter: Alex Araujo
>Assignee: Alex Araujo
> Fix For: 0.98.11
>
>
> It's not possible to increase the ZooKeeper session timeout in standalone 
> HBase due to a hardcoded 10s timeout in HMasterCommandLine:
> https://github.com/apache/hbase/blob/0.98/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMasterCommandLine.java#L176
> In trunk you can append .localHBaseCluster to the ZK session timeout property 
> name to change the timeout:
> https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMasterCommandLine.java#L169-171
> We should allow changing the timeout in 0.98 and other versions where it's 
> not possible to do so.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13086) Show ZK root node on Master WebUI

2015-02-25 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337321#comment-14337321
 ] 

Andrew Purtell commented on HBASE-13086:


Thanks [~ndimiduk]

> Show ZK root node on Master WebUI
> -
>
> Key: HBASE-13086
> URL: https://issues.apache.org/jira/browse/HBASE-13086
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11
>
> Attachments: 13068.jpg, HBASE-13068.00.patch, 
> HBASE-13086-0.98.addendum0.patch
>
>
> Currently we show a well-formed ZK quorum on the master webUI but not the 
> root node. Root node can be changed based on deployment, so we should list it 
> here explicitly. This information is helpful for folks playing around with 
> phoenix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13102) Fix Pseudo-distributed Mode which was broken in 1.0.0

2015-02-25 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337333#comment-14337333
 ] 

Enis Soztutar commented on HBASE-13102:
---

+1. 

> Fix Pseudo-distributed Mode which was broken in 1.0.0
> -
>
> Key: HBASE-13102
> URL: https://issues.apache.org/jira/browse/HBASE-13102
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0, 1.1.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.0.1, 1.1.0
>
> Attachments: HBASE-13102.patch
>
>
> {code}
> 2015-02-25 10:42:17,686 FATAL [main] regionserver.RSRpcServices: The hostname 
> of regionserver cannot be set to localhost in a fully-distributed setup 
> because it won't be reachable. See "Getting Started" for more information.
> 2015-02-25 10:42:17,687 ERROR [main] master.HMasterCommandLine: Master exiting
> java.lang.RuntimeException: Failed construction of Master: class 
> org.apache.hadoop.hbase.master.HMaster
>   at 
> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2051)
>   at 
> org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:198)
>   at 
> org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>   at 
> org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
>   at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2065)
> Caused by: java.io.IOException: The hostname of regionserver cannot be set to 
> localhost in a fully-distributed setup because it won't be reachable. See 
> "Getting Started" for more information.
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.<init>(RSRpcServices.java:793)
>   at 
> org.apache.hadoop.hbase.master.MasterRpcServices.<init>(MasterRpcServices.java:198)
>   at 
> org.apache.hadoop.hbase.master.HMaster.createRpcServices(HMaster.java:486)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:500)
>   at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:337)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>   at 
> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2046)
>   ... 5 more
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13091) Split ZK Quorum on Master WebUI

2015-02-25 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337340#comment-14337340
 ] 

Enis Soztutar commented on HBASE-13091:
---

Why don't we put a max width limit on the table column in HTML and let the 
browser deal with the splitting? We cannot do custom logic for every row.

> Split ZK Quorum on Master WebUI
> ---
>
> Key: HBASE-13091
> URL: https://issues.apache.org/jira/browse/HBASE-13091
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.1, 0.98.10.1
>Reporter: Jean-Marc Spaggiari
>Assignee: Jean-Marc Spaggiari
>Priority: Minor
> Attachments: HBASE-13091-v0-trunk.patch, HBASE-13091-v1-trunk.patch, 
> screenshot.png
>
>
> When using ZK servers or more, on the Master WebUI, this creates a very large 
> column and so reduces the others a lot, splitting all the lines and creating 
> tall cells.
> Splitting the ZK quorum with one per line will make it nicer and easier to 
> read.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11544) [Ergonomics] hbase.client.scanner.caching is dogged and will try to return batch even if it means OOME

2015-02-25 Thread Jonathan Lawlor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Lawlor updated HBASE-11544:

Status: Open  (was: Patch Available)

> [Ergonomics] hbase.client.scanner.caching is dogged and will try to return 
> batch even if it means OOME
> --
>
> Key: HBASE-11544
> URL: https://issues.apache.org/jira/browse/HBASE-11544
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: Jonathan Lawlor
>Priority: Critical
>  Labels: beginner
> Attachments: HBASE-11544-v1.patch, HBASE-11544-v2.patch, 
> HBASE-11544-v3.patch
>
>
> Running some tests, I set hbase.client.scanner.caching=1000.  Dataset has 
> large cells.  I kept OOME'ing.
> Serverside, we should measure how much we've accumulated and return to the 
> client whatever we've gathered once we pass out a certain size threshold 
> rather than keep accumulating till we OOME.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11544) [Ergonomics] hbase.client.scanner.caching is dogged and will try to return batch even if it means OOME

2015-02-25 Thread Jonathan Lawlor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Lawlor updated HBASE-11544:

Attachment: HBASE-11544-v5.patch

New patch to reflect the most recent feedback from ReviewBoard.

The failures that have been seen with respect to TestAcidGuarantees seem to be 
unrelated and have been called out in HBASE-13097.

One of the more significant changes that this patch introduces is a rework of 
the return type of InternalScanner#next(). Rather than simply returning a 
boolean, a state object is now returned. This allows callers of 
InternalScanner#next() to determine important state information about the 
scanner. It also helps us avoid unnecessary duplication of size calculations.
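
These are not the names used in the attached patch, but as a rough sketch of 
the kind of state object such a return type could carry:

{code}
// Illustrative sketch only: a possible replacement for the bare boolean
// returned by InternalScanner#next(), carrying both "more values?" and the
// size accumulated so far.
public final class NextState {
  public enum State { MORE_VALUES, NO_MORE_VALUES, SIZE_LIMIT_REACHED }

  private final State state;
  private final long accumulatedSize; // bytes gathered so far, so callers need not recompute

  public NextState(State state, long accumulatedSize) {
    this.state = state;
    this.accumulatedSize = accumulatedSize;
  }

  public boolean hasMoreValues() {
    // Stopping at the size limit still means the scanner has more values to return later.
    return state == State.MORE_VALUES || state == State.SIZE_LIMIT_REACHED;
  }

  public long getAccumulatedSize() {
    return accumulatedSize;
  }
}
{code}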

> [Ergonomics] hbase.client.scanner.caching is dogged and will try to return 
> batch even if it means OOME
> --
>
> Key: HBASE-11544
> URL: https://issues.apache.org/jira/browse/HBASE-11544
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: Jonathan Lawlor
>Priority: Critical
>  Labels: beginner
> Attachments: HBASE-11544-v1.patch, HBASE-11544-v2.patch, 
> HBASE-11544-v3.patch, HBASE-11544-v5.patch
>
>
> Running some tests, I set hbase.client.scanner.caching=1000.  Dataset has 
> large cells.  I kept OOME'ing.
> Serverside, we should measure how much we've accumulated and return to the 
> client whatever we've gathered once we pass out a certain size threshold 
> rather than keep accumulating till we OOME.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11544) [Ergonomics] hbase.client.scanner.caching is dogged and will try to return batch even if it means OOME

2015-02-25 Thread Jonathan Lawlor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Lawlor updated HBASE-11544:

Status: Patch Available  (was: Open)

> [Ergonomics] hbase.client.scanner.caching is dogged and will try to return 
> batch even if it means OOME
> --
>
> Key: HBASE-11544
> URL: https://issues.apache.org/jira/browse/HBASE-11544
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: Jonathan Lawlor
>Priority: Critical
>  Labels: beginner
> Attachments: HBASE-11544-v1.patch, HBASE-11544-v2.patch, 
> HBASE-11544-v3.patch, HBASE-11544-v5.patch
>
>
> Running some tests, I set hbase.client.scanner.caching=1000.  Dataset has 
> large cells.  I kept OOME'ing.
> Serverside, we should measure how much we've accumulated and return to the 
> client whatever we've gathered once we pass out a certain size threshold 
> rather than keep accumulating till we OOME.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11544) [Ergonomics] hbase.client.scanner.caching is dogged and will try to return batch even if it means OOME

2015-02-25 Thread Jonathan Lawlor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Lawlor updated HBASE-11544:

Status: Open  (was: Patch Available)

> [Ergonomics] hbase.client.scanner.caching is dogged and will try to return 
> batch even if it means OOME
> --
>
> Key: HBASE-11544
> URL: https://issues.apache.org/jira/browse/HBASE-11544
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: Jonathan Lawlor
>Priority: Critical
>  Labels: beginner
> Attachments: HBASE-11544-v1.patch, HBASE-11544-v2.patch, 
> HBASE-11544-v3.patch, HBASE-11544-v5.patch
>
>
> Running some tests, I set hbase.client.scanner.caching=1000.  Dataset has 
> large cells.  I kept OOME'ing.
> Serverside, we should measure how much we've accumulated and return to the 
> client whatever we've gathered once we pass out a certain size threshold 
> rather than keep accumulating till we OOME.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11544) [Ergonomics] hbase.client.scanner.caching is dogged and will try to return batch even if it means OOME

2015-02-25 Thread Jonathan Lawlor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Lawlor updated HBASE-11544:

Attachment: HBASE-11544-v4.patch

Whoops, wrong patch posted before... correct one here

> [Ergonomics] hbase.client.scanner.caching is dogged and will try to return 
> batch even if it means OOME
> --
>
> Key: HBASE-11544
> URL: https://issues.apache.org/jira/browse/HBASE-11544
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: Jonathan Lawlor
>Priority: Critical
>  Labels: beginner
> Attachments: HBASE-11544-v1.patch, HBASE-11544-v2.patch, 
> HBASE-11544-v3.patch, HBASE-11544-v4.patch
>
>
> Running some tests, I set hbase.client.scanner.caching=1000.  Dataset has 
> large cells.  I kept OOME'ing.
> Serverside, we should measure how much we've accumulated and return to the 
> client whatever we've gathered once we pass out a certain size threshold 
> rather than keep accumulating till we OOME.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11544) [Ergonomics] hbase.client.scanner.caching is dogged and will try to return batch even if it means OOME

2015-02-25 Thread Jonathan Lawlor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Lawlor updated HBASE-11544:

Attachment: (was: HBASE-11544-v5.patch)

> [Ergonomics] hbase.client.scanner.caching is dogged and will try to return 
> batch even if it means OOME
> --
>
> Key: HBASE-11544
> URL: https://issues.apache.org/jira/browse/HBASE-11544
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: Jonathan Lawlor
>Priority: Critical
>  Labels: beginner
> Attachments: HBASE-11544-v1.patch, HBASE-11544-v2.patch, 
> HBASE-11544-v3.patch, HBASE-11544-v4.patch
>
>
> Running some tests, I set hbase.client.scanner.caching=1000.  Dataset has 
> large cells.  I kept OOME'ing.
> Serverside, we should measure how much we've accumulated and return to the 
> client whatever we've gathered once we pass out a certain size threshold 
> rather than keep accumulating till we OOME.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11544) [Ergonomics] hbase.client.scanner.caching is dogged and will try to return batch even if it means OOME

2015-02-25 Thread Jonathan Lawlor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Lawlor updated HBASE-11544:

Status: Patch Available  (was: Open)

> [Ergonomics] hbase.client.scanner.caching is dogged and will try to return 
> batch even if it means OOME
> --
>
> Key: HBASE-11544
> URL: https://issues.apache.org/jira/browse/HBASE-11544
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: Jonathan Lawlor
>Priority: Critical
>  Labels: beginner
> Attachments: HBASE-11544-v1.patch, HBASE-11544-v2.patch, 
> HBASE-11544-v3.patch, HBASE-11544-v4.patch
>
>
> Running some tests, I set hbase.client.scanner.caching=1000.  Dataset has 
> large cells.  I kept OOME'ing.
> Serverside, we should measure how much we've accumulated and return to the 
> client whatever we've gathered once we pass out a certain size threshold 
> rather than keep accumulating till we OOME.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13101) RPC throttling to protect against malicious clients

2015-02-25 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337399#comment-14337399
 ] 

Andrew Purtell commented on HBASE-13101:


Yes, we could start with a backport of HBASE-11598 
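
As a strawman only (not HBASE-11598 and not any patch attached here), per-client 
throttling at the RPC layer could be keyed on the remote address; a sketch 
assuming Guava's RateLimiter is available on the classpath:

{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import com.google.common.util.concurrent.RateLimiter;

// Illustrative sketch only: throttle each client (keyed by remote address) to a
// fixed request rate before the call reaches the region server handlers.
public class PerClientThrottle {
  private final double permitsPerSecond;
  private final ConcurrentMap<String, RateLimiter> limiters = new ConcurrentHashMap<>();

  public PerClientThrottle(double permitsPerSecond) {
    this.permitsPerSecond = permitsPerSecond;
  }

  /** Returns true if the request may proceed, false if it should be rejected or queued. */
  public boolean admit(String clientAddress) {
    RateLimiter limiter = limiters.get(clientAddress);
    if (limiter == null) {
      RateLimiter created = RateLimiter.create(permitsPerSecond);
      RateLimiter raced = limiters.putIfAbsent(clientAddress, created);
      limiter = (raced == null) ? created : raced;
    }
    return limiter.tryAcquire();
  }
}
{code}

A real implementation would also need to decide whether rejected calls fail 
fast or queue, and how per-client limits interact with the existing quota work.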

> RPC throttling to protect against malicious clients
> ---
>
> Key: HBASE-13101
> URL: https://issues.apache.org/jira/browse/HBASE-13101
> Project: HBase
>  Issue Type: Brainstorming
>  Components: regionserver
>Reporter: Nick Dimiduk
>
> We should protect a region server from poorly designed/implemented 
> clients/schemas that result in a "hotspot" which overwhelms a single machine. 
> A client that creates a new connection for each request is an example of this 
> case, where META gets completely flooded and kills the RS. Master diligently 
> brings it up on another host, which sends the traffic along to the next 
> victim, and will slowly bring down the whole cluster.
> My suggestion is rate-limiting per client, implemented at the RPC level, but 
> I'm looking for other suggestions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13098) HBase Connection Control

2015-02-25 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13098:
---
Fix Version/s: (was: 0.98.11)
   (was: 1.1.0)
   (was: 2.0.0)
Affects Version/s: (was: 0.98.10)
   Status: Open  (was: Patch Available)

> HBase Connection Control
> 
>
> Key: HBASE-13098
> URL: https://issues.apache.org/jira/browse/HBASE-13098
> Project: HBase
>  Issue Type: New Feature
>  Components: security
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Attachments: HBASE-13098.patch, HBase Connection Control.pdf
>
>
> It is desirable to set the limit on the number of client connections 
> permitted to the HBase server by controlling with certain system 
> variables/parameters. Too many connections to the HBase server imply too many 
> queries and MR jobs running on HBase. This can slow down the performance of 
> the system and lead to denial of service. Hence such connections need to be 
> controlled. Using too many connections may just cause thrashing rather than 
> get more useful work done.
> This is kind of inspired by 
> http://www.ebaytechblog.com/2014/08/21/quality-of-service-in-hadoop/#.VO2JXXyUe9y



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13098) HBase Connection Control

2015-02-25 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337409#comment-14337409
 ] 

Andrew Purtell commented on HBASE-13098:


We already have a hierarchy of RPC connection controllers descending from the 
{{RpcController}} interface, pluggable via {{RpcControllerFactories}}, and in 
use by client apps such as Apache Phoenix. Can this be implemented within that 
framework? I skimmed the patch and the {{ConnectionControl}} concept seems 
similar in some respects (controlling RPC) but more limited in others (can only 
accept or reject connections). 

> HBase Connection Control
> 
>
> Key: HBASE-13098
> URL: https://issues.apache.org/jira/browse/HBASE-13098
> Project: HBase
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 0.98.10
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 1.1.0, 0.98.11
>
> Attachments: HBASE-13098.patch, HBase Connection Control.pdf
>
>
> It is desirable to set the limit on the number of client connections 
> permitted to the HBase server by controlling with certain system 
> variables/parameters. Too many connections to the HBase server imply too many 
> queries and MR jobs running on HBase. This can slow down the performance of 
> the system and lead to denial of service. Hence such connections need to be 
> controlled. Using too many connections may just cause thrashing rather than 
> get more useful work done.
> This is kind of inspired by 
> http://www.ebaytechblog.com/2014/08/21/quality-of-service-in-hadoop/#.VO2JXXyUe9y



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13086) Show ZK root node on Master WebUI

2015-02-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14337410#comment-14337410
 ] 

Hudson commented on HBASE-13086:


SUCCESS: Integrated in HBase-0.98 #872 (See 
[https://builds.apache.org/job/HBase-0.98/872/])
HBASE-13086 Show ZK root node on Master WebUI (addendum) (ndimiduk: rev 
58c1c7434f22b5a2a923de1d6504df6c061885ee)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestMasterStatusServlet.java


> Show ZK root node on Master WebUI
> -
>
> Key: HBASE-13086
> URL: https://issues.apache.org/jira/browse/HBASE-13086
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Minor
> Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11
>
> Attachments: 13068.jpg, HBASE-13068.00.patch, 
> HBASE-13086-0.98.addendum0.patch
>
>
> Currently we show a well-formed ZK quorum on the master webUI but not the 
> root node. Root node can be changed based on deployment, so we should list it 
> here explicitly. This information is helpful for folks playing around with 
> phoenix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13100) Shell command to retrieve table splits

2015-02-25 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13100:
---
Fix Version/s: 2.0.0

> Shell command to retrieve table splits
> --
>
> Key: HBASE-13100
> URL: https://issues.apache.org/jira/browse/HBASE-13100
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Reporter: Sean Busbey
>Priority: Minor
>  Labels: beginner
> Fix For: 2.0.0, 1.1.0
>
>
> Add a shell command that returns the splits for a table.
> Doing this yourself is currently possible, but involves going outside of the 
> public api.
> {code}
> jruby-1.7.3 :012 > create 'example_table', 'f1', SPLITS => ["10", "20", "30", 
> "40"]
> 0 row(s) in 0.5500 seconds
>  => Hbase::Table - example_table 
> jruby-1.7.3 :013 > 
> get_table('example_table').table.get_all_region_locations.map do |location| 
> org.apache.hadoop.hbase.util.Bytes::toStringBinary(location.get_region_info.get_start_key)
>  end
> 0 row(s) in 0.0130 seconds
>  => ["", "10", "20", "30", "40"] 
> jruby-1.7.3 :014 > 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

