[jira] [Created] (HBASE-19592) Add UTs to test retry on update zk failure

2017-12-21 Thread Duo Zhang (JIRA)
Duo Zhang created HBASE-19592:
-

 Summary: Add UTs to test retry on update zk failure
 Key: HBASE-19592
 URL: https://issues.apache.org/jira/browse/HBASE-19592
 Project: HBase
  Issue Type: Sub-task
  Components: proc-v2, Replication
Reporter: Duo Zhang






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HBASE-15124) Document the new 'normalization' feature in refguid

2017-12-21 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-15124.
---
   Resolution: Fixed
Fix Version/s: (was: 1.4.1)
   (was: 1.3.2)
   (was: 2.0.0)
   3.0.0

Pushed this out to the master branch. I can't make the site build locally. Will 
check it when it is deployed at h.a.o.

> Document the new 'normalization' feature in refguid
> ---
>
> Key: HBASE-15124
> URL: https://issues.apache.org/jira/browse/HBASE-15124
> Project: HBase
>  Issue Type: Task
>  Components: documentation
>Affects Versions: 1.3.0
>Reporter: stack
>Assignee: Romil Choksi
>Priority: Critical
> Fix For: 3.0.0
>
> Attachments: HBASE-15124.master.001.patch
>
>
> A nice new feature is coming in to 1.2.0, normalization. A small bit of doc 
> on it in refguide would help.
> Should define what normalization is.
> Should say a sentence or two on how it works and when.
> Throw in the output of shell commands.
> A paragraph or so. I can help.
> Marking critical against 1.2.0. Not a blocker.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-19591) Cleanup the usage of ReplicationAdmin from hbase-shell

2017-12-21 Thread Guanghao Zhang (JIRA)
Guanghao Zhang created HBASE-19591:
--

 Summary: Cleanup the usage of ReplicationAdmin from hbase-shell
 Key: HBASE-19591
 URL: https://issues.apache.org/jira/browse/HBASE-19591
 Project: HBase
  Issue Type: Sub-task
Reporter: Guanghao Zhang
Assignee: Guanghao Zhang






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Reopened] (HBASE-19588) Additional jar dependencies needed for PerformanceEvaluation?

2017-12-21 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack reopened HBASE-19588:
---

Good point [~chu11] Reopened Sir.

> Additional jar dependencies needed for PerformanceEvaluation?
> -
>
> Key: HBASE-19588
> URL: https://issues.apache.org/jira/browse/HBASE-19588
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 1.4.0
>Reporter: Albert Chu
>Priority: Minor
>
> I have a unit test that runs a simple PerformanceEvaluation test to make sure 
> things are basically working
> {noformat}
> bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --rows=5 
> sequentialWrite 1
> {noformat}
> This test runs against Hadoop 2.7.0 and works against all past versions 
> 0.99.0 and up.  It broke with 1.4.0 with the following error.
> {noformat}
> 2017-12-21 13:49:40,974 INFO  [main] mapreduce.Job: Task Id : 
> attempt_1513892752187_0002_m_04_2, Status : FAILED
> Error: java.io.IOException: java.lang.reflect.InvocationTargetException
>   at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:240)
>   at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:218)
>   at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:119)
>   at 
> org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.map(PerformanceEvaluation.java:297)
>   at 
> org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.map(PerformanceEvaluation.java:250)
>   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>   at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
>   ... 12 more
> Caused by: java.lang.RuntimeException: Could not create  interface 
> org.apache.hadoop.hbase.zookeeper.MetricsZooKeeperSource Is the hadoop 
> compatibility jar on the classpath?
>   at 
> org.apache.hadoop.hbase.CompatibilitySingletonFactory.getInstance(CompatibilitySingletonFactory.java:75)
>   at 
> org.apache.hadoop.hbase.zookeeper.MetricsZooKeeper.(MetricsZooKeeper.java:38)
>   at 
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.(RecoverableZooKeeper.java:130)
>   at org.apache.hadoop.hbase.zookeeper.ZKUtil.connect(ZKUtil.java:143)
>   at 
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.(ZooKeeperWatcher.java:181)
>   at 
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.(ZooKeeperWatcher.java:155)
>   at 
> org.apache.hadoop.hbase.client.ZooKeeperKeepAliveConnection.(ZooKeeperKeepAliveConnection.java:43)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveZooKeeperWatcher(ConnectionManager.java:1737)
>   at 
> org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:104)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:945)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.(ConnectionManager.java:721)
>   ... 17 more
> Caused by: java.util.ServiceConfigurationError: 
> org.apache.hadoop.hbase.zookeeper.MetricsZooKeeperSource: Provider 
> org.apache.hadoop.hbase.zookeeper.MetricsZooKeeperSourceImpl could not be 
> instantiated
>   at java.util.ServiceLoader.fail(ServiceLoader.java:224)
>   at java.util.ServiceLoader.access$100(ServiceLoader.java:181)
>   at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:377)
>   at java.util.ServiceLoader$1.next(ServiceLoader.java:445)
>   at 
> org.apache.hadoop.hbase.CompatibilitySingletonFactory.getInstance(CompatibilitySingletonFactory.java:59)
>   ... 27 more
> Caused by: java.lang.NoClassDefFoundError: 
> Lorg/apache/hado

[jira] [Resolved] (HBASE-17248) SimpleRegionNormalizer javadoc correction

2017-12-21 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-17248.
---
   Resolution: Fixed
 Assignee: Daisuke Kobayashi
 Hadoop Flags: Reviewed
Fix Version/s: 2.0.0-beta-1

Pushed to branch-2 and master. Thanks for the cleanup [~daisuke.kobayashi]

> SimpleRegionNormalizer javadoc correction
> -
>
> Key: HBASE-17248
> URL: https://issues.apache.org/jira/browse/HBASE-17248
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: Daisuke Kobayashi
>Assignee: Daisuke Kobayashi
>Priority: Trivial
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-17248.patch
>
>
> SimpleRegionNormalizer has been revised since it was implemented first. The 
> core behavior is also changed especially per HBASE-15065 
> (SimpleRegionNormalizer should return multiple normalization plans in one 
> run). Current javadoc still says normalizer does just one split or merge per 
> a plan, though. My small patch corrects this.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [ANNOUNCE] Please welcome new HBase committer YI Liang

2017-12-21 Thread Misty Stanley-Jones
Thank you for your continuing contributions!

On Dec 20, 2017 4:06 PM, "Jerry He"  wrote:

> On behalf of the Apache HBase PMC, I am pleased to announce that
> Yi Liang has accepted the PMC's invitation to become a committer
> on the project.
>
> We appreciate all of Yi's great work thus far and look forward to
> his continued involvement.
>
> Please join me in congratulating Yi!
>
> --
> Thanks,
> Jerry
>


[jira] [Resolved] (HBASE-10092) Move to slf4j

2017-12-21 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy resolved HBASE-10092.
--
Resolution: Fixed

> Move to slf4j
> -
>
> Key: HBASE-10092
> URL: https://issues.apache.org/jira/browse/HBASE-10092
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: Balazs Meszaros
>Priority: Critical
> Fix For: 2.0.0-beta-1
>
> Attachments: 10092.txt, 10092v2.txt, 
> HBASE-10092-addendum-2.master.002.patch, HBASE-10092-addendum.patch, 
> HBASE-10092-preview-v0.patch, HBASE-10092.master.001.patch, 
> HBASE-10092.master.002.patch, HBASE-10092.master.003.patch, 
> HBASE-10092.master.004.patch, HBASE-10092.patch
>
>
> Allows logging with less friction.  See http://logging.apache.org/log4j/2.x/  
> This rather radical transition can be done w/ minor change given they have an 
> adapter for apache's logging, the one we use.  They also have and adapter for 
> slf4j so we likely can remove at least some of the 4 versions of this module 
> our dependencies make use of.
> I made a start in attached patch but am currently stuck in maven dependency 
> resolve hell courtesy of our slf4j.  Fixing will take some concentration and 
> a good net connection, an item I currently lack.  Other TODOs are that will 
> need to fix our little log level setting jsp page -- will likely have to undo 
> our use of hadoop's tool here -- and the config system changes a little.
> I will return to this project soon.  Will bring numbers.
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Reopened] (HBASE-10092) Move to slf4j

2017-12-21 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy reopened HBASE-10092:
--

> Move to slf4j
> -
>
> Key: HBASE-10092
> URL: https://issues.apache.org/jira/browse/HBASE-10092
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: Balazs Meszaros
>Priority: Critical
> Fix For: 2.0.0-beta-1
>
> Attachments: 10092.txt, 10092v2.txt, HBASE-10092-addendum.patch, 
> HBASE-10092-preview-v0.patch, HBASE-10092.master.001.patch, 
> HBASE-10092.master.002.patch, HBASE-10092.master.003.patch, 
> HBASE-10092.master.004.patch, HBASE-10092.patch
>
>
> Allows logging with less friction.  See http://logging.apache.org/log4j/2.x/  
> This rather radical transition can be done w/ minor change given they have an 
> adapter for apache's logging, the one we use.  They also have and adapter for 
> slf4j so we likely can remove at least some of the 4 versions of this module 
> our dependencies make use of.
> I made a start in attached patch but am currently stuck in maven dependency 
> resolve hell courtesy of our slf4j.  Fixing will take some concentration and 
> a good net connection, an item I currently lack.  Other TODOs are that will 
> need to fix our little log level setting jsp page -- will likely have to undo 
> our use of hadoop's tool here -- and the config system changes a little.
> I will return to this project soon.  Will bring numbers.
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-19590) Remove the duplicate code in deprecated ReplicationAdmin

2017-12-21 Thread Guanghao Zhang (JIRA)
Guanghao Zhang created HBASE-19590:
--

 Summary: Remove the duplicate code in deprecated ReplicationAdmin
 Key: HBASE-19590
 URL: https://issues.apache.org/jira/browse/HBASE-19590
 Project: HBase
  Issue Type: Improvement
Reporter: Guanghao Zhang
Assignee: Guanghao Zhang
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-19589) New regions should always be added with state CLOSED

2017-12-21 Thread Appy (JIRA)
Appy created HBASE-19589:


 Summary: New regions should always be added with state CLOSED
 Key: HBASE-19589
 URL: https://issues.apache.org/jira/browse/HBASE-19589
 Project: HBase
  Issue Type: Bug
Reporter: Appy
Assignee: Appy


Followup of HBASE-19530.
Looks like it missed a code path.
Looked deeper into MetaTableAccessor and the usages of some of its functions to 
correctly set the initial state to CLOSED.
Other things:
- Removed unused functions.
- Changed logging to parameterized



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HBASE-19588) Additional jar dependencies needed for PerformanceEvaluation?

2017-12-21 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-19588.
---
Resolution: Invalid

> Additional jar dependencies needed for PerformanceEvaluation?
> -
>
> Key: HBASE-19588
> URL: https://issues.apache.org/jira/browse/HBASE-19588
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 1.4.0
>Reporter: Albert Chu
>Priority: Minor
>
> I have a unit test that runs a simple PerformanceEvaluation test to make sure 
> things are basically working
> {noformat}
> bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --rows=5 
> sequentialWrite 1
> {noformat}
> This test runs against Hadoop 2.7.0 and works against all past versions 
> 0.99.0 and up.  It broke with 1.4.0 with the following error.
> {noformat}
> 2017-12-21 13:49:40,974 INFO  [main] mapreduce.Job: Task Id : 
> attempt_1513892752187_0002_m_04_2, Status : FAILED
> Error: java.io.IOException: java.lang.reflect.InvocationTargetException
>   at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:240)
>   at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:218)
>   at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:119)
>   at 
> org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.map(PerformanceEvaluation.java:297)
>   at 
> org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.map(PerformanceEvaluation.java:250)
>   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: java.lang.reflect.InvocationTargetException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
>   at 
> org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
>   ... 12 more
> Caused by: java.lang.RuntimeException: Could not create  interface 
> org.apache.hadoop.hbase.zookeeper.MetricsZooKeeperSource Is the hadoop 
> compatibility jar on the classpath?
>   at 
> org.apache.hadoop.hbase.CompatibilitySingletonFactory.getInstance(CompatibilitySingletonFactory.java:75)
>   at 
> org.apache.hadoop.hbase.zookeeper.MetricsZooKeeper.(MetricsZooKeeper.java:38)
>   at 
> org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.(RecoverableZooKeeper.java:130)
>   at org.apache.hadoop.hbase.zookeeper.ZKUtil.connect(ZKUtil.java:143)
>   at 
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.(ZooKeeperWatcher.java:181)
>   at 
> org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.(ZooKeeperWatcher.java:155)
>   at 
> org.apache.hadoop.hbase.client.ZooKeeperKeepAliveConnection.(ZooKeeperKeepAliveConnection.java:43)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveZooKeeperWatcher(ConnectionManager.java:1737)
>   at 
> org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:104)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:945)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.(ConnectionManager.java:721)
>   ... 17 more
> Caused by: java.util.ServiceConfigurationError: 
> org.apache.hadoop.hbase.zookeeper.MetricsZooKeeperSource: Provider 
> org.apache.hadoop.hbase.zookeeper.MetricsZooKeeperSourceImpl could not be 
> instantiated
>   at java.util.ServiceLoader.fail(ServiceLoader.java:224)
>   at java.util.ServiceLoader.access$100(ServiceLoader.java:181)
>   at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:377)
>   at java.util.ServiceLoader$1.next(ServiceLoader.java:445)
>   at 
> org.apache.hadoop.hbase.CompatibilitySingletonFactory.getInstance(CompatibilitySingletonFactory.java:59)
>   ... 27 more
> Caused by: java.lang.NoClassDefFoundError: 
> Lorg/apache/hadoop/hbase/me

[jira] [Created] (HBASE-19588) Additional jar dependencies needed for PerformanceEvaluation?

2017-12-21 Thread Albert Chu (JIRA)
Albert Chu created HBASE-19588:
--

 Summary: Additional jar dependencies needed for 
PerformanceEvaluation?
 Key: HBASE-19588
 URL: https://issues.apache.org/jira/browse/HBASE-19588
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 1.4.0
Reporter: Albert Chu
Priority: Minor


I have a unit test that runs a simple PerformanceEvaluation test to make sure 
things are basically working

{noformat}
bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation --rows=5 
sequentialWrite 1
{noformat}

This test runs against Hadoop 2.7.0 and works against all past versions 0.99.0 
and up.  It broke with 1.4.0 with the following error.

{noformat}
2017-12-21 13:49:40,974 INFO  [main] mapreduce.Job: Task Id : 
attempt_1513892752187_0002_m_04_2, Status : FAILED
Error: java.io.IOException: java.lang.reflect.InvocationTargetException
at 
org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:240)
at 
org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:218)
at 
org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:119)
at 
org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.map(PerformanceEvaluation.java:297)
at 
org.apache.hadoop.hbase.PerformanceEvaluation$EvaluationMapTask.map(PerformanceEvaluation.java:250)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at 
org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
... 12 more
Caused by: java.lang.RuntimeException: Could not create  interface 
org.apache.hadoop.hbase.zookeeper.MetricsZooKeeperSource Is the hadoop 
compatibility jar on the classpath?
at 
org.apache.hadoop.hbase.CompatibilitySingletonFactory.getInstance(CompatibilitySingletonFactory.java:75)
at 
org.apache.hadoop.hbase.zookeeper.MetricsZooKeeper.<init>(MetricsZooKeeper.java:38)
at 
org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.<init>(RecoverableZooKeeper.java:130)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.connect(ZKUtil.java:143)
at 
org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:181)
at 
org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.<init>(ZooKeeperWatcher.java:155)
at 
org.apache.hadoop.hbase.client.ZooKeeperKeepAliveConnection.<init>(ZooKeeperKeepAliveConnection.java:43)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveZooKeeperWatcher(ConnectionManager.java:1737)
at 
org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:104)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:945)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:721)
... 17 more
Caused by: java.util.ServiceConfigurationError: 
org.apache.hadoop.hbase.zookeeper.MetricsZooKeeperSource: Provider 
org.apache.hadoop.hbase.zookeeper.MetricsZooKeeperSourceImpl could not be 
instantiated
at java.util.ServiceLoader.fail(ServiceLoader.java:224)
at java.util.ServiceLoader.access$100(ServiceLoader.java:181)
at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:377)
at java.util.ServiceLoader$1.next(ServiceLoader.java:445)
at 
org.apache.hadoop.hbase.CompatibilitySingletonFactory.getInstance(CompatibilitySingletonFactory.java:59)
... 27 more
Caused by: java.lang.NoClassDefFoundError: 
Lorg/apache/hadoop/hbase/metrics/MetricRegistry;
at java.lang.Class.getDeclaredFields0(Native Method)
at java.lang.Class.privateGetDeclaredFields(Class.java:2509)
at java.lang.Class.getDeclaredFields(Class.java:1819)
at 
org.apache.hadoop.util.ReflectionUtils.getDeclared

[jira] [Resolved] (HBASE-19585) [WAL] "Unhandled: Bad type on operand stack" PBHelperClient.convert failing on HdfsProtos$ContentSummaryProto

2017-12-21 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-19585.
---
Resolution: Not A Problem

Resolving as not a problem... 

> [WAL] "Unhandled: Bad type on operand stack" PBHelperClient.convert failing 
> on HdfsProtos$ContentSummaryProto
> -
>
> Key: HBASE-19585
> URL: https://issues.apache.org/jira/browse/HBASE-19585
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>
> Testing, RS crashes soon after startup with below cryptic mess. This is the 
> branch-2 started over a 0.98 data. [~Apache9] You have a clue sir?
> {code}
> 595126 2017-12-21 13:09:38,058 INFO  
> [regionserver/ve0528.halxg.cloudera.com/10.17.240.22:16020] 
> wal.AbstractFSWAL: WAL configuration: blocksize=128 MB, rollsize=121.60 MB, 
> prefi   x=ve0528.halxg.cloudera.com%2C16020%2C1513890565537, suffix=, 
> logDir=hdfs://ve0524.halxg.cloudera.com:8020/hbase/WALs/ve0528.halxg.cloudera.com,16020,1513890565537,
>  archiv   eDir=hdfs://ve0524.halxg.cloudera.com:8020/hbase/oldWALs
> 595127 2017-12-21 13:09:38,107 ERROR 
> [regionserver/ve0528.halxg.cloudera.com/10.17.240.22:16020] 
> regionserver.HRegionServer: * ABORTING region server 
> ve0528.halxg.cloudera.co   m,16020,1513890565537: Unhandled: Bad type on 
> operand stack
> 595128 Exception Details:
> 595129   Location:
> 595130 
> org/apache/hadoop/hdfs/protocolPB/PBHelperClient.convert(Lorg/apache/hadoop/hdfs/protocol/proto/HdfsProtos$ContentSummaryProto;)Lorg/apache/hadoop/fs/ContentSummary;
>  @   98: invokestatic
> 595131   Reason:
> 595132 Type 'org/apache/hadoop/fs/ContentSummary$Builder' (current frame, 
> stack[1]) is not assignable to 'org/apache/hadoop/fs/QuotaUsage$Builder'
> 595133   Current Frame:
> 595134 bci: @98
> 595135 flags: { }
> 595136 locals: { 
> 'org/apache/hadoop/hdfs/protocol/proto/HdfsProtos$ContentSummaryProto', 
> 'org/apache/hadoop/fs/ContentSummary$Builder' }
> 595137 stack: { 
> 'org/apache/hadoop/hdfs/protocol/proto/HdfsProtos$StorageTypeQuotaInfosProto',
>  'org/apache/hadoop/fs/ContentSummary$Builder' }
> 595138   Bytecode:
> 595139 0x000: 2ac7 0005 01b0 bb03 3159 b703 324c 2b2a
> 595140 0x010: b603 33b6 0334 2ab6 0335 b603 362a b603
> 595141 0x020: 37b6 0338 2ab6 0339 b603 3a2a b603 3bb6
> 595142 0x030: 033c 2ab6 033d b603 3e2a b603 3fb6 0340
> 595143 0x040: 2ab6 0341 b603 422a b603 43b6 0344 2ab6
> 595144 0x050: 0345 b603 4657 2ab6 0347 9900 0b2a b603
> 595145 0x060: 482b b803 492b b603 4ab0
> 595146   Stackmap Table:
> 595147 same_frame(@6)
> 595148 append_frame(@101,Object[#2126])
> 595149  *
> {code}
> 2.8.2 hadoop.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-19587) [slf4j] "Class path contains multiple SLF4J binding" complaint

2017-12-21 Thread stack (JIRA)
stack created HBASE-19587:
-

 Summary: [slf4j] "Class path contains multiple SLF4J binding" 
complaint
 Key: HBASE-19587
 URL: https://issues.apache.org/jira/browse/HBASE-19587
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 2.0.0
Reporter: stack


I get the below when starting a cluster in distributed mode.

$ ./hbase/bin/start-hbase.sh --config ~/conf_hbase
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/home/stack/hbase-2.0.0-beta-1-SNAPSHOT/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/home/stack/hadoop-2.8.2/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/home/stack/hbase-2.0.0-beta-1-SNAPSHOT/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/home/stack/hadoop-2.8.2/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]


I can make it go away if I do the following in hbase-env.sh:

export HBASE_DISABLE_HADOOP_CLASSPATH_LOOKUP=true

...which disables including hadoop jars on our classpath, a facility that came 
in years ago w/ this commit.

tree b7e93ae4c62f22e5d4ce221cafaaee293c7b67ce
parent d1619bceb3142f3ab8c134365e18a150fbd5b9bf
author Enis Soztutar  Fri Feb 27 16:27:40 2015 -0800
committer Enis Soztutar  Fri Feb 27 16:27:40 2015 -0800

HBASE-13120 Allow disabling hadoop classpath and native library lookup 
(Siddharth Wagle)

Adding hadoop to our CLASSPATH has been around for years:

tree eb5ee09ac9894d264f1a2d1653f5f5eb6684f2fb
parent d2fb2d5e2494834947799c5f4fbd72955e7fdba1
author Michael Stack  Sat Mar 3 16:47:55 2012 +
committer Michael Stack  Sat Mar 3 16:47:55 2012 +

HBASE-5286 bin/hbase's logic of adding Hadoop jar files to the classpath is 
fragile when presented with split packaged Hadoop 0.23 installation





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-19586) Figure how to enable compression by default (fallbacks if native is missing, etc.)

2017-12-21 Thread stack (JIRA)
stack created HBASE-19586:
-

 Summary: Figure how to enable compression by default (fallbacks if 
native is missing, etc.)
 Key: HBASE-19586
 URL: https://issues.apache.org/jira/browse/HBASE-19586
 Project: HBase
  Issue Type: Sub-task
Reporter: stack
 Fix For: 2.0.0-beta-2


See the parent issue, where the benefits of enabling compression are brought up 
(again!). Figure out how we can make it work out of the box rather than expecting 
the user to set it up. Parking this issue to look at it before we release 2.0.0.
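For context, below is a minimal sketch of what a user currently has to do, per
column family, with the 2.0 client API (the table name 't1', family 'f1', and the
choice of SNAPPY are illustrative only; error handling is omitted):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.io.compress.Compression;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateCompressedTable {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // Compression has to be requested explicitly, per column family.
      admin.createTable(TableDescriptorBuilder.newBuilder(TableName.valueOf("t1"))
          .setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("f1"))
              .setCompressionType(Compression.Algorithm.SNAPPY)
              .build())
          .build());
    }
  }
}
{code}

If the chosen codec's native library is missing at runtime, this fails (see the
"native snappy library not available" error quoted in the related discussion),
which is the fallback problem an out-of-the-box default would have to solve.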



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-19585) [WAL] "Unhandled: Bad type on operand stack" PBHelperClient.convert failing on HdfsProtos$ContentSummaryProto

2017-12-21 Thread stack (JIRA)
stack created HBASE-19585:
-

 Summary: [WAL] "Unhandled: Bad type on operand stack" 
PBHelperClient.convert failing on HdfsProtos$ContentSummaryProto
 Key: HBASE-19585
 URL: https://issues.apache.org/jira/browse/HBASE-19585
 Project: HBase
  Issue Type: Bug
Reporter: stack


Testing, the RS crashes soon after startup with the cryptic mess below. This is 
branch-2 started over 0.98 data. [~Apache9] You have a clue, sir?

{code}
595126 2017-12-21 13:09:38,058 INFO  
[regionserver/ve0528.halxg.cloudera.com/10.17.240.22:16020] wal.AbstractFSWAL: 
WAL configuration: blocksize=128 MB, rollsize=121.60 MB, 
prefix=ve0528.halxg.cloudera.com%2C16020%2C1513890565537, suffix=, 
logDir=hdfs://ve0524.halxg.cloudera.com:8020/hbase/WALs/ve0528.halxg.cloudera.com,16020,1513890565537, 
archiveDir=hdfs://ve0524.halxg.cloudera.com:8020/hbase/oldWALs
595127 2017-12-21 13:09:38,107 ERROR 
[regionserver/ve0528.halxg.cloudera.com/10.17.240.22:16020] 
regionserver.HRegionServer: * ABORTING region server 
ve0528.halxg.cloudera.com,16020,1513890565537: Unhandled: Bad type on 
operand stack
595128 Exception Details:
595129   Location:
595130 
org/apache/hadoop/hdfs/protocolPB/PBHelperClient.convert(Lorg/apache/hadoop/hdfs/protocol/proto/HdfsProtos$ContentSummaryProto;)Lorg/apache/hadoop/fs/ContentSummary;
 @   98: invokestatic
595131   Reason:
595132 Type 'org/apache/hadoop/fs/ContentSummary$Builder' (current frame, 
stack[1]) is not assignable to 'org/apache/hadoop/fs/QuotaUsage$Builder'
595133   Current Frame:
595134 bci: @98
595135 flags: { }
595136 locals: { 
'org/apache/hadoop/hdfs/protocol/proto/HdfsProtos$ContentSummaryProto', 
'org/apache/hadoop/fs/ContentSummary$Builder' }
595137 stack: { 
'org/apache/hadoop/hdfs/protocol/proto/HdfsProtos$StorageTypeQuotaInfosProto', 
'org/apache/hadoop/fs/ContentSummary$Builder' }
595138   Bytecode:
595139 0x000: 2ac7 0005 01b0 bb03 3159 b703 324c 2b2a
595140 0x010: b603 33b6 0334 2ab6 0335 b603 362a b603
595141 0x020: 37b6 0338 2ab6 0339 b603 3a2a b603 3bb6
595142 0x030: 033c 2ab6 033d b603 3e2a b603 3fb6 0340
595143 0x040: 2ab6 0341 b603 422a b603 43b6 0344 2ab6
595144 0x050: 0345 b603 4657 2ab6 0347 9900 0b2a b603
595145 0x060: 482b b803 492b b603 4ab0
595146   Stackmap Table:
595147 same_frame(@6)
595148 append_frame(@101,Object[#2126])
595149  *
{code}

2.8.2 hadoop.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [VOTE] hbase-thirdparty 2.0.0 RC

2017-12-21 Thread Mike Drob
Reminder to folks that this RC still needs another binding PMC vote on it.

I have filed jira issues HBASE-19560 and HBASE-19584 to address the
concerns raised so far.

Thanks,
Mike


On Wed, Dec 20, 2017 at 12:13 PM, Josh Elser  wrote:

> +1
>
> * L&N not entirely accurate, IMO. They state that things are included in
> the src release which are not. I think it would be more appropriate to push
> the relevant information down into src/main/apppended-resources for each
> module (e.g. hbase-shaded-protobuf would have 
> src/main/appended-resources/{LICENSE,NOTICE})
> which have the relevant L&N content for the products being bundled. Thus,
> we'd have nothing in the 3rdparty L&N which reflects the src release.
> * Ran into the contents of the src archive being dumped into the CWD
> despite seeing Stack's note about it. (mvn assembly:single seems to do it
> right though)
> * xsums/sigs OK
>
>
> On 12/19/17 4:03 PM, Mike Drob wrote:
>
>> HBase Devs,
>>
>> In preparation for our hbase-2.0.0-beta releases, it would be beneficial
>> to
>> have updated third-party artifacts.
>>
>> These artifacts update the version of netty to 4.1.17 (from 4.1.12) and
>> change the relocation offset to o.a.h.thirdparty to prevent conflicts with
>> our other shaded artifacts (relocated during the build).
>>
>> This artifact was tested locally against current hbase master branch
>> with HBASE-19552 applied.
>>
>> Source artifact, signatures, and checksums are available at
>> https://dist.apache.org/repos/dist/dev/hbase/hbase-thirdparty/2.0.0RC0/
>>
>> Signed git commit for the release candidate available at
>> https://git-wip-us.apache.org/repos/asf?p=hbase-thirdparty.g
>> it;a=commit;h=
>> 2b55dc792196a12d9a365a758a518f26c459391e
>>
>> Maven repository available at
>> https://repository.apache.org/content/repositories/orgapachehbase-1187
>>
>> This vote will remain open for at least 72 hours. Please review the
>> artifacts and cast your votes!
>>
>> Here's my +1 (non-binding)
>>
>> Thanks,
>> Mike
>>
>>


Re: [ANNOUNCE] Please welcome new HBase committer YI Liang

2017-12-21 Thread Andrew Purtell
Congratulations and welcome, Yi.


On Wed, Dec 20, 2017 at 4:06 PM, Jerry He  wrote:

> On behalf of the Apache HBase PMC, I am pleased to announce that
> Yi Liang has accepted the PMC's invitation to become a committer
> on the project.
>
> We appreciate all of Yi's great work thus far and look forward to
> his continued involvement.
>
> Please join me in congratulating Yi!
>
> --
> Thanks,
> Jerry
>



-- 
Best regards,
Andrew

Words like orphans lost among the crosstalk, meaning torn from truth's
decrepit hands
   - A23, Crosstalk


Re: [DISCUSSION] Default configurations in hbase-2.0.0 hbase-default.xml

2017-12-21 Thread Andrew Purtell
My point being: if it is too slow for many use cases (like bzip2 and
zip/deflate) and requires a native library, then we can't enable it by
default. If we can find a fast LZ variant that has a pure java fallback, we
could -- maybe, assuming it does produce the expected benefits without
hurting performance.



On Thu, Dec 21, 2017 at 12:32 PM, Andrew Purtell 
wrote:

> No, that's not good either. I mean something like a fast LZ variant.
>
>
> On Thu, Dec 21, 2017 at 12:10 AM, C Reid  wrote:
>
>> .gz (GzipCodec) or .deflate (DeflateCodec).
>>
>> 
>> From: saint@gmail.com  on behalf of Stack <
>> st...@duboce.net>
>> Sent: 21 December 2017 15:01:21
>> To: HBase Dev List
>> Subject: Re: [DISCUSSION] Default configurations in hbase-2.0.0
>> hbase-default.xml
>>
>> On Tue, Dec 19, 2017 at 8:18 PM, Andrew Purtell > >
>> wrote:
>>
>> > Is there an option with a pure Java fallback if the native codec isn't
>> > available? I mean something reasonable, not bzip2.
>> >
>> >
>> >
>> Yeah, what Andrew says...
>> S
>>
>>
>>
>>
>> > > On Dec 19, 2017, at 6:16 PM, Dave Latham  wrote:
>> > >
>> > > What about LZ4 instead?  Most benchmarks I've seen show it ahead of
>> > Snappy.
>> > >
>> > >> On Tue, Dec 19, 2017 at 5:55 PM, Mike Drob  wrote:
>> > >>
>> > >> Can you file a JIRA for some kind of magical default
>> > snappy-if-available?
>> > >>
>> > >>> On Tue, Dec 19, 2017 at 7:38 PM, Stack  wrote:
>> > >>>
>> >  On Tue, Dec 19, 2017 at 4:22 PM, Stack  wrote:
>> > 
>> >  Thanks for jumping in JMS. Ok on the by-table.
>> > 
>> >  SNAPPY license seems fine. We'd enable it as default when you
>> create a
>> >  table? Let me play w/ it.
>> > 
>> > 
>> > >>> Oh. I forgot what happens if the native lib is not available, how
>> > cluster
>> > >>> goes down.
>> > >>>
>> > >>> Caused by: java.lang.RuntimeException: native snappy library not
>> > >> available:
>> > >>> this version of libhadoop was built without snappy support.
>> > >>>
>> > >>> I think we should skip out on enabling this (but recommend folks run
>> > this
>> > >>> way...)
>> > >>>
>> > >>> Thanks JMS,
>> > >>> S
>> > >>>
>> > >>>
>> > >>>
>> >  Anything else from your experience that we should change JMS?
>> > 
>> >  Thanks sir,
>> >  S
>> > 
>> > 
>> >  On Tue, Dec 19, 2017 at 1:47 PM, Jean-Marc Spaggiari <
>> >  jean-m...@spaggiari.org> wrote:
>> > 
>> > > Can we get all tables by default Snappy compressed? I think
>> because
>> > of
>> > >>> the
>> > > license we can not, right? Just asking, in case there is an option
>> > for
>> > > that... Also +1 on balancing by table...
>> > >
>> > > 2017-12-18 17:34 GMT-05:00 Stack :
>> > >
>> > >> (I thought I'd already posted a DISCUSSION on defaults for 2.0.0
>> but
>> > > can't
>> > >> find it...)
>> > >>
>> > >> Dear All:
>> > >>
>> > >> I'm trying to get some eyeballs/thoughts on changes you'd like
>> seen
>> > >> in
>> > >> hbase defaults for hbase-2.0.0. We have a an ISSUE and some good
>> > > discussion
>> > >> already up at HBASE-19148.
>> > >>
>> > >> A good case is being made for enabling balancing by table as
>> > >> default.
>> > >>
>> > >> Guanghao Zhang has already put in place more sensible
>> retry/timeout
>> > >> numbers.
>> > >>
>> > >> Anything else we should change? Shout here or up on the issue.
>> > >>
>> > >> Thanks,
>> > >> S
>> > >>
>> > >
>> > 
>> > 
>> > >>>
>> > >>
>> >
>>
>
>
>
> --
> Best regards,
> Andrew
>
> Words like orphans lost among the crosstalk, meaning torn from truth's
> decrepit hands
>- A23, Crosstalk
>



-- 
Best regards,
Andrew

Words like orphans lost among the crosstalk, meaning torn from truth's
decrepit hands
   - A23, Crosstalk


Re: [DISCUSSION] Default configurations in hbase-2.0.0 hbase-default.xml

2017-12-21 Thread Andrew Purtell
No, that's not good either. I mean something like a fast LZ variant.


On Thu, Dec 21, 2017 at 12:10 AM, C Reid  wrote:

> .gz (GzipCodec) or .deflate (DeflateCodec).
>
> 
> From: saint@gmail.com  on behalf of Stack <
> st...@duboce.net>
> Sent: 21 December 2017 15:01:21
> To: HBase Dev List
> Subject: Re: [DISCUSSION] Default configurations in hbase-2.0.0
> hbase-default.xml
>
> On Tue, Dec 19, 2017 at 8:18 PM, Andrew Purtell 
> wrote:
>
> > Is there an option with a pure Java fallback if the native codec isn't
> > available? I mean something reasonable, not bzip2.
> >
> >
> >
> Yeah, what Andrew says...
> S
>
>
>
>
> > > On Dec 19, 2017, at 6:16 PM, Dave Latham  wrote:
> > >
> > > What about LZ4 instead?  Most benchmarks I've seen show it ahead of
> > Snappy.
> > >
> > >> On Tue, Dec 19, 2017 at 5:55 PM, Mike Drob  wrote:
> > >>
> > >> Can you file a JIRA for some kind of magical default
> > snappy-if-available?
> > >>
> > >>> On Tue, Dec 19, 2017 at 7:38 PM, Stack  wrote:
> > >>>
> >  On Tue, Dec 19, 2017 at 4:22 PM, Stack  wrote:
> > 
> >  Thanks for jumping in JMS. Ok on the by-table.
> > 
> >  SNAPPY license seems fine. We'd enable it as default when you
> create a
> >  table? Let me play w/ it.
> > 
> > 
> > >>> Oh. I forgot what happens if the native lib is not available, how
> > cluster
> > >>> goes down.
> > >>>
> > >>> Caused by: java.lang.RuntimeException: native snappy library not
> > >> available:
> > >>> this version of libhadoop was built without snappy support.
> > >>>
> > >>> I think we should skip out on enabling this (but recommend folks run
> > this
> > >>> way...)
> > >>>
> > >>> Thanks JMS,
> > >>> S
> > >>>
> > >>>
> > >>>
> >  Anything else from your experience that we should change JMS?
> > 
> >  Thanks sir,
> >  S
> > 
> > 
> >  On Tue, Dec 19, 2017 at 1:47 PM, Jean-Marc Spaggiari <
> >  jean-m...@spaggiari.org> wrote:
> > 
> > > Can we get all tables by default Snappy compressed? I think because
> > of
> > >>> the
> > > license we can not, right? Just asking, in case there is an option
> > for
> > > that... Also +1 on balancing by table...
> > >
> > > 2017-12-18 17:34 GMT-05:00 Stack :
> > >
> > >> (I thought I'd already posted a DISCUSSION on defaults for 2.0.0
> but
> > > can't
> > >> find it...)
> > >>
> > >> Dear All:
> > >>
> > >> I'm trying to get some eyeballs/thoughts on changes you'd like
> seen
> > >> in
> > >> hbase defaults for hbase-2.0.0. We have a an ISSUE and some good
> > > discussion
> > >> already up at HBASE-19148.
> > >>
> > >> A good case is being made for enabling balancing by table as
> > >> default.
> > >>
> > >> Guanghao Zhang has already put in place more sensible
> retry/timeout
> > >> numbers.
> > >>
> > >> Anything else we should change? Shout here or up on the issue.
> > >>
> > >> Thanks,
> > >> S
> > >>
> > >
> > 
> > 
> > >>>
> > >>
> >
>



-- 
Best regards,
Andrew

Words like orphans lost among the crosstalk, meaning torn from truth's
decrepit hands
   - A23, Crosstalk


[jira] [Created] (HBASE-19584) hbase-thirdparty L&N refer to items not actually in the src release

2017-12-21 Thread Mike Drob (JIRA)
Mike Drob created HBASE-19584:
-

 Summary: hbase-thirdparty L&N refer to items not actually in the 
src release
 Key: HBASE-19584
 URL: https://issues.apache.org/jira/browse/HBASE-19584
 Project: HBase
  Issue Type: Bug
Reporter: Mike Drob


From [~elserj]'s vote on 2.0-RC0:

{quote}
* L&N not entirely accurate, IMO. They state that things are included in the 
src release which are not. I think it would be more appropriate to push the 
relevant information down into src/main/appended-resources for each module 
(e.g. hbase-shaded-protobuf would have 
src/main/appended-resources/{LICENSE,NOTICE}) which have the relevant L&N 
content for the products being bundled. Thus, we'd have nothing in the 3rdparty 
L&N which reflects the src release.
{quote}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [DISCUSSION] Default configurations in hbase-2.0.0 hbase-default.xml

2017-12-21 Thread Stack
On Thu, Dec 21, 2017 at 10:33 AM, Mike Drob  wrote:

> I'm not super comfortable with that...
>
> If it has a spotty record like you suggest, then I don't want to be ironing
> out issues with it so close to beta/release. There's already enough to iron
> out and we're so close to the end that I don't want to risk destabilizing
> at this point...
>
>

Fair point. Let me undo it.

I just came across this nice doc on the region normalizer [1] by our brother
Romil Choksi (Can we get this added to the refguide? I could copy/paste it
if folks don't mind). Reading it, it seems easy to do an enable and a one-off
run, so trying it is low-barrier. We need to talk this tool up when folks are
in a situation where they have ill-filled regions and want to merge adjacent
ones. Meantime, try it a few times ...

S

1.
https://community.hortonworks.com/articles/54987/hbase-region-normalizer.html
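To make the enable-plus-one-off-run concrete, here is a minimal Java sketch
against the Admin API (shell users can do the same with the normalizer_switch
and normalize commands); error handling is omitted and the class name is just
for illustration:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class NormalizerOneOff {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      admin.setNormalizerRunning(true);   // the "enable"
      // One-off run over the tables the normalizer considers eligible; note that
      // a table may also need normalization enabled in its descriptor.
      boolean ran = admin.normalize();
      System.out.println("normalizer ran: " + ran);
    }
  }
}
{code}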



> Mike
>
> On Thu, Dec 21, 2017 at 11:50 AM, Stack  wrote:
>
> > There's been a request to enable region normalization by default. Sounds
> > reasonable to me. Any objections? I think normalization has spotted
> record
> > so far. Enabling it we can try and iron out and issues with it before we
> do
> > the hbase 2.0.0 RC. If not possible, can disable before RC.
> >
> > Thanks,
> > S
> >
> > On Mon, Dec 18, 2017 at 2:34 PM, Stack  wrote:
> >
> > > (I thought I'd already posted a DISCUSSION on defaults for 2.0.0 but
> > can't
> > > find it...)
> > >
> > > Dear All:
> > >
> > > I'm trying to get some eyeballs/thoughts on changes you'd like seen in
> > > hbase defaults for hbase-2.0.0. We have a an ISSUE and some good
> > discussion
> > > already up at HBASE-19148.
> > >
> > > A good case is being made for enabling balancing by table as default.
> > >
> > > Guanghao Zhang has already put in place more sensible retry/timeout
> > > numbers.
> > >
> > > Anything else we should change? Shout here or up on the issue.
> > >
> > > Thanks,
> > > S
> > >
> > >
> > >
> >
>


[jira] [Resolved] (HBASE-19571) Minor refactor of Nightly run scripts

2017-12-21 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy resolved HBASE-19571.
--
   Resolution: Fixed
Fix Version/s: 2.0.0-beta-1

> Minor refactor of Nightly run scripts
> -
>
> Key: HBASE-19571
> URL: https://issues.apache.org/jira/browse/HBASE-19571
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19571.master.001.patch
>
>
> Was trying to split out common code in separate lib, but it's not at all 
> trivial. The way to do it for declarative syntax Jenkinsfile is by using 
> Shared Libraries which requires separate repo!
> The patch ended up being just naming refactors.
> Renames OUTPUTDIR to OUTPUT_DIR
> Renames OUTPUT_RELATIVE to OUTPUT_DIR_RELATIVE



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-19583) Delete EOL branches - branch-1.0 and branch-1.1

2017-12-21 Thread Appy (JIRA)
Appy created HBASE-19583:


 Summary: Delete EOL branches - branch-1.0 and branch-1.1
 Key: HBASE-19583
 URL: https://issues.apache.org/jira/browse/HBASE-19583
 Project: HBase
  Issue Type: Bug
Reporter: Appy
Priority: Minor


wdys [~enis] [~ndimiduk]?
This needs updating too - http://hbase.apache.org/book.html#_release_managers.
I think we should mention RMs for all branches, even EOL ones, if only in another 
table. It is a good way to recognize past efforts.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [DISCUSSION] Default configurations in hbase-2.0.0 hbase-default.xml

2017-12-21 Thread Mike Drob
I'm not super comfortable with that...

If it has a spotty record like you suggest, then I don't want to be ironing
out issues with it so close to beta/release. There's already enough to iron
out and we're so close to the end that I don't want to risk destabilizing
at this point...

Mike

On Thu, Dec 21, 2017 at 11:50 AM, Stack  wrote:

> There's been a request to enable region normalization by default. Sounds
> reasonable to me. Any objections? I think normalization has spotted record
> so far. Enabling it we can try and iron out and issues with it before we do
> the hbase 2.0.0 RC. If not possible, can disable before RC.
>
> Thanks,
> S
>
> On Mon, Dec 18, 2017 at 2:34 PM, Stack  wrote:
>
> > (I thought I'd already posted a DISCUSSION on defaults for 2.0.0 but
> can't
> > find it...)
> >
> > Dear All:
> >
> > I'm trying to get some eyeballs/thoughts on changes you'd like seen in
> > hbase defaults for hbase-2.0.0. We have a an ISSUE and some good
> discussion
> > already up at HBASE-19148.
> >
> > A good case is being made for enabling balancing by table as default.
> >
> > Guanghao Zhang has already put in place more sensible retry/timeout
> > numbers.
> >
> > Anything else we should change? Shout here or up on the issue.
> >
> > Thanks,
> > S
> >
> >
> >
>


[jira] [Created] (HBASE-19582) Tags on append doesn't behave like expected

2017-12-21 Thread Jean-Marc Spaggiari (JIRA)
Jean-Marc Spaggiari created HBASE-19582:
---

 Summary: Tags on append doesn't behave like expected
 Key: HBASE-19582
 URL: https://issues.apache.org/jira/browse/HBASE-19582
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 2.0.0-alpha-4
Reporter: Jean-Marc Spaggiari


When appending a tag to an HBase cell, it seems to not really be appended but to 
live its own life. In the example below, I put a cell, append the TTL, and we can 
see between the 2 scans that only the appended cell carrying the TTL expires. I was 
expecting those 2 cells to become one and expire together.

[code]
hbase(main):082:0> put 't1', 'r1', 'f1:c1', 'value'
0 row(s) in 0.1350 seconds

hbase(main):083:0> append 't1', 'r1', 'f1:c1', '', { TTL => 5000 }
0 row(s) in 0.0080 seconds

hbase(main):084:0> scan 't1'
ROW                COLUMN+CELL
 r1                column=f1:c1, timestamp=1513879615014, value=value
1 row(s) in 0.0730 seconds

hbase(main):085:0> scan 't1'
ROW                COLUMN+CELL
 r1                column=f1:c1, timestamp=1513879599375, value=value
1 row(s) in 0.0500 seconds
[code]
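For anyone reproducing this from the Java client instead of the shell, a minimal
sketch assuming the 2.0 client API (same table/family/qualifier as above, error
handling omitted). The TTL set on the Append appears to end up as a cell tag on
the newly written cell only, which would explain why the two cells expire
independently:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Append;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class AppendTtlRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("t1"))) {
      byte[] row = Bytes.toBytes("r1");
      byte[] cf = Bytes.toBytes("f1");
      byte[] q = Bytes.toBytes("c1");

      // Initial cell, written without any TTL.
      table.put(new Put(row).addColumn(cf, q, Bytes.toBytes("value")));

      // Empty-value append carrying a 5 second TTL, like the shell example above.
      Append append = new Append(row);
      append.addColumn(cf, q, Bytes.toBytes(""));
      append.setTTL(5000L);
      table.append(append);
    }
  }
}
{code}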




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [DISCUSSION] Default configurations in hbase-2.0.0 hbase-default.xml

2017-12-21 Thread Stack
There's been a request to enable region normalization by default. Sounds
reasonable to me. Any objections? I think normalization has a spotty record
so far. Enabling it, we can try and iron out any issues with it before we do
the hbase 2.0.0 RC. If not possible, we can disable it before the RC.

Thanks,
S

On Mon, Dec 18, 2017 at 2:34 PM, Stack  wrote:

> (I thought I'd already posted a DISCUSSION on defaults for 2.0.0 but can't
> find it...)
>
> Dear All:
>
> I'm trying to get some eyeballs/thoughts on changes you'd like seen in
> hbase defaults for hbase-2.0.0. We have a an ISSUE and some good discussion
> already up at HBASE-19148.
>
> A good case is being made for enabling balancing by table as default.
>
> Guanghao Zhang has already put in place more sensible retry/timeout
> numbers.
>
> Anything else we should change? Shout here or up on the issue.
>
> Thanks,
> S
>
>
>


[jira] [Created] (HBASE-19581) Fix Checkstyle error in hbase-external-blockcache

2017-12-21 Thread Jan Hentschel (JIRA)
Jan Hentschel created HBASE-19581:
-

 Summary: Fix Checkstyle error in hbase-external-blockcache
 Key: HBASE-19581
 URL: https://issues.apache.org/jira/browse/HBASE-19581
 Project: HBase
  Issue Type: Sub-task
Reporter: Jan Hentschel
Assignee: Jan Hentschel
Priority: Trivial


Fix the remaining Checkstyle error in the *hbase-external-blockcache* module 
and enable Checkstyle to fail on violations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HBASE-19580) Use slf4j instead of commons-logging

2017-12-21 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang resolved HBASE-19580.
---
Resolution: Fixed

The patch is straightforward. Rebased and pushed to branch HBASE-19397.
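For reference, the change is the usual commons-logging to slf4j swap; a generic
before/after sketch (the class name is hypothetical, not necessarily what the
patch touches):

{code}
// Before: commons-logging
//   import org.apache.commons.logging.Log;
//   import org.apache.commons.logging.LogFactory;
//   private static final Log LOG = LogFactory.getLog(SomeReplicationClass.class);
//   LOG.info("Added peer " + peerId);

// After: slf4j, with parameterized logging so the message is only built when needed.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class SomeReplicationClass {
  private static final Logger LOG = LoggerFactory.getLogger(SomeReplicationClass.class);

  void logAdd(String peerId) {
    LOG.info("Added peer {}", peerId);
  }
}
{code}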

> Use slf4j instead of commons-logging
> 
>
> Key: HBASE-19580
> URL: https://issues.apache.org/jira/browse/HBASE-19580
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2, Replication
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Attachments: HBASE-19580.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HBASE-10092) Move to slf4j

2017-12-21 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang resolved HBASE-10092.
---
Resolution: Fixed

Pushed the addendum to master and branch-2.

> Move to slf4j
> -
>
> Key: HBASE-10092
> URL: https://issues.apache.org/jira/browse/HBASE-10092
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: Balazs Meszaros
>Priority: Critical
> Fix For: 2.0.0-beta-1
>
> Attachments: 10092.txt, 10092v2.txt, HBASE-10092-addendum.patch, 
> HBASE-10092-preview-v0.patch, HBASE-10092.master.001.patch, 
> HBASE-10092.master.002.patch, HBASE-10092.master.003.patch, 
> HBASE-10092.master.004.patch, HBASE-10092.patch
>
>
> Allows logging with less friction.  See http://logging.apache.org/log4j/2.x/  
> This rather radical transition can be done w/ minor change given they have an 
> adapter for apache's logging, the one we use.  They also have and adapter for 
> slf4j so we likely can remove at least some of the 4 versions of this module 
> our dependencies make use of.
> I made a start in attached patch but am currently stuck in maven dependency 
> resolve hell courtesy of our slf4j.  Fixing will take some concentration and 
> a good net connection, an item I currently lack.  Other TODOs are that will 
> need to fix our little log level setting jsp page -- will likely have to undo 
> our use of hadoop's tool here -- and the config system changes a little.
> I will return to this project soon.  Will bring numbers.
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Reopened] (HBASE-10092) Move to slf4j

2017-12-21 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang reopened HBASE-10092:
---

TestAssignProcedure and TestWALEntrySinkFilter are still on commons-logging. 
Reopening to push a simple addendum.

> Move to slf4j
> -
>
> Key: HBASE-10092
> URL: https://issues.apache.org/jira/browse/HBASE-10092
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: Balazs Meszaros
>Priority: Critical
> Fix For: 2.0.0-beta-1
>
> Attachments: 10092.txt, 10092v2.txt, HBASE-10092-preview-v0.patch, 
> HBASE-10092.master.001.patch, HBASE-10092.master.002.patch, 
> HBASE-10092.master.003.patch, HBASE-10092.master.004.patch, HBASE-10092.patch
>
>
> Allows logging with less friction.  See http://logging.apache.org/log4j/2.x/  
> This rather radical transition can be done w/ minor change given they have an 
> adapter for apache's logging, the one we use.  They also have and adapter for 
> slf4j so we likely can remove at least some of the 4 versions of this module 
> our dependencies make use of.
> I made a start in attached patch but am currently stuck in maven dependency 
> resolve hell courtesy of our slf4j.  Fixing will take some concentration and 
> a good net connection, an item I currently lack.  Other TODOs are that will 
> need to fix our little log level setting jsp page -- will likely have to undo 
> our use of hadoop's tool here -- and the config system changes a little.
> I will return to this project soon.  Will bring numbers.
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-19580) Use slf4j instead of commons-logging

2017-12-21 Thread Duo Zhang (JIRA)
Duo Zhang created HBASE-19580:
-

 Summary: Use slf4j instead of commons-logging
 Key: HBASE-19580
 URL: https://issues.apache.org/jira/browse/HBASE-19580
 Project: HBase
  Issue Type: Sub-task
Reporter: Duo Zhang






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-19579) Add peer lock test for shell command list_locks

2017-12-21 Thread Duo Zhang (JIRA)
Duo Zhang created HBASE-19579:
-

 Summary: Add peer lock test for shell command list_locks
 Key: HBASE-19579
 URL: https://issues.apache.org/jira/browse/HBASE-19579
 Project: HBase
  Issue Type: Sub-task
Reporter: Duo Zhang






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HBASE-19544) Add UTs for testing concurrent modifications on replication peer

2017-12-21 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang resolved HBASE-19544.
---
Resolution: Duplicate

HBASE-19520 has already added the concurrent test. Resolving as a duplicate.

> Add UTs for testing concurrent modifications on replication peer
> 
>
> Key: HBASE-19544
> URL: https://issues.apache.org/jira/browse/HBASE-19544
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2, Replication, test
>Reporter: Duo Zhang
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: Request to Join Slack Channel

2017-12-21 Thread Jean-Marc Spaggiari
Me too please ;)

Thanks,

JMS

2017-12-20 20:54 GMT-05:00 Philippe Laflamme :

> Hi,
>
> I'd like to join the slack discussion and the guide mentions writing to
> this address to obtain an invite. May I obtain an invite please?
>
> Thanks,
> Philippe
>


[jira] [Created] (HBASE-19578) MasterProcWALs cleaning is incorrect

2017-12-21 Thread Peter Somogyi (JIRA)
Peter Somogyi created HBASE-19578:
-

 Summary: MasterProcWALs cleaning is incorrect
 Key: HBASE-19578
 URL: https://issues.apache.org/jira/browse/HBASE-19578
 Project: HBase
  Issue Type: Bug
  Components: amv2
Affects Versions: 2.0.0-alpha-4
Reporter: Peter Somogyi
Assignee: Peter Somogyi
Priority: Critical
 Fix For: 2.0.0-beta-1


The pattern used for MasterProcWALs cleaning is incorrect. The logs are deleted 
from the oldWALs directory as invalid files.

2017-12-21 11:32:37,980 WARN  [ForkJoinPool-1-worker-2] cleaner.CleanerChore: 
Found a wrongly formatted file: 
file:/Users/peter.somogyi/tmp/hbase/oldWALs/pv2-0001.log - will 
delete it.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [ANNOUNCE] Please welcome new HBase committer YI Liang

2017-12-21 Thread Yu Li
Congratulations!

Best Regards,
Yu

On 21 December 2017 at 16:07, Chia-Ping Tsai  wrote:

> Congratulations , and welcome !!!
>
> On 2017-12-21 08:06, Jerry He  wrote:
> > On behalf of the Apache HBase PMC, I am pleased to announce that
> > Yi Liang has accepted the PMC's invitation to become a committer
> > on the project.
> >
> > We appreciate all of Yi's great work thus far and look forward to
> > his continued involvement.
> >
> > Please join me in congratulating Yi!
> >
> > --
> > Thanks,
> > Jerry
> >
>


Re: [DISCUSSION] Default configurations in hbase-2.0.0 hbase-default.xml

2017-12-21 Thread C Reid
.gz (GzipCodec) or .deflate (DeflateCodec).


From: saint@gmail.com  on behalf of Stack 

Sent: 21 December 2017 15:01:21
To: HBase Dev List
Subject: Re: [DISCUSSION] Default configurations in hbase-2.0.0 
hbase-default.xml

On Tue, Dec 19, 2017 at 8:18 PM, Andrew Purtell 
wrote:

> Is there an option with a pure Java fallback if the native codec isn't
> available? I mean something reasonable, not bzip2.
>
>
>
Yeah, what Andrew says...
S




> > On Dec 19, 2017, at 6:16 PM, Dave Latham  wrote:
> >
> > What about LZ4 instead?  Most benchmarks I've seen show it ahead of
> > Snappy.
> >
> >> On Tue, Dec 19, 2017 at 5:55 PM, Mike Drob  wrote:
> >>
> >> Can you file a JIRA for some kind of magical default
> >> snappy-if-available?
> >>
> >>> On Tue, Dec 19, 2017 at 7:38 PM, Stack  wrote:
> >>>
>  On Tue, Dec 19, 2017 at 4:22 PM, Stack  wrote:
> 
>  Thanks for jumping in JMS. Ok on the by-table.
> 
>  SNAPPY license seems fine. We'd enable it as default when you create a
>  table? Let me play w/ it.
> 
> 
> >>> Oh. I forgot what happens if the native lib is not available, how the
> >>> cluster goes down.
> >>>
> >>> Caused by: java.lang.RuntimeException: native snappy library not
> >>> available: this version of libhadoop was built without snappy support.
> >>>
> >>> I think we should skip out on enabling this (but recommend folks run
> >>> this way...)
> >>>
> >>> Thanks JMS,
> >>> S
> >>>
> >>>
> >>>
>  Anything else from your experience that we should change JMS?
> 
>  Thanks sir,
>  S
> 
> 
>  On Tue, Dec 19, 2017 at 1:47 PM, Jean-Marc Spaggiari <
>  jean-m...@spaggiari.org> wrote:
> 
> > Can we get all tables by default Snappy compressed? I think because of
> > the license we can not, right? Just asking, in case there is an option
> > for that... Also +1 on balancing by table...
> >
> > 2017-12-18 17:34 GMT-05:00 Stack :
> >
> >> (I thought I'd already posted a DISCUSSION on defaults for 2.0.0 but
> >> can't find it...)
> >>
> >> Dear All:
> >>
> >> I'm trying to get some eyeballs/thoughts on changes you'd like seen
> >> in
> >> hbase defaults for hbase-2.0.0. We have an ISSUE and some good
> >> discussion already up at HBASE-19148.
> >>
> >> A good case is being made for enabling balancing by table as
> >> default.
> >>
> >> Guanghao Zhang has already put in place more sensible retry/timeout
> >> numbers.
> >>
> >> Anything else we should change? Shout here or up on the issue.
> >>
> >> Thanks,
> >> S
> >>
> >
> 
> 
> >>>
> >>
>
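
To make the "snappy-if-available" idea above concrete, here is a rough client-side sketch: probe the codec first and only ask for SNAPPY on the column family when the native library actually loads. The class name, the probe-then-fall-back flow and the choice of fallback are assumptions, not anything settled in the thread; the probe leans on the same CompressionTest helper behind the hbase org.apache.hadoop.hbase.util.CompressionTest tool.

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.HColumnDescriptor;
  import org.apache.hadoop.hbase.HTableDescriptor;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Admin;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;
  import org.apache.hadoop.hbase.io.compress.Compression;
  import org.apache.hadoop.hbase.util.CompressionTest;

  public class CreateTableSnappyIfAvailable {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      // Probe the codec up front; on a libhadoop built without snappy support this is
      // where the "native snappy library not available" failure quoted above surfaces.
      Compression.Algorithm algo = CompressionTest.testCompression("snappy")
          ? Compression.Algorithm.SNAPPY
          : Compression.Algorithm.NONE;  // or GZ, which has a pure-Java implementation

      HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("t1"));
      HColumnDescriptor family = new HColumnDescriptor("f1");
      family.setCompressionType(algo);
      desc.addFamily(family);

      try (Connection conn = ConnectionFactory.createConnection(conf);
           Admin admin = conn.getAdmin()) {
        admin.createTable(desc);
      }
    }
  }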


Re: [ANNOUNCE] Please welcome new HBase committer YI Liang

2017-12-21 Thread Chia-Ping Tsai
Congratulations , and welcome !!!

On 2017-12-21 08:06, Jerry He  wrote: 
> On behalf of the Apache HBase PMC, I am pleased to announce that
> Yi Liang has accepted the PMC's invitation to become a committer
> on the project.
> 
> We appreciate all of Yi's great work thus far and look forward to
> his continued involvement.
> 
> Please join me in congratulating Yi!
> 
> --
> Thanks,
> Jerry
> 


Re: Cleanup and remove the code path where is no hbase.client.rpc.codec

2017-12-21 Thread Stack
On Tue, Dec 19, 2017 at 7:02 PM, Jerry He  wrote:

> RPC_CODEC_CONF_KEY 'hbase.client.rpc.codec' is a property we use on the
> client side to determine the RPC codec.
>
> It currently has a strange logic. Whereas the default is KeyValueCodec, we
> allow a user to specify an empty string "" as a way to indicate there
> is no codec class and we should not use any.
>
>   Codec getCodec() {
>     // For NO CODEC, "hbase.client.rpc.codec" must be configured with empty string AND
>     // "hbase.client.default.rpc.codec" also -- because default is to do cell block encoding.
>     String className = conf.get(HConstants.RPC_CODEC_CONF_KEY, getDefaultCodec(this.conf));
>     if (className == null || className.length() == 0) {
>       return null;
>     }
>     try {
>       return (Codec) Class.forName(className).newInstance();
>     } catch (Exception e) {
>       throw new RuntimeException("Failed getting codec " + className, e);
>     }
>   }
>
> I don't know the original reason for having this.
>

IIRC, when we moved to pb's first, KeyValues were all pb'd. This was our
default serialization.

It was too slow -- duh -- so we had to figure something else. We came up w/ the
notion of pb being used to describe the content of the RPC but that the
actual cells would follow-behind the pb in a 'cellblock' (We used to show a
picture of a motorcycle with a sidecar as an illustration trying to convey
that the follow-behind appendage was like a 'sidecar' that went with the
RPC message). We went out of our way to ensure we allowed shipping both
forms of message -- with sidecar and without, with KVs PB encoded. The
latter would be useful for non-native clients, at least while trying to get
off the ground.

The above code came in with:

tree 6c91d2f4ee7faadea35b238418fcd6b5051e37f5
parent 823656bf8372e55b5b4a81e72921cb78b0be96d7
author stack  Mon Dec 8 15:23:38 2014 -0800
committer stack  Mon Dec 8 15:23:38 2014 -0800

HBASE-12597 Add RpcClient interface and enable changing of RpcClient
implementation (Jurriaan Mous)

... where Jurriaan started in on a client-side refactor whose intent was
being able to slot in an async client.

Before that was HBASE-10322 which added RPC_CODEC_CONF_KEY. Previous to
this again, IIRC, Anoop I believe noticed that we weren't cellblocking by
default and fixed it.

We've been cellblocking by default ever since.


> The consequence of this 'no codec' is that we will pb all RPC payload and
> not use cell blocks.
>
>
Exactly.

I used to think it critical we support this mode for the python messers or
whoever wanted to put together a quick client; they wouldn't have to
rig a non-native cellblock decoder. Rarely if ever has anyone made use of
this facility, it seems.
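
A minimal sketch of how such a quick client asks for that pure-pb mode today, following the getCodec() comment quoted above (the class name here is made up):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.HConstants;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;

  public class PurePbClient {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      // Both keys must be emptied because the default is to do cell block encoding.
      conf.set(HConstants.RPC_CODEC_CONF_KEY, "");    // "hbase.client.rpc.codec"
      conf.set("hbase.client.default.rpc.codec", "");
      try (Connection conn = ConnectionFactory.createConnection(conf)) {
        // Every RPC payload, cells included, now travels as protobuf -- no cellblock sidecar.
      }
    }
  }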



> In the test cases, after these many releases, there is no test that
> exercises this special case.
> The code path we test are mostly with a valid or default
> 'hbase.client.rpc.codec'.
> The other code path is probably sitting there rotten.
>
> For example,
>
> In MultiServerCallable:
>
>   if (this.cellBlock) {
>     // Build a multi request absent its Cell payload. Send data in cellblocks.
>     regionActionBuilder = RequestConverter.buildNoDataRegionAction(regionName,
>         rms, cells, regionActionBuilder, actionBuilder, mutationBuilder);
>   } else {
>     regionActionBuilder = RequestConverter.buildRegionAction(regionName, rms);
>       ==> Will not be exercised in test..
>   }
>
> Proposal:
>
> We remove this 'no hbase.rpc.codec' case and all dependent logic. There is
> a default and a user can override the default, but has to provide a valid
> non-empty value.
>

Only objection is that it makes it harder to write a non-native client. With
all pb encoded, you could do a first cut easily enough w/o having to write a
non-java decoder for our cryptic cellblock packing mechanism.



> Then we can clean up the code where we choose between pb or no pb.  We will
> always do cell block in these cases.
>
> There are cases where we currently only do pb, like some of the individual
> ops (append, increment, mutateRow, etc). We can revisit to see if they can
> be non-pb'ed.
>
> The proposed change only cleans up the client side (RPC client).
> I want to keep the server side handling of pb and no-pb both for now, so
> that the server can accommodate a 'no hbase.rpc.codec' connection request
> for backward compatibility.
>
>
This is an arg for upping coverage for the pure-pb case and for not
removing our client's ability to ask for this encoding?

Thanks Jerry for bringing this up.
S


> Any concerns?
>
> Thanks.
>
> Jerry
>
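
On the coverage point above, a rough sketch of the kind of test that could exercise the pure-pb path end to end (the class, table and test names are made up; this is not an existing test):

  import static org.junit.Assert.assertArrayEquals;

  import org.apache.hadoop.hbase.HBaseTestingUtility;
  import org.apache.hadoop.hbase.HConstants;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Get;
  import org.apache.hadoop.hbase.client.Put;
  import org.apache.hadoop.hbase.client.Result;
  import org.apache.hadoop.hbase.client.Table;
  import org.apache.hadoop.hbase.util.Bytes;
  import org.junit.AfterClass;
  import org.junit.BeforeClass;
  import org.junit.Test;

  public class TestRpcWithoutCellBlocks {
    private static final HBaseTestingUtility UTIL = new HBaseTestingUtility();

    @BeforeClass
    public static void setUp() throws Exception {
      // Empty both codec keys so the client's getCodec() returns null and requests
      // are built down the pure-pb (no cellblock) branch.
      UTIL.getConfiguration().set(HConstants.RPC_CODEC_CONF_KEY, "");
      UTIL.getConfiguration().set("hbase.client.default.rpc.codec", "");
      UTIL.startMiniCluster();
    }

    @AfterClass
    public static void tearDown() throws Exception {
      UTIL.shutdownMiniCluster();
    }

    @Test
    public void testPutGetWithoutCellBlocks() throws Exception {
      byte[] family = Bytes.toBytes("f");
      try (Table table = UTIL.createTable(TableName.valueOf("purePb"), family)) {
        table.put(new Put(Bytes.toBytes("row"))
            .addColumn(family, Bytes.toBytes("q"), Bytes.toBytes("value")));
        Result result = table.get(new Get(Bytes.toBytes("row")));
        assertArrayEquals(Bytes.toBytes("value"), result.getValue(family, Bytes.toBytes("q")));
      }
    }
  }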