[jira] [Commented] (HBASE-15593) Time limit of scanning should be offered by client

2016-04-14 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242517#comment-15242517
 ] 

Duo Zhang commented on HBASE-15593:
---

Got it sir.
Have a good night.

> Time limit of scanning should be offered by client
> --
>
> Key: HBASE-15593
> URL: https://issues.apache.org/jira/browse/HBASE-15593
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0, 1.1.4
>Reporter: Phil Yang
>Assignee: Phil Yang
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.1.5, 1.2.2
>
> Attachments: HBASE-15593-branch-1-v1.patch, 
> HBASE-15593-branch-1-v2.patch, HBASE-15593-branch-1.1-v1.patch, 
> HBASE-15593-branch-1.1-v2.patch, HBASE-15593-branch-1.2-v1.patch, 
> HBASE-15593-branch-1.2-v2.patch, HBASE-15593-v1.patch, HBASE-15593-v2.patch, 
> HBASE-15593-v3.patch, HBASE-15593-v4.patch, HBASE-15593-v5.patch, 
> HBASE-15593-v6.patch
>
>
> In RSRpcServices.scan, we set a time limit equal to 
> Math.min(scannerLeaseTimeoutPeriod, rpcTimeout) / 2, and respond with a 
> heartbeat message if we reach this limit. However, the two timeout settings 
> (hbase.client.scanner.timeout.period and hbase.rpc.timeout) are read from 
> the RS's configuration, which may differ from the client's. If the client's 
> setting is much lower than the server's, there may still be a timeout on the 
> client side.
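The mismatch described above can be sketched as follows. `computeTimeLimit` mirrors the `Math.min(scannerLeaseTimeoutPeriod, rpcTimeout) / 2` expression from the description; the class and method names are hypothetical, not the actual RSRpcServices API.

```java
// Sketch of the server-side heartbeat time limit described above.
// Class and method names are illustrative, not the real RSRpcServices API.
public class ScanTimeLimitSketch {

    // The server picks half the smaller of its two timeouts as the point
    // at which it sends a heartbeat instead of more results.
    public static long computeTimeLimit(long scannerLeaseTimeoutPeriod, long rpcTimeout) {
        return Math.min(scannerLeaseTimeoutPeriod, rpcTimeout) / 2;
    }

    // The client can time out when its own rpc timeout elapses before the
    // server's heartbeat deadline (computed from the *server's* config).
    public static boolean clientMayTimeOut(long serverTimeLimit, long clientRpcTimeout) {
        return clientRpcTimeout <= serverTimeLimit;
    }

    public static void main(String[] args) {
        // Illustrative values: 60s on the RS side, 10s on the client side.
        long serverLimit = computeTimeLimit(60_000L, 60_000L);
        System.out.println("server heartbeat limit = " + serverLimit + " ms");
        System.out.println("client with 10s rpc timeout may time out: "
            + clientMayTimeOut(serverLimit, 10_000L));
    }
}
```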



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15292) Refined ZooKeeperWatcher to prevent ZooKeeper's callback while construction

2016-04-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242513#comment-15242513
 ] 

Hadoop QA commented on HBASE-15292:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
6s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 
7s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
56s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 
2s {color} | {color:green} hbase-client: patch generated 0 new + 40 unchanged - 
1 fixed = 40 total (was 41) {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
28m 8s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 57s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
8s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 42m 39s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12798899/HBASE-15292-V5.patch |
| JIRA Issue | HBASE-15292 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 7efb9ed |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  /home/jenkins/tools/java/jdk1.8.0:1.8.0 
/usr/local/jenkins/java/jdk1.7.0_79:1.7.0_79 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache

[jira] [Commented] (HBASE-15593) Time limit of scanning should be offered by client

2016-04-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242514#comment-15242514
 ] 

stack commented on HBASE-15593:
---

Sorry [~Apache9] and [~yangzhe1991]. This is on my list. I was going to try it. 
Will look in morning. Thanks for your patience.






[jira] [Commented] (HBASE-15593) Time limit of scanning should be offered by client

2016-04-14 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242508#comment-15242508
 ] 

Duo Zhang commented on HBASE-15593:
---

[~stack] ping.






[jira] [Updated] (HBASE-15659) metrics about the Stripe information

2016-04-14 Thread chenxu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chenxu updated HBASE-15659:
---
Attachment: HBASE-15659-master-v1.patch

The metric is under *Hadoop:service=HBase,name=RegionServer,sub=Regions*, 
and the value format is:
bq. _level_0,;_stripe_,...

> metrics about the Stripe information
> ---
>
> Key: HBASE-15659
> URL: https://issues.apache.org/jira/browse/HBASE-15659
> Project: HBase
>  Issue Type: New Feature
>  Components: metrics
>Affects Versions: 1.1.3
>Reporter: chenxu
>Assignee: chenxu
>Priority: Minor
> Attachments: HBASE-15659-master-v1.patch
>
>
> When stripe compaction is enabled, there is no information about:
> {quote}
> how many Stripes each Store has, and
> how many StoreFiles each Stripe has
> {quote}
> These metrics should be supplied.
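The proposed metric is essentially an aggregation of stripe and store-file counts. A purely illustrative sketch of building such a summary string is below; none of these names come from the actual HBase metrics API, and the real value format in the patch may differ.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Purely illustrative: summarize, per stripe, how many store files it
// holds. Not the actual HBase metrics API or the patch's value format.
public class StripeMetricsSketch {

    // filesPerStripe maps a stripe name to its store-file count.
    public static String summarize(Map<String, Integer> filesPerStripe) {
        StringBuilder sb = new StringBuilder(filesPerStripe.size() + " stripes: ");
        filesPerStripe.forEach((stripe, files) ->
            sb.append(stripe).append('=').append(files).append(';'));
        return sb.toString();
    }

    public static void main(String[] args) {
        // Hypothetical store with an L0 area and two stripes.
        Map<String, Integer> stripes = new LinkedHashMap<>();
        stripes.put("level_0", 3);
        stripes.put("stripe_0", 2);
        stripes.put("stripe_1", 4);
        System.out.println(summarize(stripes));
    }
}
```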





[jira] [Created] (HBASE-15659) metrics about the Stripe information

2016-04-14 Thread chenxu (JIRA)
chenxu created HBASE-15659:
--

 Summary: metrics about the Stripe information
 Key: HBASE-15659
 URL: https://issues.apache.org/jira/browse/HBASE-15659
 Project: HBase
  Issue Type: New Feature
  Components: metrics
Affects Versions: 1.1.3
Reporter: chenxu
Assignee: chenxu
Priority: Minor


When stripe compaction is enabled, there is no information about:
{quote}
how many Stripes each Store has, and
how many StoreFiles each Stripe has
{quote}
These metrics should be supplied.





[jira] [Commented] (HBASE-15638) Shade protobuf

2016-04-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242485#comment-15242485
 ] 

stack commented on HBASE-15638:
---

Grand

> Shade protobuf
> --
>
> Key: HBASE-15638
> URL: https://issues.apache.org/jira/browse/HBASE-15638
> Project: HBase
>  Issue Type: Bug
>  Components: Protobufs
>Reporter: stack
> Attachments: 15638v2.patch, as.far.as.server.patch
>
>
> Shade protobufs so we can move to a different version without breaking the 
> world. We want to get up on pb3 because it has unsafe methods that allow us 
> save on copies; it also has some means of dealing with BBs so we can pass it 
> offheap DBBs. We'll probably want to change PB3 to open it up some more too 
> so we can stay offheap as we traverse PB. This issue comes of [~anoop.hbase] 
> and [~ram_krish]'s offheaping of the readpath work.
> This change is mostly straight-forward but there are some tricky bits:
>  # How to interface with HDFS? It wants its ByteStrings. Here in particular 
> in FanOutOneBlockAsyncDFSOutputSaslHelper:
> {code}
>   if (payload != null) {
> builder.setPayload(ByteString.copyFrom(payload));
>   }
> {code}
>  # [~busbey] also points out that we need to take care of endpoints done as 
> pb. Test at least.
> Let me raise this one on the dev list too.





[jira] [Updated] (HBASE-15292) Refined ZooKeeperWatcher to prevent ZooKeeper's callback while construction

2016-04-14 Thread Hiroshi Ikeda (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hiroshi Ikeda updated HBASE-15292:
--
Attachment: HBASE-15292-V5.patch

Added a revised patch: refactored to suppress the findbugs warning, and added 
some javadoc (I hope the sentences and wording are not too far off :)

> Refined ZooKeeperWatcher to prevent ZooKeeper's callback while construction
> ---
>
> Key: HBASE-15292
> URL: https://issues.apache.org/jira/browse/HBASE-15292
> Project: HBase
>  Issue Type: Bug
>  Components: Zookeeper
>Reporter: Hiroshi Ikeda
>Assignee: Hiroshi Ikeda
>Priority: Minor
> Attachments: HBASE-15292-V2.patch, HBASE-15292-V3.patch, 
> HBASE-15292-V4.patch, HBASE-15292-V5.patch, HBASE-15292.patch
>
>
> The existing code is not just messy but also contains a subtle visibility 
> bug due to missing synchronization between threads.
> The root of the evil is that ZooKeeper uses a silly anti-pattern, starting a 
> thread within its constructor, and in practice developers cannot use 
> ZooKeeper correctly without tedious code.





[jira] [Commented] (HBASE-15651) Track our flaky tests and use them to improve our build environment

2016-04-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242470#comment-15242470
 ] 

stack commented on HBASE-15651:
---

Script is missing apache license.

It does this if I pass it the general Trunk URL:

kalashnikov:hbase.git stack$ python ./dev-support/flakies.py 
https://builds.apache.org/view/All/job/HBase-Trunk_matrix/
Getting test results for 
https://builds.apache.org/view/All/job/HBase-Trunk_matrix/848/
Traceback (most recent call last):
  File "./dev-support/flakies.py", line 36, in 
run_id_to_results[run["number"]] = get_build_results(run["url"])
  File "./dev-support/flakies.py", line 20, in get_build_results
for test_cases in json_response["suites"]:
KeyError: 'suites'

Otherwise, it is excellent. I can check it in and fix the Apache license if 
you want, or do you want to do it and address the above (tell the user what 
URL you are expecting)?

We should probably remove ./dev-tools/jenkins-tools/. It used to do 
something like this:

{code}
A tool which pulls test case results from a Jenkins server. It displays a 
union of failed test cases from the last 15 runs (by default; the actual 
number of jobs can be less depending on availability) recorded in the Jenkins 
server, and tracks how each of them performed across those runs (passed, not 
run, or failed).
{code}

Here is an example run:

{code}
kalashnikov:hbase.git stack$ python ./dev-support/flakies.py 
https://builds.apache.org/view/All/job/HBase-Trunk_matrix/jdk=latest1.7,label=yahoo-not-h2/
Getting test results for 
https://builds.apache.org/view/All/job/HBase-Trunk_matrix/jdk=latest1.7,label=yahoo-not-h2/848/
Getting test results for 
https://builds.apache.org/view/All/job/HBase-Trunk_matrix/jdk=latest1.7,label=yahoo-not-h2/847/
Getting test results for 
https://builds.apache.org/view/All/job/HBase-Trunk_matrix/jdk=latest1.7,label=yahoo-not-h2/846/
Getting test results for 
https://builds.apache.org/view/All/job/HBase-Trunk_matrix/jdk=latest1.7,label=yahoo-not-h2/845/
No test results for 
https://builds.apache.org/view/All/job/HBase-Trunk_matrix/jdk=latest1.7,label=yahoo-not-h2/845/
Getting test results for 
https://builds.apache.org/view/All/job/HBase-Trunk_matrix/jdk=latest1.7,label=yahoo-not-h2/844/
Getting test results for 
https://builds.apache.org/view/All/job/HBase-Trunk_matrix/jdk=latest1.7,label=yahoo-not-h2/843/
Getting test results for 
https://builds.apache.org/view/All/job/HBase-Trunk_matrix/jdk=latest1.7,label=yahoo-not-h2/842/
Getting test results for 
https://builds.apache.org/view/All/job/HBase-Trunk_matrix/jdk=latest1.7,label=yahoo-not-h2/841/
Getting test results for 
https://builds.apache.org/view/All/job/HBase-Trunk_matrix/jdk=latest1.7,label=yahoo-not-h2/840/
Getting test results for 
https://builds.apache.org/view/All/job/HBase-Trunk_matrix/jdk=latest1.7,label=yahoo-not-h2/839/
Getting test results for 
https://builds.apache.org/view/All/job/HBase-Trunk_matrix/jdk=latest1.7,label=yahoo-not-h2/838/
Getting test results for 
https://builds.apache.org/view/All/job/HBase-Trunk_matrix/jdk=latest1.7,label=yahoo-not-h2/837/
Getting test results for 
https://builds.apache.org/view/All/job/HBase-Trunk_matrix/jdk=latest1.7,label=yahoo-not-h2/836/
Getting test results for 
https://builds.apache.org/view/All/job/HBase-Trunk_matrix/jdk=latest1.7,label=yahoo-not-h2/835/
Getting test results for 
https://builds.apache.org/view/All/job/HBase-Trunk_matrix/jdk=latest1.7,label=yahoo-not-h2/834/
Getting test results for 
https://builds.apache.org/view/All/job/HBase-Trunk_matrix/jdk=latest1.7,label=yahoo-not-h2/833/
Getting test results for 
https://builds.apache.org/view/All/job/HBase-Trunk_matrix/jdk=latest1.7,label=yahoo-not-h2/832/
Getting test results for 
https://builds.apache.org/view/All/job/HBase-Trunk_matrix/jdk=latest1.7,label=yahoo-not-h2/831/
Getting test results for 
https://builds.apache.org/view/All/job/HBase-Trunk_matrix/jdk=latest1.7,label=yahoo-not-h2/830/
Getting test results for 
https://builds.apache.org/view/All/job/HBase-Trunk_matrix/jdk=latest1.7,label=yahoo-not-h2/829/
Getting test results for 
https://builds.apache.org/view/All/job/HBase-Trunk_matrix/jdk=latest1.7,label=yahoo-not-h2/745/

Test Name                                                                               Failed  Total Runs  Flakyness
client.TestMetaCache#testPreserveMetaCacheOnException                                        1          18         6%
ipc.TestAsyncIPC#testRTEDuringAsyncConnectionSetup[0]                                        1          20         5%
security.visibility.TestVisibilityLabelsWithACL#testVisibilityLabelsForUserWithNoAuths       1          19         5%
regionserver.TestRegionMergeTransactionOnCluster#testWholesomeMerge                          2          19        11%
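The Flakyness column above is just failed runs divided by total runs; the table's numbers are consistent with simple rounding. A minimal sketch of that computation follows; the names are mine, not taken from flakies.py.

```java
// Sketch of the flakiness figure shown in the table above:
// failed runs over total runs, as a rounded integer percentage.
public class FlakinessSketch {

    public static long flakinessPercent(int failed, int totalRuns) {
        return Math.round(100.0 * failed / totalRuns);
    }

    public static void main(String[] args) {
        // Values taken from the table above.
        System.out.println(flakinessPercent(1, 18) + "%"); // TestMetaCache row
        System.out.println(flakinessPercent(2, 19) + "%"); // testWholesomeMerge row
    }
}
```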

coprocessor.

[jira] [Commented] (HBASE-15638) Shade protobuf

2016-04-14 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242463#comment-15242463
 ] 

Duo Zhang commented on HBASE-15638:
---

OK, I got your point. There would be some maven tricks: we need to pull pb3.0 
into hbase-protocol and not propagate it to other sub-modules, since we have 
shaded it in the hbase-protocol artifact. I can help test whether it works in 
Eclipse.

Thanks.






[jira] [Commented] (HBASE-15638) Shade protobuf

2016-04-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242459#comment-15242459
 ] 

stack commented on HBASE-15638:
---

Thanks [~busbey]. Let me look at where we are making direct references to 
com.google.protobuf outside of hbase-protocol (apart from the protos in rest 
and spark, I think it's rpc that is the culprit). Let me see what I can move 
back, and how far I get.

On the uses-relocated-protobuf suggestion, let me see the list of items we'd 
need to disentangle and what would be involved. My first reaction is that a 
profile for some modules might be a little opaque in its workings, but let me 
turn it over as I bang my head here.

Thanks.






[jira] [Commented] (HBASE-15638) Shade protobuf

2016-04-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242452#comment-15242452
 ] 

stack commented on HBASE-15638:
---

Shade happens at the package stage... which is before unit test, and 
hbase-server depends on hbase-protocol, so, ok?

For asyncwal calling an HDFS method that expects a PB2.5 ByteString, I think we 
are good. We refer to com.google.protobuf.ByteString explicitly and we'll use 
the transitively included ByteString that was brought in by the hdfs jar (I 
think this will work).






[jira] [Commented] (HBASE-15638) Shade protobuf

2016-04-14 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242447#comment-15242447
 ] 

Duo Zhang commented on HBASE-15638:
---

Yes, let's only reference the classes in com.google.protobuf in hbase-protocol. 
The asyncwal is an exception, and the only problem may be that the unit tests 
will break if the surefire task is executed before the shade task? And I'm sure 
that this way you cannot execute unit tests in the stupid Eclipse IDE... So 
this may be an advantage of changing the imports directly in the source? To be 
compatible with some IDEs...

Thanks.






[jira] [Commented] (HBASE-15638) Shade protobuf

2016-04-14 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242448#comment-15242448
 ] 

Sean Busbey commented on HBASE-15638:
-

{quote}
bq. ... then turning that into a shaded module with relocated protobufs in 
place is pretty straightforward and lets the source reference the original 
names.
Pardon me. Isn't that what this patch is doing?
{quote}

Both of the patches posted have references to the shaded protobuf classes in 
e.g. hbase-client. That's the same fallout whether we do a dedicated artifact 
for the shaded protobuf version or roll it into hbase-protocol.

If all references to our internal use of protobuf are in the hbase-protocol 
module, then we needn't have any references to the relocated packages, because 
we can have the shade plugin take care of rewriting them just in that module 
while including the relocated classes that we use within the jar.

If we go the route of a module that relocates protobuf, we could make a 
profile in our project / top-level pom, "uses-relocated-protobuf". For those 
modules that activate it, we can then use the shade plugin to rewrite the 
references from the original protobuf to the shaded one. It would avoid moving 
classes into hbase-protocol and let us reference the original classes in 
source, but we'd need to move anything that has to deal with HDFS' protobuf 
out of modules that activate the profile.
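The shade-plugin rewriting discussed above is typically expressed as a relocation configuration along these lines. This is only a sketch: the shaded package name below is a placeholder of my choosing, not necessarily what HBase would pick.

```xml
<!-- Sketch: relocate protobuf inside the module that shades it.
     The shadedPattern is a placeholder, not HBase's actual choice. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>com.google.protobuf</pattern>
            <shadedPattern>org.apache.hadoop.hbase.shaded.com.google.protobuf</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

The shade plugin rewrites bytecode references to `com.google.protobuf` in the configured module while bundling the relocated classes into its jar, which is what lets source code keep the original imports.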






[jira] [Commented] (HBASE-15638) Shade protobuf

2016-04-14 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242440#comment-15242440
 ] 

Sean Busbey commented on HBASE-15638:
-

Sorry, I wasn't careful with my language in the prior message. It should have 
been "our and only our" protobuf in the hbase-protocol module.






[jira] [Commented] (HBASE-15638) Shade protobuf

2016-04-14 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242439#comment-15242439
 ] 

Sean Busbey commented on HBASE-15638:
-

I thought we were stuck with a reference to the HDFS protobuf in hbase-protocol 
still?






[jira] [Commented] (HBASE-14898) Correct Bloom filter documentation in the book

2016-04-14 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242438#comment-15242438
 ] 

Jerry He commented on HBASE-14898:
--

Thanks for the patch.
Could you also update this part in section 96.4.2. Enabling Bloom Filters
{noformat}
Valid values are NONE (the default), ROW, or ROWCOL. 
{noformat}
The default is ROW.
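As background on what these values buy: a Bloom filter answers "definitely not present" or "maybe present"; ROW keys the filter on the row alone, while ROWCOL keys it on row plus column qualifier. A toy stdlib sketch of the mechanism, illustrative only and not HBase's actual implementation:

```java
import java.util.BitSet;

// Toy Bloom filter; deterministic two-hash scheme for illustration only.
class ToyBloom {
  private final BitSet bits = new BitSet(1 << 16);

  private int[] hashes(String key) {
    int h1 = key.hashCode() & 0xFFFF;
    int h2 = (key.hashCode() * 31 + 17) & 0xFFFF;
    return new int[] { h1, h2 };
  }

  void put(String key) {
    for (int h : hashes(key)) bits.set(h);
  }

  // false means "definitely absent"; true means "maybe present".
  boolean mightContain(String key) {
    for (int h : hashes(key)) if (!bits.get(h)) return false;
    return true;
  }
}
```

With ROW keying you would put(row); with ROWCOL you would put(row + "/" + qualifier), so a ROWCOL filter can also rule out store files when only the qualifier differs, at the cost of a larger filter.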


> Correct Bloom filter documentation in the book
> --
>
> Key: HBASE-14898
> URL: https://issues.apache.org/jira/browse/HBASE-14898
> Project: HBase
>  Issue Type: Bug
>Reporter: Jerry He
>Assignee: yi liang
>Priority: Minor
> Attachments: bf.patch
>
>
> In section 96.4. Bloom Filters:
>  Since HBase 0.96, row-based Bloom filters are enabled by default. (HBASE-)  
> --> in *HBASE-8450*
> In section 94.4.3. Configuring Server-Wide Behavior of Bloom Filters: 
> io.hfile.bloom.enabled  --> *io.storefile.bloom.enabled*  Master switch to 
> enable Bloom filters
> io.hfile.bloom.max.fold  --> *io.storefile.bloom.max.fold*
> io.hfile.bloom.error.rate --> *io.storefile.bloom.error.rate*
> io.storefile.bloom.block.size --> *default is 128*1024 = 131072*
> These properties are probably not tuned usually, but should still be fixed in 
> the doc.





[jira] [Commented] (HBASE-15638) Shade protobuf

2016-04-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242437#comment-15242437
 ] 

stack commented on HBASE-15638:
---

bq. ... then turning that into a shaded module with relocated protobufs in 
place is pretty straightforward and lets the source reference the original 
names.

Pardon me. Isn't that what this patch is doing?

Let me move the REST and Spark module protos back into hbase-protocol.


> Shade protobuf
> --
>
> Key: HBASE-15638
> URL: https://issues.apache.org/jira/browse/HBASE-15638
> Project: HBase
>  Issue Type: Bug
>  Components: Protobufs
>Reporter: stack
> Attachments: 15638v2.patch, as.far.as.server.patch
>
>
> Shade protobufs so we can move to a different version without breaking the 
> world. We want to get up on pb3 because it has unsafe methods that allow us 
> save on copies; it also has some means of dealing with BBs so we can pass it 
> offheap DBBs. We'll probably want to change PB3 to open it up some more too 
> so we can stay offheap as we traverse PB. This issue comes of [~anoop.hbase] 
> and [~ram_krish]'s offheaping of the readpath work.
> This change is mostly straight-forward but there are some tricky bits:
>  # How to interface with HDFS? It wants its ByteStrings. Here in particular 
> in FanOutOneBlockAsyncDFSOutputSaslHelper:
> {code}
>   if (payload != null) {
> builder.setPayload(ByteString.copyFrom(payload));
>   }
> {code}
>  # [~busbey] also points out that we need to take care of endpoints done as 
> pb. Test at least.
> Let me raise this one on the dev list too.





[jira] [Updated] (HBASE-15650) Remove TimeRangeTracker as point of contention when many threads reading a StoreFile

2016-04-14 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15650:
--
Attachment: 15650v6.patch

Remove the TODO.

The TODO was me thinking we were 'calculating' the final state of the 
TimeRangeTracker twice: once when writing the memstore, and then again when 
flushing the file out.

I see now that this is not the case. It is cryptic, but we go out of our way to 
avoid doing this. In v3 or so of the patch I added comments elsewhere so no one 
ends up having the 'same' idea all over again.

This version removes the TODO.

> Remove TimeRangeTracker as point of contention when many threads reading a 
> StoreFile
> 
>
> Key: HBASE-15650
> URL: https://issues.apache.org/jira/browse/HBASE-15650
> Project: HBase
>  Issue Type: Bug
>  Components: Performance
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0
>
> Attachments: 15650.branch-1.patch, 15650.patch, 15650.patch, 
> 15650v2.patch, 15650v3.patch, 15650v4.patch, 15650v5.patch, 15650v6.patch, 
> Point-of-contention-on-random-read.png
>
>
> HBASE-12148 is about "Remove TimeRangeTracker as point of contention when 
> many threads writing a Store". It is also a point of contention when reading.





[jira] [Updated] (HBASE-14898) Correct Bloom filter documentation in the book

2016-04-14 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-14898:
-
Description: 
In section 96.4. Bloom Filters:

 Since HBase 0.96, row-based Bloom filters are enabled by default. (HBASE-)  
--> in *HBASE-8450*

In section 94.4.3. Configuring Server-Wide Behavior of Bloom Filters: 

io.hfile.bloom.enabled  --> *io.storefile.bloom.enabled*  Master switch to 
enable Bloom filters
io.hfile.bloom.max.fold  --> *io.storefile.bloom.max.fold*
io.hfile.bloom.error.rate --> *io.storefile.bloom.error.rate*
io.storefile.bloom.block.size --> *default is 128*1024 = 131072*

These properties are probably not tuned usually, but should still be fixed in 
the doc.

  was:
In section 94.4. Bloom Filters:

 Since HBase 0.96, row-based Bloom filters are enabled by default. (HBASE-)  
--> in *HBASE-8450*

In section 94.4.3. Configuring Server-Wide Behavior of Bloom Filters: 

io.hfile.bloom.enabled  --> *io.storefile.bloom.enabled*  Master switch to 
enable Bloom filters
io.hfile.bloom.max.fold  --> *io.storefile.bloom.max.fold*
io.hfile.bloom.error.rate --> *io.storefile.bloom.error.rate*
io.storefile.bloom.block.size --> *default is 128*1024 = 131072*

These properties are probably not tuned usually, but should still be fixed in 
the doc.


> Correct Bloom filter documentation in the book
> --
>
> Key: HBASE-14898
> URL: https://issues.apache.org/jira/browse/HBASE-14898
> Project: HBase
>  Issue Type: Bug
>Reporter: Jerry He
>Assignee: yi liang
>Priority: Minor
> Attachments: bf.patch
>
>
> In section 96.4. Bloom Filters:
>  Since HBase 0.96, row-based Bloom filters are enabled by default. (HBASE-)  
> --> in *HBASE-8450*
> In section 94.4.3. Configuring Server-Wide Behavior of Bloom Filters: 
> io.hfile.bloom.enabled  --> *io.storefile.bloom.enabled*  Master switch to 
> enable Bloom filters
> io.hfile.bloom.max.fold  --> *io.storefile.bloom.max.fold*
> io.hfile.bloom.error.rate --> *io.storefile.bloom.error.rate*
> io.storefile.bloom.block.size --> *default is 128*1024 = 131072*
> These properties are probably not tuned usually, but should still be fixed in 
> the doc.





[jira] [Commented] (HBASE-15638) Shade protobuf

2016-04-14 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242433#comment-15242433
 ] 

Sean Busbey commented on HBASE-15638:
-

It'll be awkward, but it makes it clear in the source when we mean to refer to 
"our" version of protobuf vs. some other project's version. It also keeps us 
from messing with any of our deployment dependencies.

on the other hand, if all of the references for "our" protobuf are in the 
hbase-protocol module, then turning that into a shaded module with relocated 
protobufs in place is pretty straightforward and lets the source reference the 
original names.
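For reference, the relocation described here is the standard maven-shade-plugin pattern; a hedged sketch of what it might look like in the hbase-protocol pom (the target package name is illustrative, not necessarily what the project settled on):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>com.google.protobuf</pattern>
            <!-- Illustrative target package for the relocated classes -->
            <shadedPattern>org.apache.hadoop.hbase.shaded.com.google.protobuf</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

With the relocation done inside the module that owns the generated classes, downstream source keeps referencing the original com.google.protobuf names.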

> Shade protobuf
> --
>
> Key: HBASE-15638
> URL: https://issues.apache.org/jira/browse/HBASE-15638
> Project: HBase
>  Issue Type: Bug
>  Components: Protobufs
>Reporter: stack
> Attachments: 15638v2.patch, as.far.as.server.patch
>
>
> Shade protobufs so we can move to a different version without breaking the 
> world. We want to get up on pb3 because it has unsafe methods that allow us 
> save on copies; it also has some means of dealing with BBs so we can pass it 
> offheap DBBs. We'll probably want to change PB3 to open it up some more too 
> so we can stay offheap as we traverse PB. This issue comes of [~anoop.hbase] 
> and [~ram_krish]'s offheaping of the readpath work.
> This change is mostly straight-forward but there are some tricky bits:
>  # How to interface with HDFS? It wants its ByteStrings. Here in particular 
> in FanOutOneBlockAsyncDFSOutputSaslHelper:
> {code}
>   if (payload != null) {
> builder.setPayload(ByteString.copyFrom(payload));
>   }
> {code}
>  # [~busbey] also points out that we need to take care of endpoints done as 
> pb. Test at least.
> Let me raise this one on the dev list too.





[jira] [Commented] (HBASE-15650) Remove TimeRangeTracker as point of contention when many threads reading a StoreFile

2016-04-14 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242432#comment-15242432
 ] 

Anoop Sam John commented on HBASE-15650:


+1 Great...

{code}
 public void includeTimestamp(final Cell cell) {
92  // TODO: Why is this necessary? We already did this when we added 
the Cells to the memstore.
93  // Won't this run-through just do nothing except slow us down?
94  includeTimestamp(cell.getTimestamp());
{code}
I see this TODO added. When a Cell is added to the memstore, we call 
Segment#updateMetaInfo(Cell toAdd, long s), which calls this method on the TRT, 
passing the Cell. So yes, when we add a cell to the memstore, we already go 
through this method. I think there is no issue here, and it would be better to 
remove this TODO as well.

> Remove TimeRangeTracker as point of contention when many threads reading a 
> StoreFile
> 
>
> Key: HBASE-15650
> URL: https://issues.apache.org/jira/browse/HBASE-15650
> Project: HBase
>  Issue Type: Bug
>  Components: Performance
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0
>
> Attachments: 15650.branch-1.patch, 15650.patch, 15650.patch, 
> 15650v2.patch, 15650v3.patch, 15650v4.patch, 15650v5.patch, 
> Point-of-contention-on-random-read.png
>
>
> HBASE-12148 is about "Remove TimeRangeTracker as point of contention when 
> many threads writing a Store". It is also a point of contention when reading.





[jira] [Commented] (HBASE-15406) Split / merge switch left disabled after early termination of hbck

2016-04-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242430#comment-15242430
 ] 

stack commented on HBASE-15406:
---

bq. is it OK to change the Admin API on branches which have not been publicly released?

Yes as long as the method is new.

I'm +1 on this patch given the Admin change is on a method that has not been in 
a release yet.  Go for it [~chenheng]

> Split / merge switch left disabled after early termination of hbck
> --
>
> Key: HBASE-15406
> URL: https://issues.apache.org/jira/browse/HBASE-15406
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Heng Chen
>Priority: Critical
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-15406.patch, HBASE-15406.v1.patch, 
> HBASE-15406_v1.patch, HBASE-15406_v2.patch, test.patch, wip.patch
>
>
> This was what I did on cluster with 1.4.0-SNAPSHOT built Thursday:
> Run 'hbase hbck -disableSplitAndMerge' on gateway node of the cluster
> Terminate hbck early
> Enter hbase shell where I observed:
> {code}
> hbase(main):001:0> splitormerge_enabled 'SPLIT'
> false
> 0 row(s) in 0.3280 seconds
> hbase(main):002:0> splitormerge_enabled 'MERGE'
> false
> 0 row(s) in 0.0070 seconds
> {code}
> Expectation is that the split / merge switches should be restored to default 
> value after hbck exits.





[jira] [Updated] (HBASE-15650) Remove TimeRangeTracker as point of contention when many threads reading a StoreFile

2016-04-14 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15650:
--
Attachment: 15650v5.patch

Address the great find by [~anoop.hbase]

> Remove TimeRangeTracker as point of contention when many threads reading a 
> StoreFile
> 
>
> Key: HBASE-15650
> URL: https://issues.apache.org/jira/browse/HBASE-15650
> Project: HBase
>  Issue Type: Bug
>  Components: Performance
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0
>
> Attachments: 15650.branch-1.patch, 15650.patch, 15650.patch, 
> 15650v2.patch, 15650v3.patch, 15650v4.patch, 15650v5.patch, 
> Point-of-contention-on-random-read.png
>
>
> HBASE-12148 is about "Remove TimeRangeTracker as point of contention when 
> many threads writing a Store". It is also a point of contention when reading.





[jira] [Commented] (HBASE-15406) Split / merge switch left disabled after early termination of hbck

2016-04-14 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242425#comment-15242425
 ] 

Heng Chen commented on HBASE-15406:
---

[~stack] The related Admin API change was introduced in HBASE-15128, and this 
API only exists on branch-1.3+. Is it OK to change the Admin API on branches 
that have not yet been publicly released?

> Split / merge switch left disabled after early termination of hbck
> --
>
> Key: HBASE-15406
> URL: https://issues.apache.org/jira/browse/HBASE-15406
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Heng Chen
>Priority: Critical
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-15406.patch, HBASE-15406.v1.patch, 
> HBASE-15406_v1.patch, HBASE-15406_v2.patch, test.patch, wip.patch
>
>
> This was what I did on cluster with 1.4.0-SNAPSHOT built Thursday:
> Run 'hbase hbck -disableSplitAndMerge' on gateway node of the cluster
> Terminate hbck early
> Enter hbase shell where I observed:
> {code}
> hbase(main):001:0> splitormerge_enabled 'SPLIT'
> false
> 0 row(s) in 0.3280 seconds
> hbase(main):002:0> splitormerge_enabled 'MERGE'
> false
> 0 row(s) in 0.0070 seconds
> {code}
> Expectation is that the split / merge switches should be restored to default 
> value after hbck exits.





[jira] [Commented] (HBASE-15650) Remove TimeRangeTracker as point of contention when many threads reading a StoreFile

2016-04-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242426#comment-15242426
 ] 

stack commented on HBASE-15650:
---

Whoa. Excellent. A copy/paste where I failed to remove this very important bit.

> Remove TimeRangeTracker as point of contention when many threads reading a 
> StoreFile
> 
>
> Key: HBASE-15650
> URL: https://issues.apache.org/jira/browse/HBASE-15650
> Project: HBase
>  Issue Type: Bug
>  Components: Performance
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0
>
> Attachments: 15650.branch-1.patch, 15650.patch, 15650.patch, 
> 15650v2.patch, 15650v3.patch, 15650v4.patch, 
> Point-of-contention-on-random-read.png
>
>
> HBASE-12148 is about "Remove TimeRangeTracker as point of contention when 
> many threads writing a Store". It is also a point of contention when reading.





[jira] [Commented] (HBASE-15638) Shade protobuf

2016-04-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242422#comment-15242422
 ] 

stack commented on HBASE-15638:
---

Thanks [~busbey]

Does that mean you think this patch's approach, where all modules explicitly 
reference the relocated pb, is the way to go?

> Shade protobuf
> --
>
> Key: HBASE-15638
> URL: https://issues.apache.org/jira/browse/HBASE-15638
> Project: HBase
>  Issue Type: Bug
>  Components: Protobufs
>Reporter: stack
> Attachments: 15638v2.patch, as.far.as.server.patch
>
>
> Shade protobufs so we can move to a different version without breaking the 
> world. We want to get up on pb3 because it has unsafe methods that allow us 
> save on copies; it also has some means of dealing with BBs so we can pass it 
> offheap DBBs. We'll probably want to change PB3 to open it up some more too 
> so we can stay offheap as we traverse PB. This issue comes of [~anoop.hbase] 
> and [~ram_krish]'s offheaping of the readpath work.
> This change is mostly straight-forward but there are some tricky bits:
>  # How to interface with HDFS? It wants its ByteStrings. Here in particular 
> in FanOutOneBlockAsyncDFSOutputSaslHelper:
> {code}
>   if (payload != null) {
> builder.setPayload(ByteString.copyFrom(payload));
>   }
> {code}
>  # [~busbey] also points out that we need to take care of endpoints done as 
> pb. Test at least.
> Let me raise this one on the dev list too.





[jira] [Commented] (HBASE-15638) Shade protobuf

2016-04-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242420#comment-15242420
 ] 

stack commented on HBASE-15638:
---

I don't think I can do this, [~sergey.soldatov]. For sure the later PB version 
of ByteString is different. I think there could be interesting issues passing a 
later ByteString to HDFS than it expects.

When I do top-level relocation, which jar ends up w/ the relocated pb classes? 
I can try it and find out for myself...

> Shade protobuf
> --
>
> Key: HBASE-15638
> URL: https://issues.apache.org/jira/browse/HBASE-15638
> Project: HBase
>  Issue Type: Bug
>  Components: Protobufs
>Reporter: stack
> Attachments: 15638v2.patch, as.far.as.server.patch
>
>
> Shade protobufs so we can move to a different version without breaking the 
> world. We want to get up on pb3 because it has unsafe methods that allow us 
> save on copies; it also has some means of dealing with BBs so we can pass it 
> offheap DBBs. We'll probably want to change PB3 to open it up some more too 
> so we can stay offheap as we traverse PB. This issue comes of [~anoop.hbase] 
> and [~ram_krish]'s offheaping of the readpath work.
> This change is mostly straight-forward but there are some tricky bits:
>  # How to interface with HDFS? It wants its ByteStrings. Here in particular 
> in FanOutOneBlockAsyncDFSOutputSaslHelper:
> {code}
>   if (payload != null) {
> builder.setPayload(ByteString.copyFrom(payload));
>   }
> {code}
>  # [~busbey] also points out that we need to take care of endpoints done as 
> pb. Test at least.
> Let me raise this one on the dev list too.





[jira] [Commented] (HBASE-15650) Remove TimeRangeTracker as point of contention when many threads reading a StoreFile

2016-04-14 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242418#comment-15242418
 ] 

Anoop Sam John commented on HBASE-15650:


bq. synchronized boolean includesTimeRange(final TimeRange tr)
Do we need this to be synchronized?
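One common way to answer "no" is to keep the min and max timestamps in atomics updated with CAS loops, so readers never block. A minimal sketch of that idea, assuming nothing about the attached patch:

```java
import java.util.concurrent.atomic.AtomicLong;

// Minimal lock-free min/max timestamp tracker; a sketch of the technique,
// not the code in the patch under review.
class NonBlockingTimeRangeTracker {
  private final AtomicLong min = new AtomicLong(Long.MAX_VALUE);
  private final AtomicLong max = new AtomicLong(Long.MIN_VALUE);

  void includeTimestamp(long ts) {
    long cur;
    // CAS loops: retry only while another writer installed a better value first.
    while (ts < (cur = min.get()) && !min.compareAndSet(cur, ts)) { }
    while (ts > (cur = max.get()) && !max.compareAndSet(cur, ts)) { }
  }

  // Readers just load two atomics; no synchronized keyword needed.
  boolean includesTimeRange(long from, long to) {
    return max.get() >= from && min.get() <= to;
  }
}
```

The read path here is two volatile loads, which removes the monitor that many concurrent readers would otherwise contend on.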

> Remove TimeRangeTracker as point of contention when many threads reading a 
> StoreFile
> 
>
> Key: HBASE-15650
> URL: https://issues.apache.org/jira/browse/HBASE-15650
> Project: HBase
>  Issue Type: Bug
>  Components: Performance
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0
>
> Attachments: 15650.branch-1.patch, 15650.patch, 15650.patch, 
> 15650v2.patch, 15650v3.patch, 15650v4.patch, 
> Point-of-contention-on-random-read.png
>
>
> HBASE-12148 is about "Remove TimeRangeTracker as point of contention when 
> many threads writing a Store". It is also a point of contention when reading.





[jira] [Commented] (HBASE-15406) Split / merge switch left disabled after early termination of hbck

2016-04-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242415#comment-15242415
 ] 

stack commented on HBASE-15406:
---

Hey [~chenheng], thanks for working on this one.

You get the bit that our Admin API is public and that you can't change it? You 
can add methods but not change existing ones.

I'm talking about the addition of the param here

   final boolean skipLock,

Do an override instead?

Otherwise patch looks good.



> Split / merge switch left disabled after early termination of hbck
> --
>
> Key: HBASE-15406
> URL: https://issues.apache.org/jira/browse/HBASE-15406
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Heng Chen
>Priority: Critical
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-15406.patch, HBASE-15406.v1.patch, 
> HBASE-15406_v1.patch, HBASE-15406_v2.patch, test.patch, wip.patch
>
>
> This was what I did on cluster with 1.4.0-SNAPSHOT built Thursday:
> Run 'hbase hbck -disableSplitAndMerge' on gateway node of the cluster
> Terminate hbck early
> Enter hbase shell where I observed:
> {code}
> hbase(main):001:0> splitormerge_enabled 'SPLIT'
> false
> 0 row(s) in 0.3280 seconds
> hbase(main):002:0> splitormerge_enabled 'MERGE'
> false
> 0 row(s) in 0.0070 seconds
> {code}
> Expectation is that the split / merge switches should be restored to default 
> value after hbck exits.





[jira] [Commented] (HBASE-15650) Remove TimeRangeTracker as point of contention when many threads reading a StoreFile

2016-04-14 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242402#comment-15242402
 ] 

stack commented on HBASE-15650:
---

Is that a +1 on commit [~anoop.hbase]?

> Remove TimeRangeTracker as point of contention when many threads reading a 
> StoreFile
> 
>
> Key: HBASE-15650
> URL: https://issues.apache.org/jira/browse/HBASE-15650
> Project: HBase
>  Issue Type: Bug
>  Components: Performance
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0
>
> Attachments: 15650.branch-1.patch, 15650.patch, 15650.patch, 
> 15650v2.patch, 15650v3.patch, 15650v4.patch, 
> Point-of-contention-on-random-read.png
>
>
> HBASE-12148 is about "Remove TimeRangeTracker as point of contention when 
> many threads writing a Store". It is also a point of contention when reading.





[jira] [Updated] (HBASE-15650) Remove TimeRangeTracker as point of contention when many threads reading a StoreFile

2016-04-14 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15650:
--
Attachment: 15650v4.patch

> Remove TimeRangeTracker as point of contention when many threads reading a 
> StoreFile
> 
>
> Key: HBASE-15650
> URL: https://issues.apache.org/jira/browse/HBASE-15650
> Project: HBase
>  Issue Type: Bug
>  Components: Performance
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0
>
> Attachments: 15650.branch-1.patch, 15650.patch, 15650.patch, 
> 15650v2.patch, 15650v3.patch, 15650v4.patch, 
> Point-of-contention-on-random-read.png
>
>
> HBASE-12148 is about "Remove TimeRangeTracker as point of contention when 
> many threads writing a Store". It is also a point of contention when reading.





[jira] [Commented] (HBASE-15644) Add maven-scala-plugin for scaladoc

2016-04-14 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242395#comment-15242395
 ] 

Sean Busbey commented on HBASE-15644:
-

+1 I'll push this soon, unless someone else has any feedback.

/cc [~misty] since this alters the project's reporting section.

> Add maven-scala-plugin for scaladoc
> ---
>
> Key: HBASE-15644
> URL: https://issues.apache.org/jira/browse/HBASE-15644
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Appy
> Attachments: HBASE-15644.1.patch, bogus-scala-change.patch, 
> scala-tools.patch
>
>
> Added scala-tools.org to repository (as a side effect, all common artifacts 
> get downloaded twice now, once from apache repo and once from scala-tools)
> This fixes scala:doc precommit.
> The patch 'bogus-scala-change' adds a blank line to a scala file to trigger 
> scala:doc precommit. As expected, the target failed for master and passed for 
> the patch.





[jira] [Assigned] (HBASE-15406) Split / merge switch left disabled after early termination of hbck

2016-04-14 Thread Heng Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Heng Chen reassigned HBASE-15406:
-

Assignee: Heng Chen

> Split / merge switch left disabled after early termination of hbck
> --
>
> Key: HBASE-15406
> URL: https://issues.apache.org/jira/browse/HBASE-15406
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Heng Chen
>Priority: Critical
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-15406.patch, HBASE-15406.v1.patch, 
> HBASE-15406_v1.patch, HBASE-15406_v2.patch, test.patch, wip.patch
>
>
> This was what I did on cluster with 1.4.0-SNAPSHOT built Thursday:
> Run 'hbase hbck -disableSplitAndMerge' on gateway node of the cluster
> Terminate hbck early
> Enter hbase shell where I observed:
> {code}
> hbase(main):001:0> splitormerge_enabled 'SPLIT'
> false
> 0 row(s) in 0.3280 seconds
> hbase(main):002:0> splitormerge_enabled 'MERGE'
> false
> 0 row(s) in 0.0070 seconds
> {code}
> Expectation is that the split / merge switches should be restored to default 
> value after hbck exits.





[jira] [Commented] (HBASE-15406) Split / merge switch left disabled after early termination of hbck

2016-04-14 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242390#comment-15242390
 ] 

Heng Chen commented on HBASE-15406:
---

If there are no other suggestions, I will commit it today.

> Split / merge switch left disabled after early termination of hbck
> --
>
> Key: HBASE-15406
> URL: https://issues.apache.org/jira/browse/HBASE-15406
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Critical
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-15406.patch, HBASE-15406.v1.patch, 
> HBASE-15406_v1.patch, HBASE-15406_v2.patch, test.patch, wip.patch
>
>
> This was what I did on cluster with 1.4.0-SNAPSHOT built Thursday:
> Run 'hbase hbck -disableSplitAndMerge' on gateway node of the cluster
> Terminate hbck early
> Enter hbase shell where I observed:
> {code}
> hbase(main):001:0> splitormerge_enabled 'SPLIT'
> false
> 0 row(s) in 0.3280 seconds
> hbase(main):002:0> splitormerge_enabled 'MERGE'
> false
> 0 row(s) in 0.0070 seconds
> {code}
> Expectation is that the split / merge switches should be restored to default 
> value after hbck exits.





[jira] [Assigned] (HBASE-15658) RegionServerCallable / RpcRetryingCaller clear meta cache on retries

2016-04-14 Thread Gary Helmling (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Helmling reassigned HBASE-15658:
-

Assignee: Gary Helmling

> RegionServerCallable / RpcRetryingCaller clear meta cache on retries
> 
>
> Key: HBASE-15658
> URL: https://issues.apache.org/jira/browse/HBASE-15658
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Affects Versions: 1.2.1
>Reporter: Gary Helmling
>Assignee: Gary Helmling
>Priority: Critical
> Fix For: 1.3.0
>
>
> When RpcRetryingCaller.callWithRetries() attempts a retry, it calls 
> RetryingCallable.prepare(tries != 0).  For RegionServerCallable (and probably 
> others), this will wind up calling 
> RegionLocator.getRegionLocation(reload=true), which will drop the meta cache 
> for the given region and always go back to meta.
> This is kind of silly, since in the case of exceptions, we already call 
> RetryingCallable.throwable(), which goes to great pains to only refresh the 
> meta cache when necessary.  Since we are already doing this on failure, I 
> don't really understand why we are doing duplicate work to refresh the meta 
> cache on prepare() at all.





[jira] [Commented] (HBASE-15638) Shade protobuf

2016-04-14 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242278#comment-15242278
 ] 

Duo Zhang commented on HBASE-15638:
---

I mean, can we use the shade plugin when building artifacts, without changing 
the source code?

> Shade protobuf
> --
>
> Key: HBASE-15638
> URL: https://issues.apache.org/jira/browse/HBASE-15638
> Project: HBase
>  Issue Type: Bug
>  Components: Protobufs
>Reporter: stack
> Attachments: 15638v2.patch, as.far.as.server.patch
>
>
> Shade protobufs so we can move to a different version without breaking the 
> world. We want to get up on pb3 because it has unsafe methods that allow us 
> save on copies; it also has some means of dealing with BBs so we can pass it 
> offheap DBBs. We'll probably want to change PB3 to open it up some more too 
> so we can stay offheap as we traverse PB. This issue comes of [~anoop.hbase] 
> and [~ram_krish]'s offheaping of the readpath work.
> This change is mostly straight-forward but there are some tricky bits:
>  # How to interface with HDFS? It wants its ByteStrings. Here in particular 
> in FanOutOneBlockAsyncDFSOutputSaslHelper:
> {code}
>   if (payload != null) {
> builder.setPayload(ByteString.copyFrom(payload));
>   }
> {code}
>  # [~busbey] also points out that we need to take care of endpoints done as 
> pb. Test at least.
> Let me raise this one on the dev list too.





[jira] [Created] (HBASE-15658) RegionServerCallable / RpcRetryingCaller clear meta cache on retries

2016-04-14 Thread Gary Helmling (JIRA)
Gary Helmling created HBASE-15658:
-

 Summary: RegionServerCallable / RpcRetryingCaller clear meta cache 
on retries
 Key: HBASE-15658
 URL: https://issues.apache.org/jira/browse/HBASE-15658
 Project: HBase
  Issue Type: Sub-task
  Components: Client
Affects Versions: 1.2.1
Reporter: Gary Helmling
Priority: Critical


When RpcRetryingCaller.callWithRetries() attempts a retry, it calls 
RetryingCallable.prepare(tries != 0).  For RegionServerCallable (and probably 
others), this will wind up calling 
RegionLocator.getRegionLocation(reload=true), which will drop the meta cache 
for the given region and always go back to meta.

This is kind of silly, since in the case of exceptions, we already call 
RetryingCallable.throwable(), which goes to great pains to only refresh the 
meta cache when necessary.  Since we are already doing this on failure, I don't 
really understand why we are doing duplicate work to refresh the meta cache on 
prepare() at all.
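
The duplicated cache-dropping described above can be illustrated with a small self-contained sketch. The class and method names below are invented for illustration and are not HBase's actual API; the point is only the difference between reloading on every retry versus keeping the cache and letting the failure handler decide:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: a region-location cache queried by a retrying caller.
public class RetrySketch {
    static Map<String, String> locationCache = new HashMap<>();
    static int metaLookups = 0;

    // Look up a region location; reload=true drops the cached entry first.
    static String locate(String region, boolean reload) {
        if (reload) {
            locationCache.remove(region);
        }
        return locationCache.computeIfAbsent(region, r -> {
            metaLookups++; // simulated round trip to meta
            return "server-for-" + r;
        });
    }

    public static void main(String[] args) {
        // Current behavior: prepare(tries != 0) reloads on every retry.
        for (int tries = 0; tries < 3; tries++) {
            locate("r1", tries != 0);
        }
        System.out.println("lookups with unconditional reload: " + metaLookups);

        metaLookups = 0;
        locationCache.clear();
        // Proposed behavior: keep the cache; only throwable() would evict.
        for (int tries = 0; tries < 3; tries++) {
            locate("r1", false);
        }
        System.out.println("lookups with cache kept: " + metaLookups);
    }
}
```

With three attempts, the unconditional-reload path goes to meta three times while the cache-preserving path goes once, which is the duplicate work the issue calls out.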



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15622) Superusers does not consider the keytab credentials

2016-04-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242270#comment-15242270
 ] 

Hadoop QA commented on HBASE-15622:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
52s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 5m 
22s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
20s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
22s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 5m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
35m 33s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 135m 39s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 200m 22s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hbase.master.procedure.TestMasterFailoverWithProcedures |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12798829/HBASE-15622-v0.patch |
| JIRA Issue | HBASE-15622 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf910.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 7efb9ed |
| Default Java | 1.7.0_79 |

[jira] [Commented] (HBASE-15650) Remove TimeRangeTracker as point of contention when many threads reading a StoreFile

2016-04-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242253#comment-15242253
 ] 

Hadoop QA commented on HBASE-15650:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
14s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 15s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 
23s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
34s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
2s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 14s 
{color} | {color:red} hbase-common: patch generated 1 new + 8 unchanged - 1 
fixed = 9 total (was 9) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 8s 
{color} | {color:red} hbase-server: patch generated 1 new + 8 unchanged - 1 
fixed = 9 total (was 9) {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 50s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 46s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 36s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 51s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestStoreFile |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/127988

[jira] [Commented] (HBASE-15506) FSDataOutputStream.write() allocates new byte buffer on each operation

2016-04-14 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242227#comment-15242227
 ] 

Vladimir Rodionov commented on HBASE-15506:
---

I have observed a 5GB/sec allocation rate per RS during an AggregationClient run 
(a surprisingly unoptimized implementation, btw). Minor GC ran every 100ms. Guess 
what the other operation latencies were during that time? 

> FSDataOutputStream.write() allocates new byte buffer on each operation
> --
>
> Key: HBASE-15506
> URL: https://issues.apache.org/jira/browse/HBASE-15506
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>
> Deep inside stack trace in DFSOutputStream.createPacket.
> This should be opened in HDFS. This JIRA is to track HDFS work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14898) Correct Bloom filter documentation in the book

2016-04-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242226#comment-15242226
 ] 

Hadoop QA commented on HBASE-14898:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
59s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
3s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 41s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 35s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
5s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 27s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 23s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 18s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 112m 19s 
{color} | {color:green} root in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 156m 23s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12798836/bf.patch |
| JIRA Issue | HBASE-14898 |
| Optional Tests |  asflicense  javac  javadoc  unit  |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 7efb9ed |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  /home/jenkins/tools/java/jdk1.8.0:1.8.0 
/usr/local/jenkins/java/jdk1.7.0_79:1.7.0_79 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/1424/testReport/ |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/1424/console |
| Powered by | Apache Yetus 0.2.1   http://yetus.apache.org |


This message was automatically generated.



> Correct Bloom filter documentation in the book
> --
>
> Key: HBASE-14898
> URL: https://issues.apache.org/jira/browse/HBASE-14898
> Project: HBase
>  Issue Type: Bug
>Reporter: Jerry He
>Assignee: yi liang
>Priority: Minor
> Attachments: bf.patch
>
>
> In section 94.4. Bloom Filters:
>  Since HBase 0.96, row-based Bloom filters are enabled by default. (HBASE-)  
> --> in *HBASE-8450*
> In section 94.4.3. Configuring Server-Wide Behavior of Bloom Filters: 
> io.hfile.bloom.enabled  --> *io.storefile.bloom.enabled*  Master switch to 
> enable Bloom filters
> io.hfile.bloom.max.fold  --> *io.storefile.bloom.max.fold*
> io.hfile.bloom.error.rate --> *io.storefile.bloom.error.rate*
> io.storefile.bloom.block.size --> *default is 128*1024 = 131072*
> These properties are probably not tuned usually, but should still be fixed in 
> the doc.
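
Using the corrected property names above, a server-wide configuration fragment would look like the following. This is an illustrative sketch for hbase-site.xml; the values shown are the defaults as described in this issue (the error rate is an assumed default), not a tuning recommendation:

```xml
<!-- Hypothetical hbase-site.xml fragment using the corrected
     server-wide Bloom filter property names. -->
<property>
  <name>io.storefile.bloom.enabled</name>
  <value>true</value> <!-- master switch to enable Bloom filters -->
</property>
<property>
  <name>io.storefile.bloom.error.rate</name>
  <value>0.01</value> <!-- target false-positive rate; assumed default -->
</property>
<property>
  <name>io.storefile.bloom.block.size</name>
  <value>131072</value> <!-- 128*1024, the default noted above -->
</property>
```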



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15506) FSDataOutputStream.write() allocates new byte buffer on each operation

2016-04-14 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242225#comment-15242225
 ] 

Vladimir Rodionov commented on HBASE-15506:
---

{quote}
I am only saying that a statement like "heap allocations are bad" is simply not 
generally true.
{quote}

I can't concur with you here. Frequent allocations -> frequent minor GC 
collections -> frequent stop-the-world pauses - bad 99% of the time. 
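
As a back-of-the-envelope illustration of the allocation-pressure argument (the counts and buffer size are illustrative, not measurements from HBase or HDFS):

```java
// Sketch: garbage produced by allocating a fresh buffer on every write
// versus reusing a single buffer across the run.
public class BufferReuseSketch {
    public static void main(String[] args) {
        int writes = 100_000;
        int size = 64 * 1024;

        // Per-write allocation: each call creates garbage the minor GC
        // must later collect.
        long perWriteBytes = 0;
        for (int i = 0; i < writes; i++) {
            byte[] buf = new byte[size]; // fresh allocation every write
            perWriteBytes += buf.length;
        }

        // Reused buffer: one allocation for the whole run.
        byte[] reused = new byte[size];
        long reusedBytes = reused.length;

        System.out.println("garbage with per-write allocation: " + perWriteBytes);
        System.out.println("garbage with reuse: " + reusedBytes);
    }
}
```

Roughly 6.5 GB of short-lived garbage versus 64 KB for the same amount of work, which is the kind of gap that drives minor-GC frequency up.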

> FSDataOutputStream.write() allocates new byte buffer on each operation
> --
>
> Key: HBASE-15506
> URL: https://issues.apache.org/jira/browse/HBASE-15506
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>
> Deep inside stack trace in DFSOutputStream.createPacket.
> This should be opened in HDFS. This JIRA is to track HDFS work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15187) Integrate CSRF prevention filter to REST gateway

2016-04-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242204#comment-15242204
 ] 

Hadoop QA commented on HBASE-15187:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 41s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
43s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 39s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 
24s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
39s {color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 8s 
{color} | {color:red} hbase-rest in master has 2 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 49s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
0s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 31s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 3 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
42m 40s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 55s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 42s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 5s 
{color} | {color:green} hbase-rest in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
38s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 90m 18s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache

[jira] [Commented] (HBASE-15654) Optimize client's MetaCache handling

2016-04-14 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242188#comment-15242188
 ] 

Mikhail Antonov commented on HBASE-15654:
-

[~vrodionov] yeah.. it's closely related. While my goal in this jira is to 
reduce the request load on meta that a swarm of clients can cause, I also want 
to make sure the performance of MetaCache stays intact, especially as I'm 
planning to change some locking around lookups.

Do you think we should move it as subtask here?

> Optimize client's MetaCache handling
> 
>
> Key: HBASE-15654
> URL: https://issues.apache.org/jira/browse/HBASE-15654
> Project: HBase
>  Issue Type: Umbrella
>  Components: Client
>Affects Versions: 1.3.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
> Fix For: 1.3.0
>
>
> This is an umbrella jira to track all individual issues, bugfixes and small 
> optimizations around MetaCache (region locations cache) in the client. 
> Motivation is that under load one can see spikes in the number of 
> requests going to meta - reaching tens of thousands of requests per second.
> That covers issues where we clear entries from the location cache unnecessarily, as 
> well as where we do more lookups than necessary when entries are legitimately 
> evicted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-15539) HBase Client region location is expensive

2016-04-14 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov reassigned HBASE-15539:
---

Assignee: Mikhail Antonov

> HBase Client region location is expensive 
> --
>
> Key: HBASE-15539
> URL: https://issues.apache.org/jira/browse/HBASE-15539
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Vladimir Rodionov
>Assignee: Mikhail Antonov
> Fix For: 2.0.0
>
>
> ConnectionImplementation.locateRegion and MetaCache.getTableLocations are hot 
> spots in a client.   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15650) Remove TimeRangeTracker as point of contention when many threads reading a StoreFile

2016-04-14 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-15650:
--
Attachment: 15650v3.patch

Fix tests.

Add a bunch of doc on how, at write time, we pass the memstore TimeRangeTracker 
down to the Writer so it doesn't have to do the calculation (it only calculates 
when no memstore is around, i.e. at compaction time).

> Remove TimeRangeTracker as point of contention when many threads reading a 
> StoreFile
> 
>
> Key: HBASE-15650
> URL: https://issues.apache.org/jira/browse/HBASE-15650
> Project: HBase
>  Issue Type: Bug
>  Components: Performance
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0
>
> Attachments: 15650.branch-1.patch, 15650.patch, 15650.patch, 
> 15650v2.patch, 15650v3.patch, Point-of-contention-on-random-read.png
>
>
> HBASE-12148 is about "Remove TimeRangeTracker as point of contention when 
> many threads writing a Store". It is also a point of contention when reading.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15405) Synchronize final results logging single thread in PE, fix wrong defaults in help message

2016-04-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242154#comment-15242154
 ] 

Hudson commented on HBASE-15405:


FAILURE: Integrated in HBase-1.4 #89 (See 
[https://builds.apache.org/job/HBase-1.4/89/])
HBASE-15405 Fix PE logging and wrong defaults in help message. (stack: rev 
d378975351fc093c5e31da6266dbd5bf56812914)
* hbase-server/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/util/YammerHistogramUtils.java


> Synchronize final results logging single thread in PE, fix wrong defaults in 
> help message
> -
>
> Key: HBASE-15405
> URL: https://issues.apache.org/jira/browse/HBASE-15405
> Project: HBase
>  Issue Type: Bug
>  Components: Performance
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 0.98.19, 1.4.0
>
> Attachments: HBASE-15405-branch-1.patch, HBASE-15405-master-v2.patch, 
> HBASE-15405-master-v3.patch, HBASE-15405-master-v4 (1).patch, 
> HBASE-15405-master-v4 (1).patch, HBASE-15405-master-v4.patch, 
> HBASE-15405-master-v4.patch, HBASE-15405-master-v5.patch, 
> HBASE-15405-master-v6.patch, HBASE-15405-master.patch, 
> HBASE-15405-master.patch
>
>
> Corrects wrong default values for a few options in the help message.
> Final stats from multiple clients are intermingled, making them hard to 
> understand. Also, the logged stats aren't very machine readable, which 
> matters for a daily perf-testing rig that scrapes logs for results.
> Example of logs before the change.
> {noformat}
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest latency log 
> (microseconds), on 1048576 measures
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest latency log 
> (microseconds), on 1048576 measures
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest latency log 
> (microseconds), on 1048576 measures
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest latency log 
> (microseconds), on 1048576 measures
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest latency log 
> (microseconds), on 1048576 measures
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest latency log 
> (microseconds), on 1048576 measures
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest latency log 
> (microseconds), on 1048576 measures
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: 0/1048570/1048576, 
> latency mean=953.98, min=359.00, max=324050.00, stdDev=851.82, 95th=1368.00, 
> 99th=1625.00
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: 0/1048570/1048576, 
> latency mean=953.92, min=356.00, max=323394.00, stdDev=817.55, 95th=1370.00, 
> 99th=1618.00
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: 0/1048570/1048576, 
> latency mean=953.98, min=367.00, max=322745.00, stdDev=840.43, 95th=1369.00, 
> 99th=1622.00
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest latency log 
> (microseconds), on 1048576 measures
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest latency log 
> (microseconds), on 1048576 measures
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest latency log 
> (microseconds), on 1048576 measures
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest Min  = 
> 375.0
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest Min  = 
> 363.0
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest Avg  = 
> 953.6624126434326
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest Avg  = 
> 953.4124526977539
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest StdDev   = 
> 781.3929776087633
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest StdDev   = 
> 742.8027916717297
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest 50th = 
> 894.0
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest 50th = 
> 894.0
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest 75th = 
> 1070.0
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest 75th = 
> 1071.0
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest 95th = 
> 1369.0
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest 95th = 
> 1369.0
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest 99th = 
> 1623.0
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest 99th = 
> 1624.0
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest Min  = 
> 372.0
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest 99.9th   = 
> 301

[jira] [Commented] (HBASE-15622) Superusers does not consider the keytab credentials

2016-04-14 Thread Gary Helmling (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242153#comment-15242153
 ] 

Gary Helmling commented on HBASE-15622:
---

Hmm, yeah doesn't look like any of our MiniKdc tests actually spin up a full 
mini-cluster.

+1 on the fix.

> Superusers does not consider the keytab credentials
> ---
>
> Key: HBASE-15622
> URL: https://issues.apache.org/jira/browse/HBASE-15622
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.0, 1.2.0, 1.3.0, 1.1.4, 0.98.16.1
>Reporter: Matteo Bertozzi
>Priority: Critical
> Fix For: 2.0.0, 1.3.0, 0.98.19, 1.1.5, 1.2.2
>
> Attachments: HBASE-15622-v0.patch
>
>
> After HBASE-13755 the superuser we add by default (the process running hbase) 
> does not take the keytab credential into consideration.
> We have an env with the process user being hbase and the keytab being 
> hbasefoo.
> from Superusers TRACE I see, the hbase being picked up
> {noformat}
> TRACE Superusers: Current user name is hbase
> {noformat}
> from the RS audit I see the hbasefoo making requests
> {noformat}
> "allowed":true,"serviceName":"HBASE-1","username":"hbasefoo...
> {noformat}
> looking at the code in HRegionServer we do 
> {code}
> public HRegionServer(Configuration conf, CoordinatedStateManager csm)
>   throws IOException {
>...
> this.userProvider = UserProvider.instantiate(conf);
> Superusers.initialize(conf);
>..
>// login the server principal (if using secure Hadoop)
> login(userProvider, hostName);
>   ..
> {code}
> Before HBASE-13755 we were initializing the super user in the ACL 
> coprocessor, so after the login. but now we do that before the login.
> I'm not sure if we can just move the Superuser.initialize() after the login 
> [~mantonov]?
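
The ordering problem described above can be shown with a toy sketch. This is illustrative Java only, not HBase code; the names merely mirror Superusers.initialize() and login(), and the user names mirror the hbase/hbasefoo example:

```java
// Toy model of the initialization-order bug: if the superuser list is built
// from the current user BEFORE the keytab login switches users, the keytab
// principal is never registered as a superuser.
import java.util.HashSet;
import java.util.Set;

public class InitOrderSketch {
    // stand-in for the "current user" that login() would switch
    static String currentUser = "hbase";          // process user
    static Set<String> superUsers = new HashSet<>();

    static void initializeSuperusers() {          // stand-in for Superusers.initialize(conf)
        superUsers.add(currentUser);
    }

    static void login() {                         // stand-in for login(userProvider, hostName)
        currentUser = "hbasefoo";                 // keytab principal
    }

    public static void main(String[] args) {
        // current order: initialize before login -> keytab principal is NOT a superuser
        initializeSuperusers();
        login();
        System.out.println(superUsers.contains(currentUser)); // prints "false"

        // proposed order: login first, then initialize -> keytab principal IS a superuser
        superUsers.clear();
        currentUser = "hbase";
        login();
        initializeSuperusers();
        System.out.println(superUsers.contains(currentUser)); // prints "true"
    }
}
```

In the real constructor, the fix under discussion is correspondingly just calling login(userProvider, hostName) before Superusers.initialize(conf).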



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15654) Optimize client's MetaCache handling

2016-04-14 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242148#comment-15242148
 ] 

Vladimir Rodionov commented on HBASE-15654:
---

Do you want to work on HBASE-15539, [~mantonov]? Its related.

> Optimize client's MetaCache handling
> 
>
> Key: HBASE-15654
> URL: https://issues.apache.org/jira/browse/HBASE-15654
> Project: HBase
>  Issue Type: Umbrella
>  Components: Client
>Affects Versions: 1.3.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
> Fix For: 1.3.0
>
>
> This is an umbrella jira to track all individual issues, bugfixes and small 
> optimizations around MetaCache (region locations cache) in the client. 
> Motivation is that under load one could see spikes in the number of requests 
> going to meta, reaching tens of thousands of requests per second.
> That covers issues where we clear entries from the location cache 
> unnecessarily, as well as where we do more lookups than necessary when entries 
> are legitimately evicted.





[jira] [Commented] (HBASE-15614) Report metrics from JvmPauseMonitor

2016-04-14 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242125#comment-15242125
 ] 

Nick Dimiduk commented on HBASE-15614:
--

bq. Actually I can take this if nobody else wants it.

Thanks [~apurtell]!

> Report metrics from JvmPauseMonitor
> ---
>
> Key: HBASE-15614
> URL: https://issues.apache.org/jira/browse/HBASE-15614
> Project: HBase
>  Issue Type: Improvement
>  Components: metrics, regionserver
>Reporter: Nick Dimiduk
>Assignee: Andrew Purtell
>
> We have {{JvmPauseMonitor}} for detecting JVM pauses; pauses are logged at 
> WARN. Would also be good to expose this information on a dashboard via 
> metrics system -- make it easier to get this info off the host and into a 
> central location for the operator.
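
The detection idea behind a pause monitor can be sketched in a few lines. This is an illustrative sketch only, not the actual JvmPauseMonitor; the interval and threshold values are assumptions:

```java
// Minimal pause-detector sketch: sleep a fixed interval, measure how long the
// sleep actually took, and treat extra time beyond a threshold as a pause.
// A metrics system could export pauseCount as the counter proposed above.
public class PauseMonitorSketch {
    static final long SLEEP_MS = 500;   // assumed polling interval
    static final long WARN_MS = 1000;   // assumed pause threshold

    // pure helper: extra time observed beyond the requested sleep
    static long extraSleepTime(long requestedMs, long elapsedMs) {
        return Math.max(0, elapsedMs - requestedMs);
    }

    public static void main(String[] args) throws InterruptedException {
        long pauseCount = 0;            // the value a metrics reporter would publish
        for (int i = 0; i < 3; i++) {
            long start = System.nanoTime();
            Thread.sleep(SLEEP_MS);
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            if (extraSleepTime(SLEEP_MS, elapsedMs) > WARN_MS) {
                pauseCount++;           // real code would also log at WARN here
            }
        }
        System.out.println("pauses detected: " + pauseCount);
    }
}
```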





[jira] [Assigned] (HBASE-15614) Report metrics from JvmPauseMonitor

2016-04-14 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell reassigned HBASE-15614:
--

Assignee: Andrew Purtell

> Report metrics from JvmPauseMonitor
> ---
>
> Key: HBASE-15614
> URL: https://issues.apache.org/jira/browse/HBASE-15614
> Project: HBase
>  Issue Type: Improvement
>  Components: metrics, regionserver
>Reporter: Nick Dimiduk
>Assignee: Andrew Purtell
>
> We have {{JvmPauseMonitor}} for detecting JVM pauses; pauses are logged at 
> WARN. Would also be good to expose this information on a dashboard via 
> metrics system -- make it easier to get this info off the host and into a 
> central location for the operator.





[jira] [Comment Edited] (HBASE-15614) Report metrics from JvmPauseMonitor

2016-04-14 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242116#comment-15242116
 ] 

Andrew Purtell edited comment on HBASE-15614 at 4/14/16 11:23 PM:
--

We see JvmPauseMonitor alerts quite frequently that aren't GC related, and end 
up tracking them separately. Agree exporting a separate metric is useful. (We 
collect and parse GC logs so can see when there's a GC pause beyond the 
threshold concurrent with a JvmPauseMonitor alert, or not.)


was (Author: apurtell):
We see JvmPauseMonitor alerts quite frequently that aren't GC related, and end 
up tracking them separately. Agree exporting a separate metric is useful. (We 
collect and parse GC logs separately so can see when there's a GC pause beyond 
the threshold concurrent with a JvmPauseMonitor alert, or not.)

> Report metrics from JvmPauseMonitor
> ---
>
> Key: HBASE-15614
> URL: https://issues.apache.org/jira/browse/HBASE-15614
> Project: HBase
>  Issue Type: Improvement
>  Components: metrics, regionserver
>Reporter: Nick Dimiduk
>
> We have {{JvmPauseMonitor}} for detecting JVM pauses; pauses are logged at 
> WARN. Would also be good to expose this information on a dashboard via 
> metrics system -- make it easier to get this info off the host and into a 
> central location for the operator.





[jira] [Commented] (HBASE-15614) Report metrics from JvmPauseMonitor

2016-04-14 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242118#comment-15242118
 ] 

Andrew Purtell commented on HBASE-15614:


Actually I can take this if nobody else wants it.

> Report metrics from JvmPauseMonitor
> ---
>
> Key: HBASE-15614
> URL: https://issues.apache.org/jira/browse/HBASE-15614
> Project: HBase
>  Issue Type: Improvement
>  Components: metrics, regionserver
>Reporter: Nick Dimiduk
>
> We have {{JvmPauseMonitor}} for detecting JVM pauses; pauses are logged at 
> WARN. Would also be good to expose this information on a dashboard via 
> metrics system -- make it easier to get this info off the host and into a 
> central location for the operator.





[jira] [Commented] (HBASE-15614) Report metrics from JvmPauseMonitor

2016-04-14 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15614?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242116#comment-15242116
 ] 

Andrew Purtell commented on HBASE-15614:


We see JvmPauseMonitor alerts quite frequently that aren't GC related, and end 
up tracking them separately. Agree exporting a separate metric is useful. (We 
collect and parse GC logs separately so can see when there's a GC pause beyond 
the threshold concurrent with a JvmPauseMonitor alert, or not.)

> Report metrics from JvmPauseMonitor
> ---
>
> Key: HBASE-15614
> URL: https://issues.apache.org/jira/browse/HBASE-15614
> Project: HBase
>  Issue Type: Improvement
>  Components: metrics, regionserver
>Reporter: Nick Dimiduk
>
> We have {{JvmPauseMonitor}} for detecting JVM pauses; pauses are logged at 
> WARN. Would also be good to expose this information on a dashboard via 
> metrics system -- make it easier to get this info off the host and into a 
> central location for the operator.





[jira] [Commented] (HBASE-15622) Superusers does not consider the keytab credentials

2016-04-14 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242111#comment-15242111
 ] 

Andrew Purtell commented on HBASE-15622:


Wait, sorry, I had this sitting around for a while and assumed the comments 
above were old. I see they were posted today. Take it away [~ghelmling] ...

> Superusers does not consider the keytab credentials
> ---
>
> Key: HBASE-15622
> URL: https://issues.apache.org/jira/browse/HBASE-15622
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.0, 1.2.0, 1.3.0, 1.1.4, 0.98.16.1
>Reporter: Matteo Bertozzi
>Priority: Critical
> Fix For: 2.0.0, 1.3.0, 0.98.19, 1.1.5, 1.2.2
>
> Attachments: HBASE-15622-v0.patch
>
>
> After HBASE-13755 the superuser we add by default (the process running hbase) 
> does not take the keytab credentials into consideration.
> We have an env with the process user being hbase and the keytab being 
> hbasefoo.
> From the Superusers TRACE logging I see the hbase user being picked up:
> {noformat}
> TRACE Superusers: Current user name is hbase
> {noformat}
> From the RS audit log I see the hbasefoo user making requests:
> {noformat}
> "allowed":true,"serviceName":"HBASE-1","username":"hbasefoo...
> {noformat}
> Looking at the code in HRegionServer we do:
> {code}
> public HRegionServer(Configuration conf, CoordinatedStateManager csm)
>   throws IOException {
>...
> this.userProvider = UserProvider.instantiate(conf);
> Superusers.initialize(conf);
>..
>// login the server principal (if using secure Hadoop)
> login(userProvider, hostName);
>   ..
> {code}
> Before HBASE-13755 we were initializing the super user in the ACL 
> coprocessor, i.e. after the login, but now we do it before the login.
> I'm not sure if we can just move the Superusers.initialize() call after the 
> login [~mantonov]?





[jira] [Commented] (HBASE-15622) Superusers does not consider the keytab credentials

2016-04-14 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242108#comment-15242108
 ] 

Andrew Purtell commented on HBASE-15622:


Ok, going to commit this shortly then

> Superusers does not consider the keytab credentials
> ---
>
> Key: HBASE-15622
> URL: https://issues.apache.org/jira/browse/HBASE-15622
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.0, 1.2.0, 1.3.0, 1.1.4, 0.98.16.1
>Reporter: Matteo Bertozzi
>Priority: Critical
> Fix For: 2.0.0, 1.3.0, 0.98.19, 1.1.5, 1.2.2
>
> Attachments: HBASE-15622-v0.patch
>
>
> After HBASE-13755 the superuser we add by default (the process running hbase) 
> does not take the keytab credentials into consideration.
> We have an env with the process user being hbase and the keytab being 
> hbasefoo.
> From the Superusers TRACE logging I see the hbase user being picked up:
> {noformat}
> TRACE Superusers: Current user name is hbase
> {noformat}
> From the RS audit log I see the hbasefoo user making requests:
> {noformat}
> "allowed":true,"serviceName":"HBASE-1","username":"hbasefoo...
> {noformat}
> Looking at the code in HRegionServer we do:
> {code}
> public HRegionServer(Configuration conf, CoordinatedStateManager csm)
>   throws IOException {
>...
> this.userProvider = UserProvider.instantiate(conf);
> Superusers.initialize(conf);
>..
>// login the server principal (if using secure Hadoop)
> login(userProvider, hostName);
>   ..
> {code}
> Before HBASE-13755 we were initializing the super user in the ACL 
> coprocessor, i.e. after the login, but now we do it before the login.
> I'm not sure if we can just move the Superusers.initialize() call after the 
> login [~mantonov]?





[jira] [Updated] (HBASE-15580) Tag coprocessor limitedprivate scope to StoreFile.Reader

2016-04-14 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-15580:
---
Status: Open  (was: Patch Available)

Sorry I couldn't get to this earlier.

The trunk patch will need updating because StoreFile.Reader has been refactored 
to StoreFileReader and StoreFile.Writer has been refactored to StoreFileWriter. 
Both StoreFileReader and StoreFileWriter are tagged IA.Private. 

I assume we can also commit an updated trunk patch as long as there's no 
objection.

Will commit everywhere when everything is all set to go.

> Tag coprocessor limitedprivate scope to StoreFile.Reader
> 
>
> Key: HBASE-15580
> URL: https://issues.apache.org/jira/browse/HBASE-15580
> Project: HBase
>  Issue Type: Improvement
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 2.0.0, 1.0.4, 0.98.19, 1.1.5, 1.2.2
>
> Attachments: HBASE-15580.patch, HBASE-15580_branch-1.0.patch
>
>
> For phoenix local indexing we need to have a custom storefile reader 
> constructor (IndexHalfStoreFileReader) to distinguish it from other storefile 
> readers. So we want to mark the StoreFile.Reader scope as 
> InterfaceAudience.LimitedPrivate("Coprocessor")





[jira] [Assigned] (HBASE-15644) disable precommit scaladoc test

2016-04-14 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy reassigned HBASE-15644:


Assignee: Appy  (was: Sean Busbey)

> disable precommit scaladoc test
> ---
>
> Key: HBASE-15644
> URL: https://issues.apache.org/jira/browse/HBASE-15644
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Appy
> Attachments: HBASE-15644.1.patch, bogus-scala-change.patch, 
> scala-tools.patch
>
>
> we don't have the necessary maven modules handy to build scaladoc. afaik we 
> haven't been relying on it to date (though maybe we should). For now, just 
> disable the test so that precommit won't spend time on it.





[jira] [Updated] (HBASE-15644) Add maven-scala-plugin for scaladoc

2016-04-14 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-15644:
-
Description: 
Added scala-tools.org to the repositories (as a side effect, all common 
artifacts now get downloaded twice, once from the Apache repo and once from 
scala-tools).
This fixes the scala:doc precommit.
The patch 'bogus-scala-change' adds a blank line to a scala file to trigger the 
scala:doc precommit. As expected, the target failed for master and passed for 
the patch.

  was:Added scala-tools to repo


> Add maven-scala-plugin for scaladoc
> ---
>
> Key: HBASE-15644
> URL: https://issues.apache.org/jira/browse/HBASE-15644
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Appy
> Attachments: HBASE-15644.1.patch, bogus-scala-change.patch, 
> scala-tools.patch
>
>
> Added scala-tools.org to the repositories (as a side effect, all common 
> artifacts now get downloaded twice, once from the Apache repo and once from 
> scala-tools).
> This fixes the scala:doc precommit.
> The patch 'bogus-scala-change' adds a blank line to a scala file to trigger 
> the scala:doc precommit. As expected, the target failed for master and passed 
> for the patch.





[jira] [Updated] (HBASE-15644) Add maven-scala-plugin for scaladoc

2016-04-14 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-15644:
-
Description: Added scala-tools to repo  (was: we don't have the necessary 
maven modules handy to build scaladoc. afaik we haven't been relying on it to 
date (though maybe we should). For now, just disable the test so that precommit 
won't spend time on it.)

> Add maven-scala-plugin for scaladoc
> ---
>
> Key: HBASE-15644
> URL: https://issues.apache.org/jira/browse/HBASE-15644
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Appy
> Attachments: HBASE-15644.1.patch, bogus-scala-change.patch, 
> scala-tools.patch
>
>
> Added scala-tools to repo





[jira] [Updated] (HBASE-15644) Add maven-scala-plugin for scaladoc

2016-04-14 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-15644:
-
Summary: Add maven-scala-plugin for scaladoc  (was: Fix scala pre-commit )

> Add maven-scala-plugin for scaladoc
> ---
>
> Key: HBASE-15644
> URL: https://issues.apache.org/jira/browse/HBASE-15644
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Appy
> Attachments: HBASE-15644.1.patch, bogus-scala-change.patch, 
> scala-tools.patch
>
>
> we don't have the necessary maven modules handy to build scaladoc. afaik we 
> haven't been relying on it to date (though maybe we should). For now, just 
> disable the test so that precommit won't spend time on it.





[jira] [Updated] (HBASE-15644) Fix scala pre-commit

2016-04-14 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15644?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-15644:
-
Summary: Fix scala pre-commit   (was: disable precommit scaladoc test)

> Fix scala pre-commit 
> -
>
> Key: HBASE-15644
> URL: https://issues.apache.org/jira/browse/HBASE-15644
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Appy
> Attachments: HBASE-15644.1.patch, bogus-scala-change.patch, 
> scala-tools.patch
>
>
> we don't have the necessary maven modules handy to build scaladoc. afaik we 
> haven't been relying on it to date (though maybe we should). For now, just 
> disable the test so that precommit won't spend time on it.





[jira] [Updated] (HBASE-15187) Integrate CSRF prevention filter to REST gateway

2016-04-14 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15187:
---
Attachment: HBASE-15187.v14.patch

> Integrate CSRF prevention filter to REST gateway
> 
>
> Key: HBASE-15187
> URL: https://issues.apache.org/jira/browse/HBASE-15187
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>  Labels: rest
> Fix For: 2.0.0
>
> Attachments: HBASE-15187-branch-1.v13.patch, HBASE-15187.v1.patch, 
> HBASE-15187.v10.patch, HBASE-15187.v10.patch, HBASE-15187.v11.patch, 
> HBASE-15187.v12.patch, HBASE-15187.v13.patch, HBASE-15187.v14.patch, 
> HBASE-15187.v2.patch, HBASE-15187.v3.patch, HBASE-15187.v4.patch, 
> HBASE-15187.v5.patch, HBASE-15187.v6.patch, HBASE-15187.v7.patch, 
> HBASE-15187.v8.patch, HBASE-15187.v9.patch
>
>
> HADOOP-12691 introduced a filter in Hadoop Common to help REST APIs guard 
> against cross-site request forgery attacks.
> This issue tracks the integration of that filter into HBase REST gateway.
> From REST section of refguide:
> To delete a table, use a DELETE request with the /schema endpoint:
> http://example.com:8000/schema
> Suppose an attacker hosts a malicious web form on a domain under his control. 
> The form uses the DELETE action targeting a REST URL. Through social 
> engineering, the attacker tricks an authenticated user into accessing the 
> form and submitting it.
> The browser sends the HTTP DELETE request to the REST gateway.
> At REST gateway, the call is executed and user table is dropped





[jira] [Updated] (HBASE-15187) Integrate CSRF prevention filter to REST gateway

2016-04-14 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15187:
---
Release Note: 
Protection against CSRF attack can be turned on with config parameter, 
hbase.rest.csrf.enabled - default value is false.

The custom header to be sent can be changed via config parameter, 
hbase.rest.csrf.custom.header whose default value is "X-XSRF-HEADER".

Config parameter hbase.rest.csrf.methods.to.ignore controls which HTTP 
methods are exempt from the custom header check.

Config parameter hbase.rest-csrf.browser-useragents-regex is a 
comma-separated list of regular expressions used to match against an HTTP 
request's User-Agent header when protection against cross-site request forgery 
(CSRF) is enabled for the REST server by setting hbase.rest.csrf.enabled to true.

The implementation came from 
hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/http/RestCsrfPreventionFilter.java

We should periodically update the RestCsrfPreventionFilter.java in hbase 
codebase to include fixes to the hadoop implementation.

  was:
Protection against CSRF attack can be turned on with config parameter, 
hbase.rest.csrf.enabled - default value is false.

The custom header to be sent can be changed via config parameter, 
hbase.rest.csrf.custom.header whose default value is "X-XSRF-HEADER".

Config parameter, hbase.rest.csrf.methods.to.ignore , controls which HTTP 
methods are not associated with customer header check.

Config parameter, hbase.rest-csrf.browser-useragents-regex , is a 
comma-separated list of regular expressions used to match against an HTTP 
request's User-Agent header when protection against cross-site request forgery 
(CSRF) is enabled for REST server by setting hbase.rest.csrf.enabled to true.


> Integrate CSRF prevention filter to REST gateway
> 
>
> Key: HBASE-15187
> URL: https://issues.apache.org/jira/browse/HBASE-15187
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>  Labels: rest
> Fix For: 2.0.0
>
> Attachments: HBASE-15187-branch-1.v13.patch, HBASE-15187.v1.patch, 
> HBASE-15187.v10.patch, HBASE-15187.v10.patch, HBASE-15187.v11.patch, 
> HBASE-15187.v12.patch, HBASE-15187.v13.patch, HBASE-15187.v2.patch, 
> HBASE-15187.v3.patch, HBASE-15187.v4.patch, HBASE-15187.v5.patch, 
> HBASE-15187.v6.patch, HBASE-15187.v7.patch, HBASE-15187.v8.patch, 
> HBASE-15187.v9.patch
>
>
> HADOOP-12691 introduced a filter in Hadoop Common to help REST APIs guard 
> against cross-site request forgery attacks.
> This issue tracks the integration of that filter into HBase REST gateway.
> From REST section of refguide:
> To delete a table, use a DELETE request with the /schema endpoint:
> http://example.com:8000/schema
> Suppose an attacker hosts a malicious web form on a domain under his control. 
> The form uses the DELETE action targeting a REST URL. Through social 
> engineering, the attacker tricks an authenticated user into accessing the 
> form and submitting it.
> The browser sends the HTTP DELETE request to the REST gateway.
> At REST gateway, the call is executed and user table is dropped
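
The rule the filter enforces can be re-stated as a toy sketch. This is illustrative Java only, not the actual RestCsrfPreventionFilter; the ignored-methods default (GET/OPTIONS/HEAD/TRACE) is an assumption carried over from the Hadoop filter, and the header name is the default from the release note:

```java
// Toy restatement of the CSRF filter rule: requests using state-changing HTTP
// methods must carry the custom header; methods in the ignore list (the
// hbase.rest.csrf.methods.to.ignore setting) pass without it.
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class CsrfRuleSketch {
    // assumed defaults mirroring the Hadoop filter
    static final Set<String> METHODS_TO_IGNORE =
        new HashSet<>(Arrays.asList("GET", "OPTIONS", "HEAD", "TRACE"));
    static final String CUSTOM_HEADER = "X-XSRF-HEADER"; // default per the release note

    static boolean isAllowed(String method, Set<String> headers) {
        return METHODS_TO_IGNORE.contains(method) || headers.contains(CUSTOM_HEADER);
    }

    public static void main(String[] args) {
        // a browser-submitted DELETE (the attack above) carries no custom header
        System.out.println(isAllowed("DELETE", new HashSet<>()));          // prints "false"
        // a legitimate client that sets the header is allowed through
        System.out.println(isAllowed("DELETE",
            new HashSet<>(Arrays.asList(CUSTOM_HEADER))));                 // prints "true"
    }
}
```

This is why the malicious form's DELETE is rejected while ordinary browsing (GET) and header-aware clients keep working.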





[jira] [Commented] (HBASE-15622) Superusers does not consider the keytab credentials

2016-04-14 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242066#comment-15242066
 ] 

Matteo Bertozzi commented on HBASE-15622:
-

Yeah, I asked [~appy] to help out with writing a unit test that uses the 
MiniKDC and starts up the cluster. I was not able to make it work, so I tested 
this on a real cluster.

> Superusers does not consider the keytab credentials
> ---
>
> Key: HBASE-15622
> URL: https://issues.apache.org/jira/browse/HBASE-15622
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.0, 1.2.0, 1.3.0, 1.1.4, 0.98.16.1
>Reporter: Matteo Bertozzi
>Priority: Critical
> Fix For: 2.0.0, 1.3.0, 0.98.19, 1.1.5, 1.2.2
>
> Attachments: HBASE-15622-v0.patch
>
>
> After HBASE-13755 the superuser we add by default (the process running hbase) 
> does not take the keytab credentials into consideration.
> We have an env with the process user being hbase and the keytab being 
> hbasefoo.
> From the Superusers TRACE logging I see the hbase user being picked up:
> {noformat}
> TRACE Superusers: Current user name is hbase
> {noformat}
> From the RS audit log I see the hbasefoo user making requests:
> {noformat}
> "allowed":true,"serviceName":"HBASE-1","username":"hbasefoo...
> {noformat}
> Looking at the code in HRegionServer we do:
> {code}
> public HRegionServer(Configuration conf, CoordinatedStateManager csm)
>   throws IOException {
>...
> this.userProvider = UserProvider.instantiate(conf);
> Superusers.initialize(conf);
>..
>// login the server principal (if using secure Hadoop)
> login(userProvider, hostName);
>   ..
> {code}
> Before HBASE-13755 we were initializing the super user in the ACL 
> coprocessor, i.e. after the login, but now we do it before the login.
> I'm not sure if we can just move the Superusers.initialize() call after the 
> login [~mantonov]?





[jira] [Commented] (HBASE-15187) Integrate CSRF prevention filter to REST gateway

2016-04-14 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242067#comment-15242067
 ] 

Ted Yu commented on HBASE-15187:


The only change in RestCsrfPreventionFilter.java between the one in patch v13 
and the one in hadoop is:
{code}
65c65
<   static final String HEADER_DEFAULT = "X-XSRF-HEADER";
---
>   public static final String HEADER_DEFAULT = "X-XSRF-HEADER";
{code}
I will include the above in the next patch.

Will add "hbase.rest.csrf.enabled" to hbase-default.xml

> Integrate CSRF prevention filter to REST gateway
> 
>
> Key: HBASE-15187
> URL: https://issues.apache.org/jira/browse/HBASE-15187
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>  Labels: rest
> Fix For: 2.0.0
>
> Attachments: HBASE-15187-branch-1.v13.patch, HBASE-15187.v1.patch, 
> HBASE-15187.v10.patch, HBASE-15187.v10.patch, HBASE-15187.v11.patch, 
> HBASE-15187.v12.patch, HBASE-15187.v13.patch, HBASE-15187.v2.patch, 
> HBASE-15187.v3.patch, HBASE-15187.v4.patch, HBASE-15187.v5.patch, 
> HBASE-15187.v6.patch, HBASE-15187.v7.patch, HBASE-15187.v8.patch, 
> HBASE-15187.v9.patch
>
>
> HADOOP-12691 introduced a filter in Hadoop Common to help REST APIs guard 
> against cross-site request forgery attacks.
> This issue tracks the integration of that filter into HBase REST gateway.
> From REST section of refguide:
> To delete a table, use a DELETE request with the /schema endpoint:
> http://example.com:8000/schema
> Suppose an attacker hosts a malicious web form on a domain under his control. 
> The form uses the DELETE action targeting a REST URL. Through social 
> engineering, the attacker tricks an authenticated user into accessing the 
> form and submitting it.
> The browser sends the HTTP DELETE request to the REST gateway.
> At REST gateway, the call is executed and user table is dropped





[jira] [Commented] (HBASE-15187) Integrate CSRF prevention filter to REST gateway

2016-04-14 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242035#comment-15242035
 ] 

Jerry He commented on HBASE-15187:
--

Hi, [~tedyu]

You put the property hbase.rest-csrf.browser-useragents-regex in 
hbase-default.xml, but not any of the other new configuration properties, 
including the one that enables this feature.
We should probably keep that property out of hbase-default.xml as well?

Also, there is a risk that we will get out of sync with the Hadoop CSRF filter, 
e.g. when they fix a bug or make an improvement (it already happened between 
your patches). Can you add a note or a pointer in the Release Note so that 
people are aware?

+1 after the above.

> Integrate CSRF prevention filter to REST gateway
> 
>
> Key: HBASE-15187
> URL: https://issues.apache.org/jira/browse/HBASE-15187
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>  Labels: rest
> Fix For: 2.0.0
>
> Attachments: HBASE-15187-branch-1.v13.patch, HBASE-15187.v1.patch, 
> HBASE-15187.v10.patch, HBASE-15187.v10.patch, HBASE-15187.v11.patch, 
> HBASE-15187.v12.patch, HBASE-15187.v13.patch, HBASE-15187.v2.patch, 
> HBASE-15187.v3.patch, HBASE-15187.v4.patch, HBASE-15187.v5.patch, 
> HBASE-15187.v6.patch, HBASE-15187.v7.patch, HBASE-15187.v8.patch, 
> HBASE-15187.v9.patch
>
>
> HADOOP-12691 introduced a filter in Hadoop Common to help REST APIs guard 
> against cross-site request forgery attacks.
> This issue tracks the integration of that filter into HBase REST gateway.
> From REST section of refguide:
> To delete a table, use a DELETE request with the /schema endpoint:
> http://example.com:8000/schema
> Suppose an attacker hosts a malicious web form on a domain under his control. 
> The form uses the DELETE action targeting a REST URL. Through social 
> engineering, the attacker tricks an authenticated user into accessing the 
> form and submitting it.
> The browser sends the HTTP DELETE request to the REST gateway.
> At REST gateway, the call is executed and user table is dropped





[jira] [Commented] (HBASE-15657) Failed snapshot verification may not be detected by TakeSnapshotHandler

2016-04-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242008#comment-15242008
 ] 

Hadoop QA commented on HBASE-15657:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
48s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 
43s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
53s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 53s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 26s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 145m 0s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | 
org.apache.hadoop.hbase.security.access.TestNamespaceCommands |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12798804/15657.v1.patch |
| JIRA Issue | HBASE-15657 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux pietas.apache.org 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT 
Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 7efb9ed |
| Default Java | 1.7.0_79 |
| Multi-JD

[jira] [Commented] (HBASE-15622) Superusers does not consider the keytab credentials

2016-04-14 Thread Gary Helmling (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15242007#comment-15242007
 ] 

Gary Helmling commented on HBASE-15622:
---

+1

I don't see any direct test of Superusers in the code. Might be worth adding a 
simple test for it -- start up an RS, ensure the current user/keytab user is a 
superuser -- unless something is already there.

> Superusers does not consider the keytab credentials
> ---
>
> Key: HBASE-15622
> URL: https://issues.apache.org/jira/browse/HBASE-15622
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.0, 1.2.0, 1.3.0, 1.1.4, 0.98.16.1
>Reporter: Matteo Bertozzi
>Priority: Critical
> Fix For: 2.0.0, 1.3.0, 0.98.19, 1.1.5, 1.2.2
>
> Attachments: HBASE-15622-v0.patch
>
>
> After HBASE-13755 the superuser we add by default (the process running hbase) 
> does not take the keytab credential into consideration.
> We have an env with the process user being hbase and the keytab being 
> hbasefoo.
> From the Superusers TRACE log I see hbase being picked up:
> {noformat}
> TRACE Superusers: Current user name is hbase
> {noformat}
> From the RS audit log I see hbasefoo making requests:
> {noformat}
> "allowed":true,"serviceName":"HBASE-1","username":"hbasefoo...
> {noformat}
> Looking at the code in HRegionServer we do:
> {code}
> public HRegionServer(Configuration conf, CoordinatedStateManager csm)
>   throws IOException {
>...
> this.userProvider = UserProvider.instantiate(conf);
> Superusers.initialize(conf);
>..
>// login the server principal (if using secure Hadoop)
> login(userProvider, hostName);
>   ..
> {code}
> Before HBASE-13755 we initialized the superuser in the ACL coprocessor, i.e. 
> after the login, but now we do it before the login.
> I'm not sure if we can just move Superusers.initialize() after the login 
> [~mantonov]?





[jira] [Updated] (HBASE-14898) Correct Bloom filter documentation in the book

2016-04-14 Thread yi liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yi liang updated HBASE-14898:
-
Status: Patch Available  (was: Open)

> Correct Bloom filter documentation in the book
> --
>
> Key: HBASE-14898
> URL: https://issues.apache.org/jira/browse/HBASE-14898
> Project: HBase
>  Issue Type: Bug
>Reporter: Jerry He
>Assignee: yi liang
>Priority: Minor
> Attachments: bf.patch
>
>
> In section 94.4. Bloom Filters:
>  Since HBase 0.96, row-based Bloom filters are enabled by default. (HBASE-)  
> --> in *HBASE-8450*
> In section 94.4.3. Configuring Server-Wide Behavior of Bloom Filters: 
> io.hfile.bloom.enabled  --> *io.storefile.bloom.enabled*  Master switch to 
> enable Bloom filters
> io.hfile.bloom.max.fold  --> *io.storefile.bloom.max.fold*
> io.hfile.bloom.error.rate --> *io.storefile.bloom.error.rate*
> io.storefile.bloom.block.size --> *default is 128*1024 = 131072*
> These properties are usually not tuned, but they should still be fixed in 
> the doc.





[jira] [Updated] (HBASE-14898) Correct Bloom filter documentation in the book

2016-04-14 Thread yi liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yi liang updated HBASE-14898:
-
Attachment: bf.patch

> Correct Bloom filter documentation in the book
> --
>
> Key: HBASE-14898
> URL: https://issues.apache.org/jira/browse/HBASE-14898
> Project: HBase
>  Issue Type: Bug
>Reporter: Jerry He
>Assignee: yi liang
>Priority: Minor
> Attachments: bf.patch
>
>
> In section 94.4. Bloom Filters:
>  Since HBase 0.96, row-based Bloom filters are enabled by default. (HBASE-)  
> --> in *HBASE-8450*
> In section 94.4.3. Configuring Server-Wide Behavior of Bloom Filters: 
> io.hfile.bloom.enabled  --> *io.storefile.bloom.enabled*  Master switch to 
> enable Bloom filters
> io.hfile.bloom.max.fold  --> *io.storefile.bloom.max.fold*
> io.hfile.bloom.error.rate --> *io.storefile.bloom.error.rate*
> io.storefile.bloom.block.size --> *default is 128*1024 = 131072*
> These properties are usually not tuned, but they should still be fixed in 
> the doc.





[jira] [Commented] (HBASE-15296) Break out writer and reader from StoreFile

2016-04-14 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15241977#comment-15241977
 ] 

Appy commented on HBASE-15296:
--

Hmm, so if it's breaking the RS observer interface, should we still backport 
this to branch-1? It would make future backports easier.


> Break out writer and reader from StoreFile
> --
>
> Key: HBASE-15296
> URL: https://issues.apache.org/jira/browse/HBASE-15296
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0
>
> Attachments: HBASE-15296-branch-1.1.patch, 
> HBASE-15296-branch-1.2.patch, HBASE-15296-branch-1.patch, 
> HBASE-15296-master-v2.patch, HBASE-15296-master-v3.patch, 
> HBASE-15296-master-v4.patch, HBASE-15296-master-v5.patch, 
> HBASE-15296-master.patch
>
>
> StoreFile.java is trending toward becoming a monolithic class; it's ~1800 
> lines. Would it make sense to break out the reader and writer (~500 lines 
> each) into separate files?
> We are doing so many different things in a single class: comparators, reader, 
> writer, other stuff; and it hurts readability a lot, to the point that just 
> reading through a piece of code requires scrolling up and down to see which 
> level (reader/writer/base class level) it belongs to. These small things 
> really don't help when trying to understand the code. There are good reasons 
> we don't do this often (affects existing patches, needs to be done for all 
> branches, etc.), but this and a few other classes could really use a single 
> iteration of refactoring to make things a lot better.





[jira] [Updated] (HBASE-15622) Superusers does not consider the keytab credentials

2016-04-14 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-15622:

Status: Patch Available  (was: Open)

> Superusers does not consider the keytab credentials
> ---
>
> Key: HBASE-15622
> URL: https://issues.apache.org/jira/browse/HBASE-15622
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.98.16.1, 1.1.4, 1.2.0, 2.0.0, 1.3.0
>Reporter: Matteo Bertozzi
>Priority: Critical
> Fix For: 2.0.0, 1.3.0, 0.98.19, 1.1.5, 1.2.2
>
> Attachments: HBASE-15622-v0.patch
>
>
> After HBASE-13755 the superuser we add by default (the process running hbase) 
> does not take the keytab credential into consideration.
> We have an env with the process user being hbase and the keytab being 
> hbasefoo.
> From the Superusers TRACE log I see hbase being picked up:
> {noformat}
> TRACE Superusers: Current user name is hbase
> {noformat}
> From the RS audit log I see hbasefoo making requests:
> {noformat}
> "allowed":true,"serviceName":"HBASE-1","username":"hbasefoo...
> {noformat}
> Looking at the code in HRegionServer we do:
> {code}
> public HRegionServer(Configuration conf, CoordinatedStateManager csm)
>   throws IOException {
>...
> this.userProvider = UserProvider.instantiate(conf);
> Superusers.initialize(conf);
>..
>// login the server principal (if using secure Hadoop)
> login(userProvider, hostName);
>   ..
> {code}
> Before HBASE-13755 we initialized the superuser in the ACL coprocessor, i.e. 
> after the login, but now we do it before the login.
> I'm not sure if we can just move Superusers.initialize() after the login 
> [~mantonov]?





[jira] [Updated] (HBASE-15622) Superusers does not consider the keytab credentials

2016-04-14 Thread Matteo Bertozzi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matteo Bertozzi updated HBASE-15622:

Attachment: HBASE-15622-v0.patch

Attached v0, which simply moves Superusers.initialize() down.
I have tried it on the cluster with the setup described above and it seems to 
work, without any other consequences.
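The ordering issue can be sketched in isolation. This is a minimal, hypothetical illustration with plain fields, not the HBase/UGI API: whichever user is "current" when the superuser list is built becomes the superuser, so initializing before the keytab login captures the OS process user instead of the keytab principal.

```java
public class LoginOrderDemo {
    static String currentUser = "hbase";   // OS process user before login
    static String superuser;

    // After a keytab login, the current user becomes the keytab principal.
    static void login() { currentUser = "hbasefoo"; }

    // Snapshots whoever is the current user at call time.
    static void initializeSuperusers() { superuser = currentUser; }

    public static void main(String[] args) {
        login();                 // the fix: log in first...
        initializeSuperusers();  // ...then capture the current (keytab) user
        System.out.println(superuser); // prints: hbasefoo
    }
}
```

With the calls in the original (pre-patch) order, the snapshot would capture "hbase" instead.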


> Superusers does not consider the keytab credentials
> ---
>
> Key: HBASE-15622
> URL: https://issues.apache.org/jira/browse/HBASE-15622
> Project: HBase
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.0, 1.2.0, 1.3.0, 1.1.4, 0.98.16.1
>Reporter: Matteo Bertozzi
>Priority: Critical
> Fix For: 2.0.0, 1.3.0, 0.98.19, 1.1.5, 1.2.2
>
> Attachments: HBASE-15622-v0.patch
>
>
> After HBASE-13755 the superuser we add by default (the process running hbase) 
> does not take the keytab credential into consideration.
> We have an env with the process user being hbase and the keytab being 
> hbasefoo.
> From the Superusers TRACE log I see hbase being picked up:
> {noformat}
> TRACE Superusers: Current user name is hbase
> {noformat}
> From the RS audit log I see hbasefoo making requests:
> {noformat}
> "allowed":true,"serviceName":"HBASE-1","username":"hbasefoo...
> {noformat}
> Looking at the code in HRegionServer we do:
> {code}
> public HRegionServer(Configuration conf, CoordinatedStateManager csm)
>   throws IOException {
>...
> this.userProvider = UserProvider.instantiate(conf);
> Superusers.initialize(conf);
>..
>// login the server principal (if using secure Hadoop)
> login(userProvider, hostName);
>   ..
> {code}
> Before HBASE-13755 we initialized the superuser in the ACL coprocessor, i.e. 
> after the login, but now we do it before the login.
> I'm not sure if we can just move Superusers.initialize() after the login 
> [~mantonov]?





[jira] [Updated] (HBASE-14898) Correct Bloom filter documentation in the book

2016-04-14 Thread yi liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

yi liang updated HBASE-14898:
-
Assignee: yi liang

> Correct Bloom filter documentation in the book
> --
>
> Key: HBASE-14898
> URL: https://issues.apache.org/jira/browse/HBASE-14898
> Project: HBase
>  Issue Type: Bug
>Reporter: Jerry He
>Assignee: yi liang
>Priority: Minor
>
> In section 94.4. Bloom Filters:
>  Since HBase 0.96, row-based Bloom filters are enabled by default. (HBASE-)  
> --> in *HBASE-8450*
> In section 94.4.3. Configuring Server-Wide Behavior of Bloom Filters: 
> io.hfile.bloom.enabled  --> *io.storefile.bloom.enabled*  Master switch to 
> enable Bloom filters
> io.hfile.bloom.max.fold  --> *io.storefile.bloom.max.fold*
> io.hfile.bloom.error.rate --> *io.storefile.bloom.error.rate*
> io.storefile.bloom.block.size --> *default is 128*1024 = 131072*
> These properties are usually not tuned, but they should still be fixed in 
> the doc.





[jira] [Updated] (HBASE-15405) Synchronize final results logging single thread in PE, fix wrong defaults in help message

2016-04-14 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-15405:
---
Fix Version/s: 0.98.19

> Synchronize final results logging single thread in PE, fix wrong defaults in 
> help message
> -
>
> Key: HBASE-15405
> URL: https://issues.apache.org/jira/browse/HBASE-15405
> Project: HBase
>  Issue Type: Bug
>  Components: Performance
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 0.98.19, 1.4.0
>
> Attachments: HBASE-15405-branch-1.patch, HBASE-15405-master-v2.patch, 
> HBASE-15405-master-v3.patch, HBASE-15405-master-v4 (1).patch, 
> HBASE-15405-master-v4 (1).patch, HBASE-15405-master-v4.patch, 
> HBASE-15405-master-v4.patch, HBASE-15405-master-v5.patch, 
> HBASE-15405-master-v6.patch, HBASE-15405-master.patch, 
> HBASE-15405-master.patch
>
>
> Corrects wrong default values for a few options in the help message.
> Final stats from multiple clients are intermingled, making them hard to 
> understand. The logged stats also aren't very machine-readable; fixing that 
> would help a daily perf-testing rig that scrapes logs for results.
> Example of logs before the change:
> {noformat}
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest latency log 
> (microseconds), on 1048576 measures
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest latency log 
> (microseconds), on 1048576 measures
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest latency log 
> (microseconds), on 1048576 measures
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest latency log 
> (microseconds), on 1048576 measures
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest latency log 
> (microseconds), on 1048576 measures
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest latency log 
> (microseconds), on 1048576 measures
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest latency log 
> (microseconds), on 1048576 measures
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: 0/1048570/1048576, 
> latency mean=953.98, min=359.00, max=324050.00, stdDev=851.82, 95th=1368.00, 
> 99th=1625.00
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: 0/1048570/1048576, 
> latency mean=953.92, min=356.00, max=323394.00, stdDev=817.55, 95th=1370.00, 
> 99th=1618.00
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: 0/1048570/1048576, 
> latency mean=953.98, min=367.00, max=322745.00, stdDev=840.43, 95th=1369.00, 
> 99th=1622.00
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest latency log 
> (microseconds), on 1048576 measures
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest latency log 
> (microseconds), on 1048576 measures
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest latency log 
> (microseconds), on 1048576 measures
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest Min  = 
> 375.0
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest Min  = 
> 363.0
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest Avg  = 
> 953.6624126434326
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest Avg  = 
> 953.4124526977539
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest StdDev   = 
> 781.3929776087633
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest StdDev   = 
> 742.8027916717297
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest 50th = 
> 894.0
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest 50th = 
> 894.0
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest 75th = 
> 1070.0
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest 75th = 
> 1071.0
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest 95th = 
> 1369.0
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest 95th = 
> 1369.0
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest 99th = 
> 1623.0
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest 99th = 
> 1624.0
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest Min  = 
> 372.0
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest 99.9th   = 
> 3013.998000214
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest Avg  = 
> 953.2451229095459
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest 99.9th   = 
> 3043.998000214
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest StdDev   = 
> 725.4744472152282
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest 99.9

[jira] [Updated] (HBASE-15637) TSHA Thrift-2 server should allow limiting call queue size

2016-04-14 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-15637:
---
Fix Version/s: 0.98.19
   2.0.0

> TSHA Thrift-2 server should allow limiting call queue size
> --
>
> Key: HBASE-15637
> URL: https://issues.apache.org/jira/browse/HBASE-15637
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Affects Versions: 1.3.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
> Fix For: 2.0.0, 1.3.0, 0.98.19
>
> Attachments: HBASE-15637-branch-1.3.v1.patch, 
> HBASE-15637-branch-1.3.v2.patch, HBASE-15637-v2.patch
>
>
> Right now it seems the Thrift-2 HSHA server always creates an unbounded 
> queue, which could lead to OOM.
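A bounded call queue can be sketched with a plain ThreadPoolExecutor. This is a generic illustration, not the Thrift server's actual wiring: capping the queue turns overload into visible rejections instead of unbounded heap growth.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedCallQueueDemo {
    // Submit n slow tasks and count how many are rejected once the
    // bounded queue fills up.
    static int submitAll(ThreadPoolExecutor pool, int n) {
        int rejected = 0;
        for (int i = 0; i < n; i++) {
            try {
                pool.execute(() -> {
                    try { Thread.sleep(1000); } catch (InterruptedException ignored) { }
                });
            } catch (RejectedExecutionException e) {
                rejected++;
            }
        }
        return rejected;
    }

    public static void main(String[] args) {
        // 1 worker thread, at most 2 queued calls; further calls are rejected
        // (AbortPolicy) instead of growing the heap without bound.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            1, 1, 60L, TimeUnit.SECONDS,
            new ArrayBlockingQueue<>(2),
            new ThreadPoolExecutor.AbortPolicy());
        System.out.println("rejected: " + submitAll(pool, 10)); // prints: rejected: 7
        pool.shutdownNow();
    }
}
```

With an unbounded LinkedBlockingQueue, the same burst of 10 calls would all be accepted and queued, which is exactly the OOM risk described above.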





[jira] [Updated] (HBASE-13700) Allow Thrift2 HSHA server to have configurable threads

2016-04-14 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13700:
---
Fix Version/s: 0.98.19

> Allow Thrift2 HSHA server to have configurable threads
> --
>
> Key: HBASE-13700
> URL: https://issues.apache.org/jira/browse/HBASE-13700
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.2.0, 0.98.19
>
> Attachments: HBASE-13700-v1.patch, HBASE-13700-v2.patch, 
> HBASE-13700.patch
>
>
> The half-sync half-async server by default starts 5 worker threads. For busy 
> servers that might not be enough, so it should be configurable.
> For the thread pool there should be a way to set the max number of threads so 
> that thread creation doesn't run away. That should be configurable too.
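Configurable min/max worker threads can be sketched as follows. The property names here are made up for illustration (the real HBase/Thrift config keys differ); the point is reading both bounds from configuration instead of hard-coding 5 workers.

```java
import java.util.Properties;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ConfigurableWorkersDemo {
    // Hypothetical property names, for illustration only.
    static ThreadPoolExecutor buildPool(Properties conf) {
        int min = Integer.parseInt(conf.getProperty("thrift.minWorkerThreads", "5"));
        int max = Integer.parseInt(conf.getProperty("thrift.maxWorkerThreads", "20"));
        // SynchronousQueue makes the pool grow toward max rather than queueing,
        // so the max bound actually caps thread creation.
        return new ThreadPoolExecutor(min, max, 60L, TimeUnit.SECONDS,
            new SynchronousQueue<>());
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        conf.setProperty("thrift.maxWorkerThreads", "64");
        ThreadPoolExecutor pool = buildPool(conf);
        System.out.println(pool.getCorePoolSize() + "/" + pool.getMaximumPoolSize());
        // prints: 5/64
        pool.shutdown();
    }
}
```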





[jira] [Updated] (HBASE-15569) Make Bytes.toStringBinary faster

2016-04-14 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-15569:
---
Fix Version/s: 0.98.19

> Make Bytes.toStringBinary faster
> 
>
> Key: HBASE-15569
> URL: https://issues.apache.org/jira/browse/HBASE-15569
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Reporter: Junegunn Choi
>Assignee: Junegunn Choi
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 0.98.19, 1.4.0, 1.2.2
>
> Attachments: HBASE-15569.patch
>
>
> Bytes.toStringBinary is quite expensive due to its use of {{String.format}}. 
> It seems to me that {{String.format}} is overkill for the purpose, and I 
> could actually make the function up to 45 times faster by replacing that part 
> with simpler hand-crafted code.
> This is probably a non-issue for the HBase server, as the function is not 
> used in performance-sensitive contexts, but I figured it wouldn't hurt to 
> make it faster as it's widely used in built-in tools - Shell, 
> {{HFilePrettyPrinter}} with {{-p}} option, etc. - and it can be used in 
> clients.
> h4. Background:
> We have [an HBase monitoring 
> tool|https://github.com/kakao/hbase-region-inspector] that periodically 
> collects the information of the regions and it calls {{Bytes.toStringBinary}} 
> during the process to make some information suitable for display. Profiling 
> revealed that a large portion of the processing time was spent in 
> {{String.format}}.
> h4. Micro-benchmark:
> {code}
> byte[] bytes = new byte[256];
> for (int i = 0; i < bytes.length; ++i) {
>   // Mixture of printable and non-printable characters.
>   // Maximal performance gain (45x) is observed when the array is solely
>   // composed of non-printable characters.
>   bytes[i] = (byte) i;
> }
> long started = System.nanoTime();
> for (int i = 0; i < 100; ++i) {
>   Bytes.toStringBinary(bytes);
> }
> System.out.println(TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - 
> started));
> {code}
> - Without the patch: 134176 ms
> - With the patch: 3890 ms
> I made sure that the new version returns the same value as before and 
> simplified the check for non-printable characters.
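The hand-crafted approach can be sketched like this. This is a simplified illustration of the idea (append printable ASCII directly, hex-escape the rest by hand), not the exact escaping rules of the real Bytes.toStringBinary; the speedup comes from avoiding a String.format call per non-printable byte.

```java
public class ToStringBinarySketch {
    private static final char[] HEX = "0123456789ABCDEF".toCharArray();

    // Simplified sketch: printable ASCII passes through, everything else
    // becomes \xNN via table lookup instead of String.format("\\x%02X", ...).
    static String toStringBinary(byte[] b) {
        StringBuilder sb = new StringBuilder(b.length);
        for (byte value : b) {
            int ch = value & 0xFF;
            if (ch >= ' ' && ch <= '~' && ch != '\\') {
                sb.append((char) ch);
            } else {
                sb.append("\\x").append(HEX[ch >>> 4]).append(HEX[ch & 0x0F]);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(toStringBinary(new byte[]{72, 105, 0, (byte) 0xFF}));
        // prints: Hi\x00\xFF
    }
}
```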





[jira] [Updated] (HBASE-14983) Create metrics for per block type hit/miss ratios

2016-04-14 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14983:
---
Fix Version/s: 0.98.19

> Create metrics for per block type hit/miss ratios
> -
>
> Key: HBASE-14983
> URL: https://issues.apache.org/jira/browse/HBASE-14983
> Project: HBase
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.3.0, 0.98.19, 1.4.0
>
> Attachments: HBASE-14983-branch-1.patch, HBASE-14983-v1.patch, 
> HBASE-14983-v10.patch, HBASE-14983-v2.patch, HBASE-14983-v3.patch, 
> HBASE-14983-v4.patch, HBASE-14983-v5.patch, HBASE-14983-v6.patch, 
> HBASE-14983-v7.patch, HBASE-14983-v8.patch, HBASE-14983-v9.patch, 
> HBASE-14983.patch, Screen Shot 2015-12-15 at 3.33.09 PM.png
>
>
> Missing a root index block is worse than missing a data block. We should know 
> the difference.





[jira] [Updated] (HBASE-15657) Failed snapshot verification may not be detected by TakeSnapshotHandler

2016-04-14 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15657:
---
Attachment: 15657-master.log

Here is part of the master log with some redaction.

Note the exception from CleanerChore - HBASE-15621 was not in place.

> Failed snapshot verification may not be detected by TakeSnapshotHandler
> ---
>
> Key: HBASE-15657
> URL: https://issues.apache.org/jira/browse/HBASE-15657
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 15657-master.log, 15657.v1.patch
>
>
> {code}
> 2016-04-13 07:41:09,572 INFO  
> [(10.0.0.75,16000,1460531913905)-proc-coordinator-pool1-thread-1] 
> procedure.Procedure: Procedure 'snapshot_tb1_create' execution completed
> 2016-04-13 07:41:09,573 DEBUG 
> [(10.0.0.75,16000,1460531913905)-proc-coordinator-pool1-thread-1] 
> procedure.Procedure: Running finish phase.
> 2016-04-13 07:41:09,573 DEBUG 
> [(10.0.0.75,16000,1460531913905)-proc-coordinator-pool1-thread-1] 
> procedure.Procedure: Finished coordinator procedure - removing self from list 
> of  running procedures
> 2016-04-13 07:41:09,573 DEBUG 
> [(10.0.0.75,16000,1460531913905)-proc-coordinator-pool1-thread-1] 
> procedure.ZKProcedureCoordinatorRpcs: Attempting to clean out zk node for op: 
> snapshot_tb1_create
> 2016-04-13 07:41:09,573 INFO  
> [(10.0.0.75,16000,1460531913905)-proc-coordinator-pool1-thread-1] 
> procedure.ZKProcedureUtil: Clearing all znodes for procedure
> {code}
> Encountered a case where snapshot verification failed:
> {code}
> 2016-04-13 07:41:12,308 ERROR [MASTER_TABLE_OPERATIONS-10.0.0.75:16000-0] 
> executor.EventHandler: Caught throwable while processing event 
> C_M_SNAPSHOT_TABLE
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos
>   at 
> org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos$SnapshotDataManifest.internalGetFieldAccessorTable(SnapshotProtos.java:3883)
>   at 
> com.google.protobuf.GeneratedMessage.getDescriptorForType(GeneratedMessage.java:98)
>   at 
> com.google.protobuf.AbstractMessage$Builder.findMissingFields(AbstractMessage.java:789)
>   at 
> com.google.protobuf.AbstractMessage$Builder.findMissingFields(AbstractMessage.java:780)
>   at 
> com.google.protobuf.AbstractMessage$Builder.newUninitializedMessageException(AbstractMessage.java:770)
>   at 
> com.google.protobuf.AbstractMessage.newUninitializedMessageException(AbstractMessage.java:237)
>   at 
> com.google.protobuf.AbstractParser.newUninitializedMessageException(AbstractParser.java:57)
>   at 
> com.google.protobuf.AbstractParser.checkMessageInitialized(AbstractParser.java:71)
>   at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:217)
>   at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:223)
>   at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:49)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos$SnapshotDataManifest.parseFrom(SnapshotProtos.java:4094)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotManifest.readDataManifest(SnapshotManifest.java:433)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotManifest.load(SnapshotManifest.java:273)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotManifest.open(SnapshotManifest.java:119)
>   at 
> org.apache.hadoop.hbase.master.snapshot.MasterSnapshotVerifier.verifySnapshot(MasterSnapshotVerifier.java:108)
>   at 
> org.apache.hadoop.hbase.master.snapshot.TakeSnapshotHandler.process(TakeSnapshotHandler.java:200)
>   at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> Note that NoClassDefFoundError is an Error, not an Exception, so it was not 
> caught by the following clause in TakeSnapshotHandler:
> {code}
> } catch (Exception e) {
> {code}
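The distinction can be demonstrated in isolation (a minimal sketch, not HBase code): NoClassDefFoundError extends Error, which sits beside Exception under Throwable, so a catch (Exception) clause never sees it.

```java
public class CatchDemo {
    // Classifies which catch clause sees a given Throwable.
    static String classify(Throwable t) {
        try {
            throw t;
        } catch (Exception e) {       // Errors do NOT match this clause
            return "exception";
        } catch (Throwable other) {   // matches Error and Exception alike
            return "throwable";
        }
    }

    public static void main(String[] args) {
        System.out.println(classify(new NoClassDefFoundError("SnapshotProtos")));
        // prints: throwable
        System.out.println(classify(new RuntimeException("boom")));
        // prints: exception
    }
}
```

This is why widening the handler to catch Throwable (or Error explicitly) is needed for the snapshot failure to be detected.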





[jira] [Commented] (HBASE-15454) Archive store files older than max age

2016-04-14 Thread Clara Xiong (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15241858#comment-15241858
 ] 

Clara Xiong commented on HBASE-15454:
-

For major compaction, we have a laundry list of checks in 
shouldPerformMajorCompaction to determine whether we should do a major 
compaction on a CompactionChecker run. You probably want that for this special 
compaction too. And in SortedCompactionPolicy.selectCompaction, we spend the 
effort to determine whether we can actually do a major compaction: no files 
already compacting, no reference files, file count < max files allowed. Please 
put the logic there rather than passing in candidate files.

Or, if you believe this compaction is mutually exclusive/interchangeable with 
major compaction, which is my interpretation, you could even make it 
interchangeable with major compaction via a config switch, so you can 
use/extend the existing triggering logic.

> Archive store files older than max age
> --
>
> Key: HBASE-15454
> URL: https://issues.apache.org/jira/browse/HBASE-15454
> Project: HBase
>  Issue Type: Sub-task
>  Components: Compaction
>Affects Versions: 2.0.0, 1.3.0, 0.98.18, 1.4.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0, 1.3.0, 0.98.19, 1.4.0
>
> Attachments: HBASE-15454-v1.patch, HBASE-15454-v2.patch, 
> HBASE-15454-v3.patch, HBASE-15454.patch
>
>
> Sometimes old data is rarely touched but we cannot remove it. So archive it 
> into several big files (by year or something) and use erasure coding (EC) to 
> reduce the redundancy.





[jira] [Commented] (HBASE-15657) Failed snapshot verification may not be detected by TakeSnapshotHandler

2016-04-14 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15241828#comment-15241828
 ] 

Matteo Bertozzi commented on HBASE-15657:
-

The master verifier runs on the master; whoever creates the bad manifest is on 
the RS, unless the region is offline and the manifest is created by the master.

> Failed snapshot verification may not be detected by TakeSnapshotHandler
> ---
>
> Key: HBASE-15657
> URL: https://issues.apache.org/jira/browse/HBASE-15657
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 15657.v1.patch
>
>
> {code}
> 2016-04-13 07:41:09,572 INFO  
> [(10.0.0.75,16000,1460531913905)-proc-coordinator-pool1-thread-1] 
> procedure.Procedure: Procedure 'snapshot_tb1_create' execution completed
> 2016-04-13 07:41:09,573 DEBUG 
> [(10.0.0.75,16000,1460531913905)-proc-coordinator-pool1-thread-1] 
> procedure.Procedure: Running finish phase.
> 2016-04-13 07:41:09,573 DEBUG 
> [(10.0.0.75,16000,1460531913905)-proc-coordinator-pool1-thread-1] 
> procedure.Procedure: Finished coordinator procedure - removing self from list 
> of  running procedures
> 2016-04-13 07:41:09,573 DEBUG 
> [(10.0.0.75,16000,1460531913905)-proc-coordinator-pool1-thread-1] 
> procedure.ZKProcedureCoordinatorRpcs: Attempting to clean out zk node for op: 
> snapshot_tb1_create
> 2016-04-13 07:41:09,573 INFO  
> [(10.0.0.75,16000,1460531913905)-proc-coordinator-pool1-thread-1] 
> procedure.ZKProcedureUtil: Clearing all znodes for procedure
> {code}
> Encountered a case where snapshot verification failed:
> {code}
> 2016-04-13 07:41:12,308 ERROR [MASTER_TABLE_OPERATIONS-10.0.0.75:16000-0] 
> executor.EventHandler: Caught throwable while processing event 
> C_M_SNAPSHOT_TABLE
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos
>   at 
> org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos$SnapshotDataManifest.internalGetFieldAccessorTable(SnapshotProtos.java:3883)
>   at 
> com.google.protobuf.GeneratedMessage.getDescriptorForType(GeneratedMessage.java:98)
>   at 
> com.google.protobuf.AbstractMessage$Builder.findMissingFields(AbstractMessage.java:789)
>   at 
> com.google.protobuf.AbstractMessage$Builder.findMissingFields(AbstractMessage.java:780)
>   at 
> com.google.protobuf.AbstractMessage$Builder.newUninitializedMessageException(AbstractMessage.java:770)
>   at 
> com.google.protobuf.AbstractMessage.newUninitializedMessageException(AbstractMessage.java:237)
>   at 
> com.google.protobuf.AbstractParser.newUninitializedMessageException(AbstractParser.java:57)
>   at 
> com.google.protobuf.AbstractParser.checkMessageInitialized(AbstractParser.java:71)
>   at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:217)
>   at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:223)
>   at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:49)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos$SnapshotDataManifest.parseFrom(SnapshotProtos.java:4094)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotManifest.readDataManifest(SnapshotManifest.java:433)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotManifest.load(SnapshotManifest.java:273)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotManifest.open(SnapshotManifest.java:119)
>   at 
> org.apache.hadoop.hbase.master.snapshot.MasterSnapshotVerifier.verifySnapshot(MasterSnapshotVerifier.java:108)
>   at 
> org.apache.hadoop.hbase.master.snapshot.TakeSnapshotHandler.process(TakeSnapshotHandler.java:200)
>   at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> Note NoClassDefFoundError is not Exception. So it was not caught by the 
> following clause in TakeSnapshotHandler :
> {code}
> } catch (Exception e) {
> {code}
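The root cause of the missed failure is the Java throwable hierarchy: `Error` and `Exception` are sibling subclasses of `Throwable`, so a `catch (Exception e)` clause lets a `NoClassDefFoundError` propagate past the handler. A minimal, self-contained sketch (class and method names here are illustrative, not from the HBase code base):

```java
// Demonstrates why `catch (Exception e)` in TakeSnapshotHandler-style code
// misses java.lang.Error subclasses such as NoClassDefFoundError.
public class CatchDemo {

    // Mirrors the buggy pattern: only Exception is caught, so an Error escapes.
    static String runWithExceptionCatch(Runnable task) {
        try {
            task.run();
            return "ok";
        } catch (Exception e) {   // does NOT match java.lang.Error
            return "caught-exception";
        }
    }

    // Catching Throwable covers both Exception and Error.
    static String runWithThrowableCatch(Runnable task) {
        try {
            task.run();
            return "ok";
        } catch (Throwable t) {   // matches NoClassDefFoundError too
            return "caught-throwable";
        }
    }

    public static void main(String[] args) {
        Runnable failing = () -> {
            throw new NoClassDefFoundError("SnapshotProtos");
        };
        try {
            runWithExceptionCatch(failing);
            System.out.println("handled");
        } catch (Error e) {
            // The Error escaped the Exception-only handler.
            System.out.println("escaped");
        }
        System.out.println(runWithThrowableCatch(failing));
    }
}
```

This is why a fix along the lines of the attached patch widens the catch (or handles `Throwable`) so that verification failures caused by `Error`s are still reported to the snapshot handler instead of silently killing the executor task.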



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15657) Failed snapshot verification may not be detected by TakeSnapshotHandler

2016-04-14 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15241821#comment-15241821
 ] 

Ted Yu commented on HBASE-15657:


Interesting. MasterSnapshotVerifier is a class on the master, though.

> Failed snapshot verification may not be detected by TakeSnapshotHandler
> ---
>
> Key: HBASE-15657
> URL: https://issues.apache.org/jira/browse/HBASE-15657
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 15657.v1.patch
>
>
> {code}
> 2016-04-13 07:41:09,572 INFO  
> [(10.0.0.75,16000,1460531913905)-proc-coordinator-pool1-thread-1] 
> procedure.Procedure: Procedure 'snapshot_tb1_create' execution completed
> 2016-04-13 07:41:09,573 DEBUG 
> [(10.0.0.75,16000,1460531913905)-proc-coordinator-pool1-thread-1] 
> procedure.Procedure: Running finish phase.
> 2016-04-13 07:41:09,573 DEBUG 
> [(10.0.0.75,16000,1460531913905)-proc-coordinator-pool1-thread-1] 
> procedure.Procedure: Finished coordinator procedure - removing self from list 
> of  running procedures
> 2016-04-13 07:41:09,573 DEBUG 
> [(10.0.0.75,16000,1460531913905)-proc-coordinator-pool1-thread-1] 
> procedure.ZKProcedureCoordinatorRpcs: Attempting to clean out zk node for op: 
> snapshot_tb1_create
> 2016-04-13 07:41:09,573 INFO  
> [(10.0.0.75,16000,1460531913905)-proc-coordinator-pool1-thread-1] 
> procedure.ZKProcedureUtil: Clearing all znodes for procedure
> {code}
> Encountered a case where snapshot verification failed:
> {code}
> 2016-04-13 07:41:12,308 ERROR [MASTER_TABLE_OPERATIONS-10.0.0.75:16000-0] 
> executor.EventHandler: Caught throwable while processing event 
> C_M_SNAPSHOT_TABLE
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos
>   at 
> org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos$SnapshotDataManifest.internalGetFieldAccessorTable(SnapshotProtos.java:3883)
>   at 
> com.google.protobuf.GeneratedMessage.getDescriptorForType(GeneratedMessage.java:98)
>   at 
> com.google.protobuf.AbstractMessage$Builder.findMissingFields(AbstractMessage.java:789)
>   at 
> com.google.protobuf.AbstractMessage$Builder.findMissingFields(AbstractMessage.java:780)
>   at 
> com.google.protobuf.AbstractMessage$Builder.newUninitializedMessageException(AbstractMessage.java:770)
>   at 
> com.google.protobuf.AbstractMessage.newUninitializedMessageException(AbstractMessage.java:237)
>   at 
> com.google.protobuf.AbstractParser.newUninitializedMessageException(AbstractParser.java:57)
>   at 
> com.google.protobuf.AbstractParser.checkMessageInitialized(AbstractParser.java:71)
>   at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:217)
>   at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:223)
>   at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:49)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos$SnapshotDataManifest.parseFrom(SnapshotProtos.java:4094)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotManifest.readDataManifest(SnapshotManifest.java:433)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotManifest.load(SnapshotManifest.java:273)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotManifest.open(SnapshotManifest.java:119)
>   at 
> org.apache.hadoop.hbase.master.snapshot.MasterSnapshotVerifier.verifySnapshot(MasterSnapshotVerifier.java:108)
>   at 
> org.apache.hadoop.hbase.master.snapshot.TakeSnapshotHandler.process(TakeSnapshotHandler.java:200)
>   at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> Note NoClassDefFoundError is not Exception. So it was not caught by the 
> following clause in TakeSnapshotHandler :
> {code}
> } catch (Exception e) {
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15638) Shade protobuf

2016-04-14 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15241822#comment-15241822
 ] 

Sean Busbey commented on HBASE-15638:
-

We can't rely on shading doing relocation within the Hadoop classes given our 
current deployment instructions: we expressly tell downstream folks to replace 
the Hadoop jars we ship with ones from their Hadoop installation.
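For context, shade relocation only rewrites bytecode references inside the artifact being shaded; classes loaded from a user-supplied Hadoop jar still reference the original, unrelocated {{com.google.protobuf}} packages. A minimal maven-shade-plugin sketch of the kind of relocation being discussed (the {{shadedPattern}} value is illustrative only, not the package HBase settled on):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <relocations>
          <relocation>
            <!-- Rewrites com.google.protobuf references in THIS jar only. -->
            <pattern>com.google.protobuf</pattern>
            <shadedPattern>org.apache.hbase.shaded.com.google.protobuf</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

Since swapped-in Hadoop jars are never processed by this plugin, any pb objects exchanged with HDFS (e.g. the ByteString in FanOutOneBlockAsyncDFSOutputSaslHelper) must keep using Hadoop's unshaded protobuf types.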

> Shade protobuf
> --
>
> Key: HBASE-15638
> URL: https://issues.apache.org/jira/browse/HBASE-15638
> Project: HBase
>  Issue Type: Bug
>  Components: Protobufs
>Reporter: stack
> Attachments: 15638v2.patch, as.far.as.server.patch
>
>
> Shade protobufs so we can move to a different version without breaking the 
> world. We want to get up on pb3 because it has unsafe methods that allow us 
> save on copies; it also has some means of dealing with BBs so we can pass it 
> offheap DBBs. We'll probably want to change PB3 to open it up some more too 
> so we can stay offheap as we traverse PB. This issue comes of [~anoop.hbase] 
> and [~ram_krish]'s offheaping of the readpath work.
> This change is mostly straight-forward but there are some tricky bits:
>  # How to interface with HDFS? It wants its ByteStrings. Here in particular 
> in FanOutOneBlockAsyncDFSOutputSaslHelper:
> {code}
>   if (payload != null) {
> builder.setPayload(ByteString.copyFrom(payload));
>   }
> {code}
>  # [~busbey] also points out that we need to take care of endpoints done as 
> pb. Test at least.
> Let me raise this one on the dev list too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15657) Failed snapshot verification may not be detected by TakeSnapshotHandler

2016-04-14 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15241804#comment-15241804
 ] 

Matteo Bertozzi commented on HBASE-15657:
-

The useful information is probably in the RS log, since the master coordination 
said that everything was OK.

> Failed snapshot verification may not be detected by TakeSnapshotHandler
> ---
>
> Key: HBASE-15657
> URL: https://issues.apache.org/jira/browse/HBASE-15657
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 15657.v1.patch
>
>
> {code}
> 2016-04-13 07:41:09,572 INFO  
> [(10.0.0.75,16000,1460531913905)-proc-coordinator-pool1-thread-1] 
> procedure.Procedure: Procedure 'snapshot_tb1_create' execution completed
> 2016-04-13 07:41:09,573 DEBUG 
> [(10.0.0.75,16000,1460531913905)-proc-coordinator-pool1-thread-1] 
> procedure.Procedure: Running finish phase.
> 2016-04-13 07:41:09,573 DEBUG 
> [(10.0.0.75,16000,1460531913905)-proc-coordinator-pool1-thread-1] 
> procedure.Procedure: Finished coordinator procedure - removing self from list 
> of  running procedures
> 2016-04-13 07:41:09,573 DEBUG 
> [(10.0.0.75,16000,1460531913905)-proc-coordinator-pool1-thread-1] 
> procedure.ZKProcedureCoordinatorRpcs: Attempting to clean out zk node for op: 
> snapshot_tb1_create
> 2016-04-13 07:41:09,573 INFO  
> [(10.0.0.75,16000,1460531913905)-proc-coordinator-pool1-thread-1] 
> procedure.ZKProcedureUtil: Clearing all znodes for procedure
> {code}
> Encountered a case where snapshot verification failed:
> {code}
> 2016-04-13 07:41:12,308 ERROR [MASTER_TABLE_OPERATIONS-10.0.0.75:16000-0] 
> executor.EventHandler: Caught throwable while processing event 
> C_M_SNAPSHOT_TABLE
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos
>   at 
> org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos$SnapshotDataManifest.internalGetFieldAccessorTable(SnapshotProtos.java:3883)
>   at 
> com.google.protobuf.GeneratedMessage.getDescriptorForType(GeneratedMessage.java:98)
>   at 
> com.google.protobuf.AbstractMessage$Builder.findMissingFields(AbstractMessage.java:789)
>   at 
> com.google.protobuf.AbstractMessage$Builder.findMissingFields(AbstractMessage.java:780)
>   at 
> com.google.protobuf.AbstractMessage$Builder.newUninitializedMessageException(AbstractMessage.java:770)
>   at 
> com.google.protobuf.AbstractMessage.newUninitializedMessageException(AbstractMessage.java:237)
>   at 
> com.google.protobuf.AbstractParser.newUninitializedMessageException(AbstractParser.java:57)
>   at 
> com.google.protobuf.AbstractParser.checkMessageInitialized(AbstractParser.java:71)
>   at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:217)
>   at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:223)
>   at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:49)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos$SnapshotDataManifest.parseFrom(SnapshotProtos.java:4094)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotManifest.readDataManifest(SnapshotManifest.java:433)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotManifest.load(SnapshotManifest.java:273)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotManifest.open(SnapshotManifest.java:119)
>   at 
> org.apache.hadoop.hbase.master.snapshot.MasterSnapshotVerifier.verifySnapshot(MasterSnapshotVerifier.java:108)
>   at 
> org.apache.hadoop.hbase.master.snapshot.TakeSnapshotHandler.process(TakeSnapshotHandler.java:200)
>   at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> Note NoClassDefFoundError is not Exception. So it was not caught by the 
> following clause in TakeSnapshotHandler :
> {code}
> } catch (Exception e) {
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15657) Failed snapshot verification may not be detected by TakeSnapshotHandler

2016-04-14 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15657:
---
Description: 
{code}
2016-04-13 07:41:09,572 INFO  
[(10.0.0.75,16000,1460531913905)-proc-coordinator-pool1-thread-1] 
procedure.Procedure: Procedure 'snapshot_tb1_create' execution completed
2016-04-13 07:41:09,573 DEBUG 
[(10.0.0.75,16000,1460531913905)-proc-coordinator-pool1-thread-1] 
procedure.Procedure: Running finish phase.
2016-04-13 07:41:09,573 DEBUG 
[(10.0.0.75,16000,1460531913905)-proc-coordinator-pool1-thread-1] 
procedure.Procedure: Finished coordinator procedure - removing self from list 
of  running procedures
2016-04-13 07:41:09,573 DEBUG 
[(10.0.0.75,16000,1460531913905)-proc-coordinator-pool1-thread-1] 
procedure.ZKProcedureCoordinatorRpcs: Attempting to clean out zk node for op:   
  snapshot_tb1_create
2016-04-13 07:41:09,573 INFO  
[(10.0.0.75,16000,1460531913905)-proc-coordinator-pool1-thread-1] 
procedure.ZKProcedureUtil: Clearing all znodes for procedure
{code}
Encountered a case where snapshot verification failed:
{code}
2016-04-13 07:41:12,308 ERROR [MASTER_TABLE_OPERATIONS-10.0.0.75:16000-0] 
executor.EventHandler: Caught throwable while processing event 
C_M_SNAPSHOT_TABLE
java.lang.NoClassDefFoundError: Could not initialize class 
org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos
  at 
org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos$SnapshotDataManifest.internalGetFieldAccessorTable(SnapshotProtos.java:3883)
  at 
com.google.protobuf.GeneratedMessage.getDescriptorForType(GeneratedMessage.java:98)
  at 
com.google.protobuf.AbstractMessage$Builder.findMissingFields(AbstractMessage.java:789)
  at 
com.google.protobuf.AbstractMessage$Builder.findMissingFields(AbstractMessage.java:780)
  at 
com.google.protobuf.AbstractMessage$Builder.newUninitializedMessageException(AbstractMessage.java:770)
  at 
com.google.protobuf.AbstractMessage.newUninitializedMessageException(AbstractMessage.java:237)
  at 
com.google.protobuf.AbstractParser.newUninitializedMessageException(AbstractParser.java:57)
  at 
com.google.protobuf.AbstractParser.checkMessageInitialized(AbstractParser.java:71)
  at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:217)
  at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:223)
  at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:49)
  at 
org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos$SnapshotDataManifest.parseFrom(SnapshotProtos.java:4094)
  at 
org.apache.hadoop.hbase.snapshot.SnapshotManifest.readDataManifest(SnapshotManifest.java:433)
  at 
org.apache.hadoop.hbase.snapshot.SnapshotManifest.load(SnapshotManifest.java:273)
  at 
org.apache.hadoop.hbase.snapshot.SnapshotManifest.open(SnapshotManifest.java:119)
  at 
org.apache.hadoop.hbase.master.snapshot.MasterSnapshotVerifier.verifySnapshot(MasterSnapshotVerifier.java:108)
  at 
org.apache.hadoop.hbase.master.snapshot.TakeSnapshotHandler.process(TakeSnapshotHandler.java:200)
  at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  at java.lang.Thread.run(Thread.java:745)
{code}
Note NoClassDefFoundError is not Exception. So it was not caught by the 
following clause in TakeSnapshotHandler :
{code}
} catch (Exception e) {
{code}

  was:
Encountered a case where snapshot verification failed:
{code}
2016-04-13 07:41:12,308 ERROR [MASTER_TABLE_OPERATIONS-10.0.0.75:16000-0] 
executor.EventHandler: Caught throwable while processing event 
C_M_SNAPSHOT_TABLE
java.lang.NoClassDefFoundError: Could not initialize class 
org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos
  at 
org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos$SnapshotDataManifest.internalGetFieldAccessorTable(SnapshotProtos.java:3883)
  at 
com.google.protobuf.GeneratedMessage.getDescriptorForType(GeneratedMessage.java:98)
  at 
com.google.protobuf.AbstractMessage$Builder.findMissingFields(AbstractMessage.java:789)
  at 
com.google.protobuf.AbstractMessage$Builder.findMissingFields(AbstractMessage.java:780)
  at 
com.google.protobuf.AbstractMessage$Builder.newUninitializedMessageException(AbstractMessage.java:770)
  at 
com.google.protobuf.AbstractMessage.newUninitializedMessageException(AbstractMessage.java:237)
  at 
com.google.protobuf.AbstractParser.newUninitializedMessageException(AbstractParser.java:57)
  at 
com.google.protobuf.AbstractParser.checkMessageInitialized(AbstractParser.java:71)
  at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:217)
  at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:223)
  at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:49)
  at 
org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos$Snapsho

[jira] [Commented] (HBASE-15657) Failed snapshot verification may not be detected by TakeSnapshotHandler

2016-04-14 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15241793#comment-15241793
 ] 

Ted Yu commented on HBASE-15657:


Will attach master log after redaction.

The symptom of this issue was that the client side timed out waiting for the 
snapshot to finish:
{code}
2016-04-13 
07:45:53,018|machine|INFO|20241|139633983403840|MainThread|2016-04-13 
07:45:53,012 INFO [adlfs2] sink.AdlAzureIaasSink: inside put 
metricsadl_server_errors 0
2016-04-13 07:46:03,153|machine|INFO|20241|139633983403840|MainThread|
2016-04-13 07:46:03,153|machine|INFO|20241|139633983403840|MainThread|ERROR: 
Snapshot 'snapshot_tb1_create' wasn't completed in expectedTime:30 ms
2016-04-13 07:46:03,154|machine|INFO|20241|139633983403840|MainThread|
2016-04-13 07:46:03,160|machine|INFO|20241|139633983403840|MainThread|Here is 
some help for this command:
2016-04-13 07:46:03,160|machine|INFO|20241|139633983403840|MainThread|Take a 
snapshot of specified table. Examples:
{code}

> Failed snapshot verification may not be detected by TakeSnapshotHandler
> ---
>
> Key: HBASE-15657
> URL: https://issues.apache.org/jira/browse/HBASE-15657
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 15657.v1.patch
>
>
> Encountered a case where snapshot verification failed:
> {code}
> 2016-04-13 07:41:12,308 ERROR [MASTER_TABLE_OPERATIONS-10.0.0.75:16000-0] 
> executor.EventHandler: Caught throwable while processing event 
> C_M_SNAPSHOT_TABLE
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos
>   at 
> org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos$SnapshotDataManifest.internalGetFieldAccessorTable(SnapshotProtos.java:3883)
>   at 
> com.google.protobuf.GeneratedMessage.getDescriptorForType(GeneratedMessage.java:98)
>   at 
> com.google.protobuf.AbstractMessage$Builder.findMissingFields(AbstractMessage.java:789)
>   at 
> com.google.protobuf.AbstractMessage$Builder.findMissingFields(AbstractMessage.java:780)
>   at 
> com.google.protobuf.AbstractMessage$Builder.newUninitializedMessageException(AbstractMessage.java:770)
>   at 
> com.google.protobuf.AbstractMessage.newUninitializedMessageException(AbstractMessage.java:237)
>   at 
> com.google.protobuf.AbstractParser.newUninitializedMessageException(AbstractParser.java:57)
>   at 
> com.google.protobuf.AbstractParser.checkMessageInitialized(AbstractParser.java:71)
>   at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:217)
>   at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:223)
>   at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:49)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos$SnapshotDataManifest.parseFrom(SnapshotProtos.java:4094)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotManifest.readDataManifest(SnapshotManifest.java:433)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotManifest.load(SnapshotManifest.java:273)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotManifest.open(SnapshotManifest.java:119)
>   at 
> org.apache.hadoop.hbase.master.snapshot.MasterSnapshotVerifier.verifySnapshot(MasterSnapshotVerifier.java:108)
>   at 
> org.apache.hadoop.hbase.master.snapshot.TakeSnapshotHandler.process(TakeSnapshotHandler.java:200)
>   at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> Note NoClassDefFoundError is not Exception. So it was not caught by the 
> following clause in TakeSnapshotHandler :
> {code}
> } catch (Exception e) {
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15657) Failed snapshot verification may not be detected by TakeSnapshotHandler

2016-04-14 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15241782#comment-15241782
 ] 

Matteo Bertozzi commented on HBASE-15657:
-

How did you get into that case?
Do you have Master and RS logs to share?

> Failed snapshot verification may not be detected by TakeSnapshotHandler
> ---
>
> Key: HBASE-15657
> URL: https://issues.apache.org/jira/browse/HBASE-15657
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 15657.v1.patch
>
>
> Encountered a case where snapshot verification failed:
> {code}
> 2016-04-13 07:41:12,308 ERROR [MASTER_TABLE_OPERATIONS-10.0.0.75:16000-0] 
> executor.EventHandler: Caught throwable while processing event 
> C_M_SNAPSHOT_TABLE
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos
>   at 
> org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos$SnapshotDataManifest.internalGetFieldAccessorTable(SnapshotProtos.java:3883)
>   at 
> com.google.protobuf.GeneratedMessage.getDescriptorForType(GeneratedMessage.java:98)
>   at 
> com.google.protobuf.AbstractMessage$Builder.findMissingFields(AbstractMessage.java:789)
>   at 
> com.google.protobuf.AbstractMessage$Builder.findMissingFields(AbstractMessage.java:780)
>   at 
> com.google.protobuf.AbstractMessage$Builder.newUninitializedMessageException(AbstractMessage.java:770)
>   at 
> com.google.protobuf.AbstractMessage.newUninitializedMessageException(AbstractMessage.java:237)
>   at 
> com.google.protobuf.AbstractParser.newUninitializedMessageException(AbstractParser.java:57)
>   at 
> com.google.protobuf.AbstractParser.checkMessageInitialized(AbstractParser.java:71)
>   at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:217)
>   at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:223)
>   at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:49)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos$SnapshotDataManifest.parseFrom(SnapshotProtos.java:4094)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotManifest.readDataManifest(SnapshotManifest.java:433)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotManifest.load(SnapshotManifest.java:273)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotManifest.open(SnapshotManifest.java:119)
>   at 
> org.apache.hadoop.hbase.master.snapshot.MasterSnapshotVerifier.verifySnapshot(MasterSnapshotVerifier.java:108)
>   at 
> org.apache.hadoop.hbase.master.snapshot.TakeSnapshotHandler.process(TakeSnapshotHandler.java:200)
>   at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> Note NoClassDefFoundError is not Exception. So it was not caught by the 
> following clause in TakeSnapshotHandler :
> {code}
> } catch (Exception e) {
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15657) Failed snapshot verification may not be detected by TakeSnapshotHandler

2016-04-14 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15657:
---
Attachment: 15657.v1.patch

> Failed snapshot verification may not be detected by TakeSnapshotHandler
> ---
>
> Key: HBASE-15657
> URL: https://issues.apache.org/jira/browse/HBASE-15657
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 15657.v1.patch
>
>
> Encountered a case where snapshot verification failed:
> {code}
> 2016-04-13 07:41:12,308 ERROR [MASTER_TABLE_OPERATIONS-10.0.0.75:16000-0] 
> executor.EventHandler: Caught throwable while processing event 
> C_M_SNAPSHOT_TABLE
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos
>   at 
> org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos$SnapshotDataManifest.internalGetFieldAccessorTable(SnapshotProtos.java:3883)
>   at 
> com.google.protobuf.GeneratedMessage.getDescriptorForType(GeneratedMessage.java:98)
>   at 
> com.google.protobuf.AbstractMessage$Builder.findMissingFields(AbstractMessage.java:789)
>   at 
> com.google.protobuf.AbstractMessage$Builder.findMissingFields(AbstractMessage.java:780)
>   at 
> com.google.protobuf.AbstractMessage$Builder.newUninitializedMessageException(AbstractMessage.java:770)
>   at 
> com.google.protobuf.AbstractMessage.newUninitializedMessageException(AbstractMessage.java:237)
>   at 
> com.google.protobuf.AbstractParser.newUninitializedMessageException(AbstractParser.java:57)
>   at 
> com.google.protobuf.AbstractParser.checkMessageInitialized(AbstractParser.java:71)
>   at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:217)
>   at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:223)
>   at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:49)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos$SnapshotDataManifest.parseFrom(SnapshotProtos.java:4094)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotManifest.readDataManifest(SnapshotManifest.java:433)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotManifest.load(SnapshotManifest.java:273)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotManifest.open(SnapshotManifest.java:119)
>   at 
> org.apache.hadoop.hbase.master.snapshot.MasterSnapshotVerifier.verifySnapshot(MasterSnapshotVerifier.java:108)
>   at 
> org.apache.hadoop.hbase.master.snapshot.TakeSnapshotHandler.process(TakeSnapshotHandler.java:200)
>   at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> Note NoClassDefFoundError is not Exception. So it was not caught by the 
> following clause in TakeSnapshotHandler :
> {code}
> } catch (Exception e) {
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15657) Failed snapshot verification may not be detected by TakeSnapshotHandler

2016-04-14 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-15657:
---
Status: Patch Available  (was: Open)

> Failed snapshot verification may not be detected by TakeSnapshotHandler
> ---
>
> Key: HBASE-15657
> URL: https://issues.apache.org/jira/browse/HBASE-15657
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Attachments: 15657.v1.patch
>
>
> Encountered a case where snapshot verification failed:
> {code}
> 2016-04-13 07:41:12,308 ERROR [MASTER_TABLE_OPERATIONS-10.0.0.75:16000-0] 
> executor.EventHandler: Caught throwable while processing event 
> C_M_SNAPSHOT_TABLE
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos
>   at 
> org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos$SnapshotDataManifest.internalGetFieldAccessorTable(SnapshotProtos.java:3883)
>   at 
> com.google.protobuf.GeneratedMessage.getDescriptorForType(GeneratedMessage.java:98)
>   at 
> com.google.protobuf.AbstractMessage$Builder.findMissingFields(AbstractMessage.java:789)
>   at 
> com.google.protobuf.AbstractMessage$Builder.findMissingFields(AbstractMessage.java:780)
>   at 
> com.google.protobuf.AbstractMessage$Builder.newUninitializedMessageException(AbstractMessage.java:770)
>   at 
> com.google.protobuf.AbstractMessage.newUninitializedMessageException(AbstractMessage.java:237)
>   at 
> com.google.protobuf.AbstractParser.newUninitializedMessageException(AbstractParser.java:57)
>   at 
> com.google.protobuf.AbstractParser.checkMessageInitialized(AbstractParser.java:71)
>   at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:217)
>   at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:223)
>   at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:49)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos$SnapshotDataManifest.parseFrom(SnapshotProtos.java:4094)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotManifest.readDataManifest(SnapshotManifest.java:433)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotManifest.load(SnapshotManifest.java:273)
>   at 
> org.apache.hadoop.hbase.snapshot.SnapshotManifest.open(SnapshotManifest.java:119)
>   at 
> org.apache.hadoop.hbase.master.snapshot.MasterSnapshotVerifier.verifySnapshot(MasterSnapshotVerifier.java:108)
>   at 
> org.apache.hadoop.hbase.master.snapshot.TakeSnapshotHandler.process(TakeSnapshotHandler.java:200)
>   at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> Note NoClassDefFoundError is not Exception. So it was not caught by the 
> following clause in TakeSnapshotHandler :
> {code}
> } catch (Exception e) {
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15579) Procedure V2 - Remove synchronized around nonce in Procedure submit

2016-04-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15241772#comment-15241772
 ] 

Hadoop QA commented on HBASE-15579:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
13s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 36s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 28s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 
22s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
2s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
14s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s 
{color} | {color:green} master passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s 
{color} | {color:green} master passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
31s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 14s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 47s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 47s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
6s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
1s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
89m 41s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed with JDK v1.8.0 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 5m 34s 
{color} | {color:green} hbase-procedure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 131m 22s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12798772/HBASE-15579-v1.patch |
| JIRA Issue | HBASE-15579 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux asf911.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 7efb9ed |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  /home/jenkins/tools/java/jdk1.8.0:1.8.0 
/usr/local/jenkins/java/jdk1.7.0_79:1.7.0_79 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/1418/testReport/ |
| modules

[jira] [Created] (HBASE-15657) Failed snapshot verification may not be detected by TakeSnapshotHandler

2016-04-14 Thread Ted Yu (JIRA)
Ted Yu created HBASE-15657:
--

 Summary: Failed snapshot verification may not be detected by 
TakeSnapshotHandler
 Key: HBASE-15657
 URL: https://issues.apache.org/jira/browse/HBASE-15657
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu


Encountered a case where snapshot verification failed:
{code}
2016-04-13 07:41:12,308 ERROR [MASTER_TABLE_OPERATIONS-10.0.0.75:16000-0] 
executor.EventHandler: Caught throwable while processing event 
C_M_SNAPSHOT_TABLE
java.lang.NoClassDefFoundError: Could not initialize class 
org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos
  at 
org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos$SnapshotDataManifest.internalGetFieldAccessorTable(SnapshotProtos.java:3883)
  at 
com.google.protobuf.GeneratedMessage.getDescriptorForType(GeneratedMessage.java:98)
  at 
com.google.protobuf.AbstractMessage$Builder.findMissingFields(AbstractMessage.java:789)
  at 
com.google.protobuf.AbstractMessage$Builder.findMissingFields(AbstractMessage.java:780)
  at 
com.google.protobuf.AbstractMessage$Builder.newUninitializedMessageException(AbstractMessage.java:770)
  at 
com.google.protobuf.AbstractMessage.newUninitializedMessageException(AbstractMessage.java:237)
  at 
com.google.protobuf.AbstractParser.newUninitializedMessageException(AbstractParser.java:57)
  at 
com.google.protobuf.AbstractParser.checkMessageInitialized(AbstractParser.java:71)
  at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:217)
  at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:223)
  at com.google.protobuf.AbstractParser.parseFrom(AbstractParser.java:49)
  at 
org.apache.hadoop.hbase.protobuf.generated.SnapshotProtos$SnapshotDataManifest.parseFrom(SnapshotProtos.java:4094)
  at 
org.apache.hadoop.hbase.snapshot.SnapshotManifest.readDataManifest(SnapshotManifest.java:433)
  at 
org.apache.hadoop.hbase.snapshot.SnapshotManifest.load(SnapshotManifest.java:273)
  at 
org.apache.hadoop.hbase.snapshot.SnapshotManifest.open(SnapshotManifest.java:119)
  at 
org.apache.hadoop.hbase.master.snapshot.MasterSnapshotVerifier.verifySnapshot(MasterSnapshotVerifier.java:108)
  at 
org.apache.hadoop.hbase.master.snapshot.TakeSnapshotHandler.process(TakeSnapshotHandler.java:200)
  at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
  at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
  at java.lang.Thread.run(Thread.java:745)
{code}
Note that NoClassDefFoundError is an Error, not an Exception, so it was not 
caught by the following clause in TakeSnapshotHandler:
{code}
} catch (Exception e) {
{code}
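The distinction can be demonstrated with a small standalone example (hypothetical class name, not HBase code): NoClassDefFoundError extends Error, so a {{catch (Exception e)}} clause never sees it, while a {{catch (Throwable t)}} clause does.

```java
public class ThrowableCatchDemo {
    /** Runs r and reports which catch clause (if any) handled what it threw. */
    static String catchClause(Runnable r) {
        try {
            r.run();
            return "none";
        } catch (Exception e) {
            return "Exception";   // RuntimeException and checked exceptions land here
        } catch (Throwable t) {
            return "Throwable";   // Errors such as NoClassDefFoundError land here
        }
    }

    public static void main(String[] args) {
        // NoClassDefFoundError is an Error, not an Exception:
        System.out.println(catchClause(() -> { throw new NoClassDefFoundError("SnapshotProtos"); }));
        // An ordinary RuntimeException is caught by the Exception clause:
        System.out.println(catchClause(() -> { throw new RuntimeException("boom"); }));
    }
}
```

One fix along these lines would be to catch Throwable (or Error explicitly) around snapshot verification, so a verification failure of this kind is not silently dropped.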



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15405) Synchronize final results logging single thread in PE, fix wrong defaults in help message

2016-04-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15241744#comment-15241744
 ] 

Hudson commented on HBASE-15405:


SUCCESS: Integrated in HBase-1.3-IT #613 (See 
[https://builds.apache.org/job/HBase-1.3-IT/613/])
HBASE-15405 Fix PE logging and wrong defaults in help message. (stack: rev 
6d2dc2a8bd9f9684143a1663ccc7470b2add4643)
* hbase-server/src/test/java/org/apache/hadoop/hbase/PerformanceEvaluation.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/util/YammerHistogramUtils.java


> Synchronize final results logging single thread in PE, fix wrong defaults in 
> help message
> -
>
> Key: HBASE-15405
> URL: https://issues.apache.org/jira/browse/HBASE-15405
> Project: HBase
>  Issue Type: Bug
>  Components: Performance
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Fix For: 2.0.0, 1.3.0, 1.4.0
>
> Attachments: HBASE-15405-branch-1.patch, HBASE-15405-master-v2.patch, 
> HBASE-15405-master-v3.patch, HBASE-15405-master-v4 (1).patch, 
> HBASE-15405-master-v4 (1).patch, HBASE-15405-master-v4.patch, 
> HBASE-15405-master-v4.patch, HBASE-15405-master-v5.patch, 
> HBASE-15405-master-v6.patch, HBASE-15405-master.patch, 
> HBASE-15405-master.patch
>
>
> Corrects wrong default values for a few options in the help message.
> Final stats from multiple clients are intermingled, making them hard to 
> understand. Also, the logged stats aren't very machine readable, which 
> matters for a daily perf testing rig that scrapes logs for results.
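The "synchronize final results logging" idea described above can be sketched as follows (hypothetical class and method names, not the actual PE patch): each client thread builds its whole result block as one string and prints it under a shared lock, so blocks from different threads cannot interleave line by line.

```java
import java.util.List;

public class ResultsLogger {
    private static final Object REPORT_LOCK = new Object();

    /** Formats one client's final stats as a single multi-line block. */
    static String formatFinalStats(String testName, List<String> lines) {
        StringBuilder sb = new StringBuilder();
        for (String line : lines) {
            sb.append(testName).append(' ').append(line).append('\n');
        }
        return sb.toString();
    }

    /** Prints the block atomically with respect to other callers. */
    static void logFinalStats(String testName, List<String> lines) {
        synchronized (REPORT_LOCK) {
            System.out.print(formatFinalStats(testName, lines));
        }
    }

    public static void main(String[] args) {
        logFinalStats("IncrementTest", List.of("Min = 363.0", "99th = 1624.0"));
    }
}
```

Building the block first also makes it easier for a log-scraping rig to parse, since each test's stats arrive as one contiguous run of lines.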
> Example of logs before the change.
> {noformat}
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest latency log 
> (microseconds), on 1048576 measures
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest latency log 
> (microseconds), on 1048576 measures
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest latency log 
> (microseconds), on 1048576 measures
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest latency log 
> (microseconds), on 1048576 measures
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest latency log 
> (microseconds), on 1048576 measures
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest latency log 
> (microseconds), on 1048576 measures
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest latency log 
> (microseconds), on 1048576 measures
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: 0/1048570/1048576, 
> latency mean=953.98, min=359.00, max=324050.00, stdDev=851.82, 95th=1368.00, 
> 99th=1625.00
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: 0/1048570/1048576, 
> latency mean=953.92, min=356.00, max=323394.00, stdDev=817.55, 95th=1370.00, 
> 99th=1618.00
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: 0/1048570/1048576, 
> latency mean=953.98, min=367.00, max=322745.00, stdDev=840.43, 95th=1369.00, 
> 99th=1622.00
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest latency log 
> (microseconds), on 1048576 measures
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest latency log 
> (microseconds), on 1048576 measures
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest latency log 
> (microseconds), on 1048576 measures
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest Min  = 
> 375.0
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest Min  = 
> 363.0
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest Avg  = 
> 953.6624126434326
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest Avg  = 
> 953.4124526977539
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest StdDev   = 
> 781.3929776087633
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest StdDev   = 
> 742.8027916717297
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest 50th = 
> 894.0
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest 50th = 
> 894.0
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest 75th = 
> 1070.0
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest 75th = 
> 1071.0
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest 95th = 
> 1369.0
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest 95th = 
> 1369.0
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest 99th = 
> 1623.0
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest 99th = 
> 1624.0
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest Min  = 
> 372.0
> 16/03/05 22:43:06 INFO hbase.PerformanceEvaluation: IncrementTest 99.9th   = 
> 3013

[jira] [Commented] (HBASE-15045) Keep hbase-native-client/if and hbase-protocol in sync.

2016-04-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15241737#comment-15241737
 ] 

Hadoop QA commented on HBASE-15045:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} 
| {color:red} HBASE-15045 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.2.1/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12798793/HBASE-15045-v2.patch |
| JIRA Issue | HBASE-15045 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/1420/console |
| Powered by | Apache Yetus 0.2.1   http://yetus.apache.org |


This message was automatically generated.



> Keep hbase-native-client/if and hbase-protocol in sync.
> ---
>
> Key: HBASE-15045
> URL: https://issues.apache.org/jira/browse/HBASE-15045
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HBASE-15045-v2.patch, HBASE-15045.patch
>
>
> We want to make sure that .protos are in sync with java. So keep it in sync.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-15045) Keep hbase-native-client/if and hbase-protocol in sync.

2016-04-14 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15241728#comment-15241728
 ] 

Elliott Clark commented on HBASE-15045:
---

I added some cleanup and some more documentation.

> Keep hbase-native-client/if and hbase-protocol in sync.
> ---
>
> Key: HBASE-15045
> URL: https://issues.apache.org/jira/browse/HBASE-15045
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HBASE-15045-v2.patch, HBASE-15045.patch
>
>
> We want to make sure that .protos are in sync with java. So keep it in sync.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15045) Keep hbase-native-client/if and hbase-protocol in sync.

2016-04-14 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-15045:
--
Attachment: HBASE-15045-v2.patch

https://reviews.facebook.net/D56763

> Keep hbase-native-client/if and hbase-protocol in sync.
> ---
>
> Key: HBASE-15045
> URL: https://issues.apache.org/jira/browse/HBASE-15045
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HBASE-15045-v2.patch, HBASE-15045.patch
>
>
> We want to make sure that .protos are in sync with java. So keep it in sync.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-15656) Fix unused protobuf warning in Admin.proto

2016-04-14 Thread Elliott Clark (JIRA)
Elliott Clark created HBASE-15656:
-

 Summary: Fix unused protobuf warning in Admin.proto
 Key: HBASE-15656
 URL: https://issues.apache.org/jira/browse/HBASE-15656
 Project: HBase
  Issue Type: Task
Reporter: Elliott Clark
Priority: Minor


{code}
Warning: Unused import: "Admin.proto" imports "Client.proto" which is not used.
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-15648) Reduce number of concurrent region location lookups when MetaCache entry is cleared

2016-04-14 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-15648:

Summary: Reduce number of concurrent region location lookups when MetaCache 
entry is cleared  (was: Reduce possible number of concurrent region location 
lookups when MetaCache entry is cleared)

> Reduce number of concurrent region location lookups when MetaCache entry is 
> cleared
> ---
>
> Key: HBASE-15648
> URL: https://issues.apache.org/jira/browse/HBASE-15648
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Affects Versions: 1.3.0
>Reporter: Mikhail Antonov
>Assignee: Mikhail Antonov
> Fix For: 1.3.0
>
>
> It seems that in HConnectionImplementation#locateRegionInMeta, if a region 
> location is removed from the cache, then with a large number of client 
> threads many of them could get a cache miss and do a meta scan, which looks 
> unnecessary. We could employ a mechanism similar to the IdLock used in 
> HFileReader to fetch a block into the cache, to ensure that if one thread is 
> already looking up the location for region R1, other threads that need its 
> location wait until the first thread finishes its work.
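A minimal sketch of that IdLock-style idea (hypothetical class, not the actual patch): the first thread to miss on a key performs the lookup, while concurrent callers for the same key block on the same future instead of each scanning meta themselves.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class SingleFlightLookup<K, V> {
    // One future per key currently being looked up.
    private final ConcurrentHashMap<K, CompletableFuture<V>> inFlight =
        new ConcurrentHashMap<>();

    public V lookup(K key, Function<K, V> loader) {
        CompletableFuture<V> mine = new CompletableFuture<>();
        CompletableFuture<V> existing = inFlight.putIfAbsent(key, mine);
        if (existing != null) {
            return existing.join(); // another thread is already doing this lookup
        }
        try {
            V value = loader.apply(key);
            mine.complete(value);
            return value;
        } catch (RuntimeException e) {
            mine.completeExceptionally(e); // don't leave waiters hanging
            throw e;
        } finally {
            inFlight.remove(key); // later cache misses trigger a fresh lookup
        }
    }
}
```

With many threads racing on the same region key, only the first runs the loader (the meta scan); the rest join its future and reuse the result.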



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

