[jira] [Commented] (HBASE-18112) Write RequestTooBigException back to client for NettyRpcServer

2017-11-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272293#comment-16272293
 ] 

Hadoop QA commented on HBASE-18112:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
49s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
54s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} hbase-server: The patch generated 0 new + 6 
unchanged - 1 fixed = 6 total (was 7) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
56s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
51m 27s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 90m  
2s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}162m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-18112 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12899948/HBASE-18112-v5.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux e96f96761021 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 79a89beb2e |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10135/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10135/console |
| Powered by | Apache Yetus 0.6.0   http://yetus.apache.org |


This message was automatically generated.



> Write RequestTooBigException back to client for NettyRpcServer
> 
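
For context, a minimal and purely illustrative Netty sketch of the technique named in the title: when a request exceeds the configured size limit, write an error payload back to the client before closing the channel instead of dropping the connection silently. The class name and the plain-string error encoding are assumptions for illustration; the real patch serializes a RequestTooBigException into the HBase RPC response format.

{code:java}
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.util.CharsetUtil;
import io.netty.util.ReferenceCountUtil;

// Illustrative handler: reject over-sized request frames with an error payload.
public class TooBigRequestHandler extends ChannelInboundHandlerAdapter {
  private final int maxRequestSize;

  public TooBigRequestHandler(int maxRequestSize) {
    this.maxRequestSize = maxRequestSize;
  }

  @Override
  public void channelRead(ChannelHandlerContext ctx, Object msg) {
    ByteBuf frame = (ByteBuf) msg;
    if (frame.readableBytes() > maxRequestSize) {
      ReferenceCountUtil.release(frame);
      // Hypothetical error encoding; a real server writes its RPC exception response here.
      ByteBuf error = Unpooled.copiedBuffer(
          "RequestTooBigException: request exceeds " + maxRequestSize + " bytes",
          CharsetUtil.UTF_8);
      ctx.writeAndFlush(error).addListener(ChannelFutureListener.CLOSE);
      return;
    }
    ctx.fireChannelRead(frame); // pass normal-sized requests downstream
  }
}
{code}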

[jira] [Updated] (HBASE-19344) improve asyncWAL by using Independent thread for netty #IO in FanOutOneBlockAsyncDFSOutput

2017-11-29 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-19344:
--
Attachment: HBASE-19344.patch

> improve asyncWAL by using Independent thread for netty #IO in 
> FanOutOneBlockAsyncDFSOutput 
> ---
>
> Key: HBASE-19344
> URL: https://issues.apache.org/jira/browse/HBASE-19344
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0-beta-1
>Reporter: Chance Li
>Assignee: Chance Li
> Fix For: 2.0.0
>
> Attachments: HBASE-19344-branch-ycsb-1.png, 
> HBASE-19344-branch.ycsb.png, HBASE-19344-branch.ycsb.png, 
> HBASE-19344-branch2.patch, HBASE-19344-branch2.patch.2.POC, 
> HBASE-19344.patch, wal-1-test-result.png, wal-8-test-result.png, 
> ycsb_result_apache20_async_wal.pdf
>
>
> The logic now is that the netty I/O thread and the asyncWAL thread are the same 
> one.
> Improvement proposal (sketched below):
> 1. Split them into two independent threads.
> 2. Have all multiWAL instances share the netty I/O thread pool.
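
A minimal sketch of what the proposed split could look like, assuming a shared Netty NioEventLoopGroup for the I/O pool and a single-threaded executor per WAL for the consume loop. The class and method names are illustrative and not taken from the actual FanOutOneBlockAsyncDFSOutput code.

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;

// Illustrative wiring: one netty I/O pool shared by every WAL instance,
// plus an independent single-threaded consumer per WAL.
public class AsyncWalThreading {
  // Shared across all multiWAL instances (proposal item 2).
  private static final EventLoopGroup SHARED_IO_GROUP = new NioEventLoopGroup(4);

  // One consume thread per WAL, independent of the I/O pool (proposal item 1).
  private final ExecutorService walConsumer =
      Executors.newSingleThreadExecutor(r -> new Thread(r, "asyncwal-consumer"));

  public void append(final byte[] entry) {
    walConsumer.execute(() -> {
      // Batch entries here, then hand the flush off to the shared netty I/O pool.
      SHARED_IO_GROUP.next().execute(() -> flushToDataNodes(entry));
    });
  }

  private void flushToDataNodes(byte[] entry) {
    // Placeholder for the fan-out write to the HDFS pipeline.
  }
}
{code}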



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19336) Improve rsgroup to allow assign all tables within a specified namespace by only writing namespace

2017-11-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272286#comment-16272286
 ] 

Hadoop QA commented on HBASE-19336:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
55s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} rubocop {color} | {color:red}  0m 
11s{color} | {color:red} The patch generated 20 new + 45 unchanged - 0 fixed = 
65 total (was 45) {color} |
| {color:red}-1{color} | {color:red} ruby-lint {color} | {color:red}  0m  
4s{color} | {color:red} The patch generated 22 new + 45 unchanged - 0 fixed = 
67 total (was 45) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
9s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m  
8s{color} | {color:green} hbase-shell in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 9s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 58s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19336 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12899959/HBASE-19336-master-V5.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  rubocop  ruby_lint  |
| uname | Linux 8b8dcf1884f1 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 9434d52c19 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
| rubocop | v0.51.0 |
| rubocop | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10137/artifact/patchprocess/diff-patch-rubocop.txt
 |
| ruby-lint | v2.3.1 |
| ruby-lint | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10137/artifact/patchprocess/diff-patch-ruby-lint.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10137/testReport/ |
| modules | C: hbase-shell U: hbase-shell |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10137/console |
| Powered by | Apache Yetus 0.6.0   http://yetus.apache.org |


This message was automatically generated.



> Improve rsgroup to allow assign all tables within a specified namespace by 
> only writing namespace
> -
>
> Key: HBASE-19336
> URL: https://issues.apache.org/jira/browse/HBASE-19336
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup
>Affects Versions: 2.0.0-alpha-4
>Reporter: xinxin fan
>Assignee: xinxin fan
> Attachments: HBASE-19336-master-V2.patch, 
> HBASE-19336-master-V3.patch, HBASE-19336-master-V4.patch, 
> HBASE-19336-master-V4.patch, HBASE-19336-master-V4.patch, 
> HBASE-19336-master-V5.patch, HBASE-19336-master.patch
>
>
> Currently, users can only assign the tables within a namespace from one group to 
> another by writing out all of the table names in the move_tables_rsgroup command. 
> Allowing all tables within a specified namespace to be assigned by writing only 
> the namespace name would be useful.
> Usage as follows:
> {code:java}
> hbase(main):055:0> move_namespaces_rsgroup 'dest_rsgroup',['ns1']
> Took 2.2211 seconds
> {code}
> {code:java}
> 

[jira] [Commented] (HBASE-19385) [1.3] TestReplicator failed 1.3 nightly

2017-11-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272285#comment-16272285
 ] 

Hudson commented on HBASE-19385:


SUCCESS: Integrated in Jenkins build HBase-1.3-IT #302 (See 
[https://builds.apache.org/job/HBase-1.3-IT/302/])
HBASE-19385 [1.3] TestReplicator failed 1.3 nightly (stack: rev 
04f1029c03cca0c3303595fec5d654a304db2c03)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestReplicator.java


> [1.3] TestReplicator failed 1.3 nightly
> ---
>
> Key: HBASE-19385
> URL: https://issues.apache.org/jira/browse/HBASE-19385
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.3.2, 1.4.1
>
> Attachments: HBASE-19385.branch-1.3.001.patch
>
>
> TestReplicator failed the 1.3 nightly. Running it locally, it fails sometimes. 
> The complaint is IllegalMonitorStateException and, indeed, the locking around the 
> latch is unsafe. After fixing this, I can't get it to fail locally anymore.
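
For illustration only (this is not the code from the patch): an IllegalMonitorStateException typically comes from calling notify()/wait() on a monitor the thread does not hold, and coordinating on a CountDownLatch avoids the need for that locking entirely. A small sketch of the two patterns:

{code:java}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class LatchCoordination {
  private final Object lock = new Object();
  private final CountDownLatch done = new CountDownLatch(1);

  // Unsafe: notify() without holding the monitor throws IllegalMonitorStateException.
  public void signalUnsafe() {
    lock.notify();
  }

  // Safe: a CountDownLatch needs no external locking at all.
  public void signalSafe() {
    done.countDown();
  }

  public boolean awaitDone() throws InterruptedException {
    return done.await(30, TimeUnit.SECONDS);
  }
}
{code}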



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19387) HBase-spark snappy.SnappyError on Arm64

2017-11-29 Thread Yuqi Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272275#comment-16272275
 ] 

Yuqi Gu commented on HBASE-19387:
-

Everything is OK when running the hbase-spark unit tests on both x86 and Arm64.

> HBase-spark snappy.SnappyError on Arm64
> ---
>
> Key: HBASE-19387
> URL: https://issues.apache.org/jira/browse/HBASE-19387
> Project: HBase
>  Issue Type: Bug
>  Components: spark, test
>Affects Versions: 3.0.0
>Reporter: Yuqi Gu
>Priority: Minor
> Attachments: HBASE-19387.patch
>
>
> When running the hbase-spark Unit tests on Arm64, the failures are shown as 
> follows:
>  
> {code:java}
> scalatest-maven-plugin:1.0:test (test) @ hbase-spark ---
> Discovery starting.
> Discovery completed in 2 seconds, 837 milliseconds.
> Run starting. Expected test count is: 79
> HBaseDStreamFunctionsSuite:
> Formatting using clusterid: testClusterID
> - bulkput to test HBase client *** FAILED ***
>   java.lang.reflect.InvocationTargetException:
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:72)
>   at 
> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:65)
>   at 
> org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
>   at 
> org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:80)
>   at 
> org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
>   at 
> org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:63)
>   ...
>   Cause: java.lang.IllegalArgumentException: org.xerial.snappy.SnappyError: 
> [FAILED_TO_LOAD_NATIVE_LIBRARY] no native library is found for os.name=Linux 
> and os.arch=aarch64
>   at 
> org.apache.spark.io.SnappyCompressionCodec.<init>(CompressionCodec.scala:156)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:72)
>   at 
> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:65)
>   at 
> org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
>   at 
> org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:80)
>   at 
> org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
>   ...
>   Cause: org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] no 
> native library is found for os.name=Linux and os.arch=aarch64
>   at org.xerial.snappy.SnappyLoader.findNativeLibrary(SnappyLoader.java:331)
>   at org.xerial.snappy.SnappyLoader.loadNativeLibrary(SnappyLoader.java:171)
>   at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:152)
>   at org.xerial.snappy.Snappy.<clinit>(Snappy.java:46)
>   at 
> org.apache.spark.io.SnappyCompressionCodec.<init>(CompressionCodec.scala:154)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:72)
>   ...
> Formatting using clusterid: testClusterID
> PartitionFilterSuite:
> *** RUN ABORTED ***
>   java.lang.reflect.InvocationTargetException:
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:72)
>   at 
> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:65)
>   at 
> 

[jira] [Updated] (HBASE-19387) HBase-spark snappy.SnappyError on Arm64

2017-11-29 Thread Yuqi Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuqi Gu updated HBASE-19387:

Status: Patch Available  (was: Open)

> HBase-spark snappy.SnappyError on Arm64
> ---
>
> Key: HBASE-19387
> URL: https://issues.apache.org/jira/browse/HBASE-19387
> Project: HBase
>  Issue Type: Bug
>  Components: spark, test
>Affects Versions: 3.0.0
>Reporter: Yuqi Gu
>Priority: Minor
> Attachments: HBASE-19387.patch
>
>
> When running the hbase-spark Unit tests on Arm64, the failures are shown as 
> follows:
>  
> {code:java}
> scalatest-maven-plugin:1.0:test (test) @ hbase-spark ---
> Discovery starting.
> Discovery completed in 2 seconds, 837 milliseconds.
> Run starting. Expected test count is: 79
> HBaseDStreamFunctionsSuite:
> Formatting using clusterid: testClusterID
> - bulkput to test HBase client *** FAILED ***
>   java.lang.reflect.InvocationTargetException:
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:72)
>   at 
> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:65)
>   at 
> org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
>   at 
> org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:80)
>   at 
> org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
>   at 
> org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:63)
>   ...
>   Cause: java.lang.IllegalArgumentException: org.xerial.snappy.SnappyError: 
> [FAILED_TO_LOAD_NATIVE_LIBRARY] no native library is found for os.name=Linux 
> and os.arch=aarch64
>   at 
> org.apache.spark.io.SnappyCompressionCodec.<init>(CompressionCodec.scala:156)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:72)
>   at 
> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:65)
>   at 
> org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
>   at 
> org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:80)
>   at 
> org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
>   ...
>   Cause: org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] no 
> native library is found for os.name=Linux and os.arch=aarch64
>   at org.xerial.snappy.SnappyLoader.findNativeLibrary(SnappyLoader.java:331)
>   at org.xerial.snappy.SnappyLoader.loadNativeLibrary(SnappyLoader.java:171)
>   at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:152)
>   at org.xerial.snappy.Snappy.<clinit>(Snappy.java:46)
>   at 
> org.apache.spark.io.SnappyCompressionCodec.<init>(CompressionCodec.scala:154)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:72)
>   ...
> Formatting using clusterid: testClusterID
> PartitionFilterSuite:
> *** RUN ABORTED ***
>   java.lang.reflect.InvocationTargetException:
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:72)
>   at 
> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:65)
>   at 
> org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
>   at 
> 

[jira] [Commented] (HBASE-19387) HBase-spark snappy.SnappyError on Arm64

2017-11-29 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272274#comment-16272274
 ] 

Ted Yu commented on HBASE-19387:


Looks like Spark uses snappy-java 1.1.2.6.

If 1.1.2.6 doesn't solve the above problem, what is the compatibility between 
1.1.2.6 and 1.1.4?
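
As a side note, a small stand-alone probe (a sketch, not part of any patch here) can show which snappy-java build is on the classpath and whether its native library loads on the current platform; that load is what fails on aarch64 in the report above:

{code:java}
import java.nio.charset.StandardCharsets;

import org.xerial.snappy.Snappy;

public class SnappyProbe {
  public static void main(String[] args) {
    System.out.println("os.name=" + System.getProperty("os.name")
        + " os.arch=" + System.getProperty("os.arch"));
    try {
      // Touching Snappy triggers the native library load (Snappy.<clinit>).
      byte[] compressed = Snappy.compress("probe".getBytes(StandardCharsets.UTF_8));
      System.out.println("snappy native library OK, compressed " + compressed.length + " bytes");
    } catch (Throwable t) {
      // On platforms without a bundled native library, FAILED_TO_LOAD_NATIVE_LIBRARY surfaces here.
      System.out.println("snappy native load failed: " + t);
    }
  }
}
{code}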

> HBase-spark snappy.SnappyError on Arm64
> ---
>
> Key: HBASE-19387
> URL: https://issues.apache.org/jira/browse/HBASE-19387
> Project: HBase
>  Issue Type: Bug
>  Components: spark, test
>Affects Versions: 3.0.0
>Reporter: Yuqi Gu
>Priority: Minor
> Attachments: HBASE-19387.patch
>
>
> When running the hbase-spark Unit tests on Arm64, the failures are shown as 
> follows:
>  
> {code:java}
> scalatest-maven-plugin:1.0:test (test) @ hbase-spark ---
> Discovery starting.
> Discovery completed in 2 seconds, 837 milliseconds.
> Run starting. Expected test count is: 79
> HBaseDStreamFunctionsSuite:
> Formatting using clusterid: testClusterID
> - bulkput to test HBase client *** FAILED ***
>   java.lang.reflect.InvocationTargetException:
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:72)
>   at 
> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:65)
>   at 
> org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
>   at 
> org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:80)
>   at 
> org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
>   at 
> org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:63)
>   ...
>   Cause: java.lang.IllegalArgumentException: org.xerial.snappy.SnappyError: 
> [FAILED_TO_LOAD_NATIVE_LIBRARY] no native library is found for os.name=Linux 
> and os.arch=aarch64
>   at 
> org.apache.spark.io.SnappyCompressionCodec.<init>(CompressionCodec.scala:156)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:72)
>   at 
> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:65)
>   at 
> org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
>   at 
> org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:80)
>   at 
> org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
>   ...
>   Cause: org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] no 
> native library is found for os.name=Linux and os.arch=aarch64
>   at org.xerial.snappy.SnappyLoader.findNativeLibrary(SnappyLoader.java:331)
>   at org.xerial.snappy.SnappyLoader.loadNativeLibrary(SnappyLoader.java:171)
>   at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:152)
>   at org.xerial.snappy.Snappy.<clinit>(Snappy.java:46)
>   at 
> org.apache.spark.io.SnappyCompressionCodec.<init>(CompressionCodec.scala:154)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:72)
>   ...
> Formatting using clusterid: testClusterID
> PartitionFilterSuite:
> *** RUN ABORTED ***
>   java.lang.reflect.InvocationTargetException:
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:72)
>   at 
> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:65)
>   at 
> 

[jira] [Updated] (HBASE-19387) HBase-spark snappy.SnappyError on Arm64

2017-11-29 Thread Yuqi Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuqi Gu updated HBASE-19387:

Attachment: HBASE-19387.patch

> HBase-spark snappy.SnappyError on Arm64
> ---
>
> Key: HBASE-19387
> URL: https://issues.apache.org/jira/browse/HBASE-19387
> Project: HBase
>  Issue Type: Bug
>  Components: spark, test
>Affects Versions: 3.0.0
>Reporter: Yuqi Gu
>Priority: Minor
> Attachments: HBASE-19387.patch
>
>
> When running the hbase-spark Unit tests on Arm64, the failures are shown as 
> follows:
>  
> {code:java}
> scalatest-maven-plugin:1.0:test (test) @ hbase-spark ---
> Discovery starting.
> Discovery completed in 2 seconds, 837 milliseconds.
> Run starting. Expected test count is: 79
> HBaseDStreamFunctionsSuite:
> Formatting using clusterid: testClusterID
> - bulkput to test HBase client *** FAILED ***
>   java.lang.reflect.InvocationTargetException:
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:72)
>   at 
> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:65)
>   at 
> org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
>   at 
> org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:80)
>   at 
> org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
>   at 
> org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:63)
>   ...
>   Cause: java.lang.IllegalArgumentException: org.xerial.snappy.SnappyError: 
> [FAILED_TO_LOAD_NATIVE_LIBRARY] no native library is found for os.name=Linux 
> and os.arch=aarch64
>   at 
> org.apache.spark.io.SnappyCompressionCodec.<init>(CompressionCodec.scala:156)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:72)
>   at 
> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:65)
>   at 
> org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
>   at 
> org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:80)
>   at 
> org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
>   ...
>   Cause: org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] no 
> native library is found for os.name=Linux and os.arch=aarch64
>   at org.xerial.snappy.SnappyLoader.findNativeLibrary(SnappyLoader.java:331)
>   at org.xerial.snappy.SnappyLoader.loadNativeLibrary(SnappyLoader.java:171)
>   at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:152)
>   at org.xerial.snappy.Snappy.<clinit>(Snappy.java:46)
>   at 
> org.apache.spark.io.SnappyCompressionCodec.<init>(CompressionCodec.scala:154)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:72)
>   ...
> Formatting using clusterid: testClusterID
> PartitionFilterSuite:
> *** RUN ABORTED ***
>   java.lang.reflect.InvocationTargetException:
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:72)
>   at 
> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:65)
>   at 
> org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
>   at 
> 

[jira] [Commented] (HBASE-19383) [1.2] java.lang.AssertionError: expected:<2> but was:<1> at org.apache.hadoop.hbase.TestChoreService.testTriggerNowFailsWhenNotScheduled(TestChoreService.java:707)

2017-11-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272273#comment-16272273
 ] 

Hudson commented on HBASE-19383:


SUCCESS: Integrated in Jenkins build HBase-1.3-IT #301 (See 
[https://builds.apache.org/job/HBase-1.3-IT/301/])
HBASE-19383 [1.2] java.lang.AssertionError: expected:<2> but was:<1> at (stack: 
rev 6891e81955c322cc680c897bd296f1bbe01f668c)
* (edit) 
hbase-common/src/test/java/org/apache/hadoop/hbase/TestChoreService.java


> [1.2] java.lang.AssertionError: expected:<2> but was:<1>  at 
> org.apache.hadoop.hbase.TestChoreService.testTriggerNowFailsWhenNotScheduled(TestChoreService.java:707)
> 
>
> Key: HBASE-19383
> URL: https://issues.apache.org/jira/browse/HBASE-19383
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: stack
>Assignee: stack
> Fix For: 1.3.2, 1.4.1, 1.2.7, 2.0.0-beta-2
>
> Attachments: 19383.txt
>
>
> This is a timer-based test that asserts hard numbers about how many times 
> something is called. I'm just going to remove it. It killed my 1.2 nightly 
> test run.
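
For illustration (this is not the removed test): asserting an exact invocation count after a fixed sleep is inherently racy on a loaded build machine, whereas waiting on a latch with a timeout only asserts that the task ran at least the expected number of times. A hypothetical sketch of the two styles:

{code:java}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class TimerAssertionStyles {
  public static void main(String[] args) throws Exception {
    ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
    AtomicInteger calls = new AtomicInteger();
    CountDownLatch twoRuns = new CountDownLatch(2);
    timer.scheduleAtFixedRate(
        () -> { calls.incrementAndGet(); twoRuns.countDown(); },
        0, 100, TimeUnit.MILLISECONDS);

    // Flaky style: assumes the scheduler kept perfect time during the sleep.
    Thread.sleep(250);
    System.out.println("exact-count check would expect 3, saw " + calls.get());

    // Robust style: wait for the condition instead of assuming elapsed time.
    boolean ranTwice = twoRuns.await(5, TimeUnit.SECONDS);
    System.out.println("ran at least twice: " + ranTwice);
    timer.shutdownNow();
  }
}
{code}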



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19387) HBase-spark snappy.SnappyError on Arm64

2017-11-29 Thread Yuqi Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272272#comment-16272272
 ] 

Yuqi Gu commented on HBASE-19387:
-

GH PR: https://github.com/apache/hbase/pull/68

> HBase-spark snappy.SnappyError on Arm64
> ---
>
> Key: HBASE-19387
> URL: https://issues.apache.org/jira/browse/HBASE-19387
> Project: HBase
>  Issue Type: Bug
>  Components: spark, test
>Affects Versions: 3.0.0
>Reporter: Yuqi Gu
>Priority: Minor
>
> When running the hbase-spark Unit tests on Arm64, the failures are shown as 
> follows:
>  
> {code:java}
> scalatest-maven-plugin:1.0:test (test) @ hbase-spark ---
> Discovery starting.
> Discovery completed in 2 seconds, 837 milliseconds.
> Run starting. Expected test count is: 79
> HBaseDStreamFunctionsSuite:
> Formatting using clusterid: testClusterID
> - bulkput to test HBase client *** FAILED ***
>   java.lang.reflect.InvocationTargetException:
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:72)
>   at 
> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:65)
>   at 
> org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
>   at 
> org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:80)
>   at 
> org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
>   at 
> org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:63)
>   ...
>   Cause: java.lang.IllegalArgumentException: org.xerial.snappy.SnappyError: 
> [FAILED_TO_LOAD_NATIVE_LIBRARY] no native library is found for os.name=Linux 
> and os.arch=aarch64
>   at 
> org.apache.spark.io.SnappyCompressionCodec.<init>(CompressionCodec.scala:156)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:72)
>   at 
> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:65)
>   at 
> org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
>   at 
> org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:80)
>   at 
> org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
>   ...
>   Cause: org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] no 
> native library is found for os.name=Linux and os.arch=aarch64
>   at org.xerial.snappy.SnappyLoader.findNativeLibrary(SnappyLoader.java:331)
>   at org.xerial.snappy.SnappyLoader.loadNativeLibrary(SnappyLoader.java:171)
>   at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:152)
>   at org.xerial.snappy.Snappy.<clinit>(Snappy.java:46)
>   at 
> org.apache.spark.io.SnappyCompressionCodec.<init>(CompressionCodec.scala:154)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:72)
>   ...
> Formatting using clusterid: testClusterID
> PartitionFilterSuite:
> *** RUN ABORTED ***
>   java.lang.reflect.InvocationTargetException:
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:72)
>   at 
> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:65)
>   at 
> org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
>   at 
> 

[jira] [Updated] (HBASE-19387) HBase-spark snappy.SnappyError on Arm64

2017-11-29 Thread Yuqi Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuqi Gu updated HBASE-19387:

Priority: Minor  (was: Major)

> HBase-spark snappy.SnappyError on Arm64
> ---
>
> Key: HBASE-19387
> URL: https://issues.apache.org/jira/browse/HBASE-19387
> Project: HBase
>  Issue Type: Bug
>  Components: spark, test
>Affects Versions: 3.0.0
>Reporter: Yuqi Gu
>Priority: Minor
>
> When running the hbase-spark Unit tests on Arm64, the failures are shown as 
> follows:
>  
> {code:java}
> scalatest-maven-plugin:1.0:test (test) @ hbase-spark ---
> Discovery starting.
> Discovery completed in 2 seconds, 837 milliseconds.
> Run starting. Expected test count is: 79
> HBaseDStreamFunctionsSuite:
> Formatting using clusterid: testClusterID
> - bulkput to test HBase client *** FAILED ***
>   java.lang.reflect.InvocationTargetException:
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:72)
>   at 
> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:65)
>   at 
> org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
>   at 
> org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:80)
>   at 
> org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
>   at 
> org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:63)
>   ...
>   Cause: java.lang.IllegalArgumentException: org.xerial.snappy.SnappyError: 
> [FAILED_TO_LOAD_NATIVE_LIBRARY] no native library is found for os.name=Linux 
> and os.arch=aarch64
>   at 
> org.apache.spark.io.SnappyCompressionCodec.<init>(CompressionCodec.scala:156)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:72)
>   at 
> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:65)
>   at 
> org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
>   at 
> org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:80)
>   at 
> org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
>   ...
>   Cause: org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] no 
> native library is found for os.name=Linux and os.arch=aarch64
>   at org.xerial.snappy.SnappyLoader.findNativeLibrary(SnappyLoader.java:331)
>   at org.xerial.snappy.SnappyLoader.loadNativeLibrary(SnappyLoader.java:171)
>   at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:152)
>   at org.xerial.snappy.Snappy.<clinit>(Snappy.java:46)
>   at 
> org.apache.spark.io.SnappyCompressionCodec.<init>(CompressionCodec.scala:154)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:72)
>   ...
> Formatting using clusterid: testClusterID
> PartitionFilterSuite:
> *** RUN ABORTED ***
>   java.lang.reflect.InvocationTargetException:
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:72)
>   at 
> org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:65)
>   at 
> org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
>   at 
> org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:80)
>   

[jira] [Created] (HBASE-19387) HBase-spark snappy.SnappyError on Arm64

2017-11-29 Thread Yuqi Gu (JIRA)
Yuqi Gu created HBASE-19387:
---

 Summary: HBase-spark snappy.SnappyError on Arm64
 Key: HBASE-19387
 URL: https://issues.apache.org/jira/browse/HBASE-19387
 Project: HBase
  Issue Type: Bug
  Components: spark, test
Affects Versions: 3.0.0
Reporter: Yuqi Gu


When running the hbase-spark Unit tests on Arm64, the failures are shown as 
follows:
 
{code:java}
scalatest-maven-plugin:1.0:test (test) @ hbase-spark ---
Discovery starting.
Discovery completed in 2 seconds, 837 milliseconds.
Run starting. Expected test count is: 79
HBaseDStreamFunctionsSuite:
Formatting using clusterid: testClusterID
- bulkput to test HBase client *** FAILED ***
  java.lang.reflect.InvocationTargetException:
  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
  at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
  at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
  at 
org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:72)
  at 
org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:65)
  at 
org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
  at 
org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:80)
  at 
org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
  at 
org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:63)
  ...
  Cause: java.lang.IllegalArgumentException: org.xerial.snappy.SnappyError: 
[FAILED_TO_LOAD_NATIVE_LIBRARY] no native library is found for os.name=Linux 
and os.arch=aarch64
  at 
org.apache.spark.io.SnappyCompressionCodec.<init>(CompressionCodec.scala:156)
  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
  at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
  at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
  at 
org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:72)
  at 
org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:65)
  at 
org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
  at 
org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:80)
  at 
org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
  ...
  Cause: org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] no 
native library is found for os.name=Linux and os.arch=aarch64
  at org.xerial.snappy.SnappyLoader.findNativeLibrary(SnappyLoader.java:331)
  at org.xerial.snappy.SnappyLoader.loadNativeLibrary(SnappyLoader.java:171)
  at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:152)
  at org.xerial.snappy.Snappy.<clinit>(Snappy.java:46)
  at 
org.apache.spark.io.SnappyCompressionCodec.<init>(CompressionCodec.scala:154)
  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
  at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
  at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
  at 
org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:72)
  ...
Formatting using clusterid: testClusterID
PartitionFilterSuite:
*** RUN ABORTED ***
  java.lang.reflect.InvocationTargetException:
  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
  at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
  at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
  at 
org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:72)
  at 
org.apache.spark.io.CompressionCodec$.createCodec(CompressionCodec.scala:65)
  at 
org.apache.spark.broadcast.TorrentBroadcast.org$apache$spark$broadcast$TorrentBroadcast$$setConf(TorrentBroadcast.scala:73)
  at 
org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:80)
  at 
org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
  at 
org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:63)
  ...
  Cause: java.lang.IllegalArgumentException: java.lang.NoClassDefFoundError: 
Could not initialize class org.xerial.snappy.Snappy
  at 

[jira] [Commented] (HBASE-19382) Update report-flakies.py script to handle yetus builds

2017-11-29 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272266#comment-16272266
 ] 

stack commented on HBASE-19382:
---

Ok. Thanks boss.

> Update report-flakies.py script to handle yetus builds
> --
>
> Key: HBASE-19382
> URL: https://issues.apache.org/jira/browse/HBASE-19382
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19382.master.001.patch
>
>
> With the move to the new nightly build, which uses yetus 
> (https://builds.apache.org/job/HBase%20Nightly/job/master/), the current 
> report-flakies.py is not able to build the test report since the maven output is 
> no longer in consoleText. Update the script to accept both traditional builds 
> (maven output in consoleText, for the flakies runner job) and yetus builds 
> (maven output in artifacts).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19336) Improve rsgroup to allow assign all tables within a specified namespace by only writing namespace

2017-11-29 Thread xinxin fan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272264#comment-16272264
 ] 

xinxin fan commented on HBASE-19336:


Thanks for the review [~zghaobac]; fixed the rubocop results in patch V5.

> Improve rsgroup to allow assign all tables within a specified namespace by 
> only writing namespace
> -
>
> Key: HBASE-19336
> URL: https://issues.apache.org/jira/browse/HBASE-19336
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup
>Affects Versions: 2.0.0-alpha-4
>Reporter: xinxin fan
>Assignee: xinxin fan
> Attachments: HBASE-19336-master-V2.patch, 
> HBASE-19336-master-V3.patch, HBASE-19336-master-V4.patch, 
> HBASE-19336-master-V4.patch, HBASE-19336-master-V4.patch, 
> HBASE-19336-master-V5.patch, HBASE-19336-master.patch
>
>
> Currently, users can only assign the tables within a namespace from one group to 
> another by writing out all of the table names in the move_tables_rsgroup command. 
> Allowing all tables within a specified namespace to be assigned by writing only 
> the namespace name would be useful.
> Usage as follows:
> {code:java}
> hbase(main):055:0> move_namespaces_rsgroup 'dest_rsgroup',['ns1']
> Took 2.2211 seconds
> {code}
> {code:java}
> hbase(main):051:0* move_servers_namespaces_rsgroup 
> 'dest_rsgroup',['hbase39.lt.163.org:60020'],['ns1','ns2']
> Took 15.3710 seconds 
> {code}
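
A hedged sketch of how the namespace form can be layered on top of the existing per-table move, assuming the Admin.listTableNamesByNamespace() and RSGroupAdmin.moveTables() calls from the public HBase and rsgroup interfaces; the patch's actual shell command and server-side wiring are not reproduced here.

{code:java}
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.rsgroup.RSGroupAdmin;

// Sketch: expand each namespace into its tables and reuse the move-by-table API.
public class MoveNamespacesToRSGroup {
  public static void moveNamespaces(Admin admin, RSGroupAdmin rsGroupAdmin,
      String targetGroup, String... namespaces) throws IOException {
    Set<TableName> tables = new HashSet<>();
    for (String ns : namespaces) {
      for (TableName tn : admin.listTableNamesByNamespace(ns)) {
        tables.add(tn);
      }
    }
    // One bulk call moves everything gathered from the listed namespaces.
    rsGroupAdmin.moveTables(tables, targetGroup);
  }
}
{code}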



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19385) [1.3] TestReplicator failed 1.3 nightly

2017-11-29 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-19385:
--
Fix Version/s: 1.4.1
   2.0.0

> [1.3] TestReplicator failed 1.3 nightly
> ---
>
> Key: HBASE-19385
> URL: https://issues.apache.org/jira/browse/HBASE-19385
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.3.2, 1.4.1
>
> Attachments: HBASE-19385.branch-1.3.001.patch
>
>
> TestReplicator failed the 1.3 nightly. Running it locally, it fails sometimes. 
> The complaint is IllegalMonitorStateException and, indeed, the locking around the 
> latch is unsafe. After fixing this, I can't get it to fail locally anymore.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HBASE-19382) Update report-flakies.py script to handle yetus builds

2017-11-29 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272254#comment-16272254
 ] 

Appy edited comment on HBASE-19382 at 11/30/17 7:16 AM:


Nope.
Nightly jobs don't use this script. This job uses it: 
https://builds.apache.org/job/HBase-Find-Flaky-Tests/.
So there are two things:
1) Its use is independent of branch, since it just uses Jenkins jobs' output 
to generate the report. We can point it to any job: 1.2, 1.3, master, etc.
2) Right now, we have a flaky list only for master, so that job goes to 
https://builds.apache.org/job/HBase Nightly/job/master/ (--urls param) to 
collect the test results for master. Setting it up for another branch will require 
setting up another set of these two jobs: 
[HBase-Find-Flaky-Tests|https://builds.apache.org/job/HBase-Find-Flaky-Tests] 
and [HBASE-Flaky-Tests|https://builds.apache.org/job/HBASE-Flaky-Tests/].





was (Author: appy):
Nope.
Nightly jobs don't use this script. This job uses it: 
https://builds.apache.org/job/HBase-Find-Flaky-Tests/.
Right now, we have a flaky list only for master, so that job goes to 
https://builds.apache.org/job/HBase Nightly/job/master/ (--urls param) to 
collect the test results for master.




> Update report-flakies.py script to handle yetus builds
> --
>
> Key: HBASE-19382
> URL: https://issues.apache.org/jira/browse/HBASE-19382
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19382.master.001.patch
>
>
> With the move to the new nightly build, which uses yetus 
> (https://builds.apache.org/job/HBase%20Nightly/job/master/), the current 
> report-flakies.py is not able to build the test report since the maven output is 
> no longer in consoleText. Update the script to accept both traditional builds 
> (maven output in consoleText, for the flakies runner job) and yetus builds 
> (maven output in artifacts).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19385) [1.3] TestReplicator failed 1.3 nightly

2017-11-29 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272259#comment-16272259
 ] 

stack commented on HBASE-19385:
---

Pushed to branch-1.3, branch-1, 2, and master. Leaving open to push to 1.4 when 
ready.

> [1.3] TestReplicator failed 1.3 nightly
> ---
>
> Key: HBASE-19385
> URL: https://issues.apache.org/jira/browse/HBASE-19385
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: stack
>Assignee: stack
> Fix For: 1.3.2
>
> Attachments: HBASE-19385.branch-1.3.001.patch
>
>
> TestReplicator failed the 1.3 nightly. Running it locally, it fails sometimes. 
> The complaint is IllegalMonitorStateException and, indeed, the locking around the 
> latch is unsafe. After fixing this, I can't get it to fail locally anymore.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19336) Improve rsgroup to allow assign all tables within a specified namespace by only writing namespace

2017-11-29 Thread xinxin fan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xinxin fan updated HBASE-19336:
---
Attachment: HBASE-19336-master-V5.patch

> Improve rsgroup to allow assign all tables within a specified namespace by 
> only writing namespace
> -
>
> Key: HBASE-19336
> URL: https://issues.apache.org/jira/browse/HBASE-19336
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup
>Affects Versions: 2.0.0-alpha-4
>Reporter: xinxin fan
>Assignee: xinxin fan
> Attachments: HBASE-19336-master-V2.patch, 
> HBASE-19336-master-V3.patch, HBASE-19336-master-V4.patch, 
> HBASE-19336-master-V4.patch, HBASE-19336-master-V4.patch, 
> HBASE-19336-master-V5.patch, HBASE-19336-master.patch
>
>
> Currently, users can only assign the tables within a namespace from one group to 
> another by writing out all of the table names in the move_tables_rsgroup command. 
> Allowing all tables within a specified namespace to be assigned by writing only 
> the namespace name would be useful.
> Usage as follows:
> {code:java}
> hbase(main):055:0> move_namespaces_rsgroup 'dest_rsgroup',['ns1']
> Took 2.2211 seconds
> {code}
> {code:java}
> hbase(main):051:0* move_servers_namespaces_rsgroup 
> 'dest_rsgroup',['hbase39.lt.163.org:60020'],['ns1','ns2']
> Took 15.3710 seconds 
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19386) HBase UnsafeAvailChecker returns false on Arm64

2017-11-29 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272258#comment-16272258
 ] 

Ted Yu commented on HBASE-19386:


lgtm

> HBase UnsafeAvailChecker returns false on Arm64
> ---
>
> Key: HBASE-19386
> URL: https://issues.apache.org/jira/browse/HBASE-19386
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Yuqi Gu
>Priority: Minor
> Attachments: HBASE-19386.patch
>
>
> Arm64v8 supports unaligned access.
> But UnsafeAvailChecker returns false due to a JDK bug.
> The false return from UnsafeAvailChecker also causes failures in the HBase unit 
> tests (FuzzyRowFilter, TestFuzzyRowFilterEndToEnd, 
> TestFuzzyRowAndColumnRangeFilter). 
> Enable Arm64 unaligned support by providing a hard-coded workaround for the 
> JDK bug.
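
For illustration, a minimal sketch of the kind of hard-coded workaround described
above, assuming a hypothetical checker class (the real
org.apache.hadoop.hbase.util.UnsafeAvailChecker internals differ): detect aarch64
from the os.arch system property and report unaligned access as available without
consulting the buggy JDK path.

{code:java}
// Hypothetical sketch only; not the real UnsafeAvailChecker implementation.
public final class UnalignedCheckSketch {

  /** Returns true when unaligned access can be assumed on this platform. */
  public static boolean unaligned() {
    String arch = System.getProperty("os.arch", "");
    if ("aarch64".equals(arch)) {
      // Arm64v8 supports unaligned access, but the JDK reports it wrongly,
      // so hard-code the answer instead of asking the JDK.
      return true;
    }
    return detectViaJdk();
  }

  private static boolean detectViaJdk() {
    // Placeholder for the reflective JDK probe a real checker might use.
    return false;
  }

  public static void main(String[] args) {
    System.out.println("unaligned = " + unaligned());
  }
}
{code}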



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19386) HBase UnsafeAvailChecker returns false on Arm64

2017-11-29 Thread Yuqi Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuqi Gu updated HBASE-19386:

Attachment: HBASE-19386.patch

> HBase UnsafeAvailChecker returns false on Arm64
> ---
>
> Key: HBASE-19386
> URL: https://issues.apache.org/jira/browse/HBASE-19386
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Yuqi Gu
>Priority: Minor
> Attachments: HBASE-19386.patch
>
>
> Arm64v8 supports unaligned access.
> But UnsafeAvailChecker returns false due to a JDK bug.
> The false return from UnsafeAvailChecker also causes failures in the HBase unit 
> tests (FuzzyRowFilter, TestFuzzyRowFilterEndToEnd, 
> TestFuzzyRowAndColumnRangeFilter). 
> Enable Arm64 unaligned support by providing a hard-coded workaround for the 
> JDK bug.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19382) Update report-flakies.py script to handle yetus builds

2017-11-29 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272254#comment-16272254
 ] 

Appy commented on HBASE-19382:
--

Nope.
Nightly jobs don't use this script. This job uses it - 
https://builds.apache.org/job/HBase-Find-Flaky-Tests/.
Right now, we have a flaky list only for master, so that job goes to 
https://builds.apache.org/job/HBase%20Nightly/job/master/ (--urls param) to 
collect test results for master.




> Update report-flakies.py script to handle yetus builds
> --
>
> Key: HBASE-19382
> URL: https://issues.apache.org/jira/browse/HBASE-19382
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19382.master.001.patch
>
>
> With the move to the new nightly build, which uses Yetus 
> (https://builds.apache.org/job/HBase%20Nightly/job/master/), the current 
> report-flakies.py cannot build the test report since the maven output is no 
> longer in consoleText. Update the script to accept both traditional builds 
> (maven output in consoleText, for the flakies runner job) and yetus builds 
> (maven output in artifacts).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19385) [1.3] TestReplicator failed 1.3 nightly

2017-11-29 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-19385:
--
Attachment: HBASE-19385.branch-1.3.001.patch

> [1.3] TestReplicator failed 1.3 nightly
> ---
>
> Key: HBASE-19385
> URL: https://issues.apache.org/jira/browse/HBASE-19385
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: stack
>Assignee: stack
> Fix For: 1.3.2
>
> Attachments: HBASE-19385.branch-1.3.001.patch
>
>
> TestReplicator failed the 1.3 nightly. Running it locally, it fails sometimes. 
> The complaint is an IllegalMonitorStateException and, indeed, the locking around 
> the latch is unsafe. After fixing this, I can't get it to fail locally anymore.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19385) [1.3] TestReplicator failed 1.3 nightly

2017-11-29 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272250#comment-16272250
 ] 

stack commented on HBASE-19385:
---

It failed again. Harder to repro, but it failed again. Making the counter atomic seems 
to have helped. We were missing a replication that had actually been sent. 
Let me commit what I have, since a bunch of local runs now don't fail anymore.
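
For illustration only (not the actual TestReplicator code), a minimal sketch of the
kind of change described: count replicated entries with an AtomicInteger and wait on
the count by polling, instead of wait()/notify() around a latch. Class and method
names here are hypothetical.

{code:java}
// Illustrative sketch; not the real TestReplicator.
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ReplicationCountSketch {
  private final AtomicInteger entriesReplicated = new AtomicInteger();

  /** Called from the replication callback threads. */
  public void onEntriesReplicated(int count) {
    entriesReplicated.addAndGet(count);
  }

  /** Polls the atomic counter; no lock is held while waiting. */
  public boolean awaitReplication(int expected, long timeoutMs) throws InterruptedException {
    long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMs);
    while (entriesReplicated.get() < expected) {
      if (System.nanoTime() > deadline) {
        return false;
      }
      Thread.sleep(100);
    }
    return true;
  }
}
{code}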

> [1.3] TestReplicator failed 1.3 nightly
> ---
>
> Key: HBASE-19385
> URL: https://issues.apache.org/jira/browse/HBASE-19385
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: stack
>Assignee: stack
> Fix For: 1.3.2
>
>
> TestReplicator failed the 1.3 nightly. Running it locally, it fails sometimes. 
> The complaint is an IllegalMonitorStateException and, indeed, the locking around 
> the latch is unsafe. After fixing this, I can't get it to fail locally anymore.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19386) HBase UnsafeAvailChecker returns false on Arm64

2017-11-29 Thread Yuqi Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuqi Gu updated HBASE-19386:

Status: Patch Available  (was: Open)

> HBase UnsafeAvailChecker returns false on Arm64
> ---
>
> Key: HBASE-19386
> URL: https://issues.apache.org/jira/browse/HBASE-19386
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Yuqi Gu
>Priority: Minor
>
> Arm64v8 supports unaligned access.
> But UnsafeAvailChecker returns false due to a JDK bug.
> The false return from UnsafeAvailChecker also causes failures in the HBase unit 
> tests (FuzzyRowFilter, TestFuzzyRowFilterEndToEnd, 
> TestFuzzyRowAndColumnRangeFilter). 
> Enable Arm64 unaligned support by providing a hard-coded workaround for the 
> JDK bug.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19386) HBase UnsafeAvailChecker returns false on Arm64

2017-11-29 Thread Yuqi Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272247#comment-16272247
 ] 

Yuqi Gu commented on HBASE-19386:
-

GH PR: https://github.com/apache/hbase/pull/67

> HBase UnsafeAvailChecker returns false on Arm64
> ---
>
> Key: HBASE-19386
> URL: https://issues.apache.org/jira/browse/HBASE-19386
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Yuqi Gu
>Priority: Minor
>
> Arm64v8 supports unaligned access.
> But UnsafeAvailChecker returns false due to a JDK bug.
> The false return from UnsafeAvailChecker also causes failures in the HBase unit 
> tests (FuzzyRowFilter, TestFuzzyRowFilterEndToEnd, 
> TestFuzzyRowAndColumnRangeFilter). 
> Enable Arm64 unaligned support by providing a hard-coded workaround for the 
> JDK bug.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19367) Refactoring in RegionStates, and RSProcedureDispatcher

2017-11-29 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-19367:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Refactoring in RegionStates, and RSProcedureDispatcher
> --
>
> Key: HBASE-19367
> URL: https://issues.apache.org/jira/browse/HBASE-19367
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19367.master.001.patch, 
> HBASE-19367.master.002.patch, HBASE-19367.master.003.patch, 
> HBASE-19367.master.004.patch
>
>
> Working on a bug fix, I was in these parts for the first time, trying to understand 
> the new AM and make sense of things. Did a few improvements along the way.
> - Adding javadoc comments
> - Bug: ServerStateNode#regions is a HashSet but there's no synchronization to 
> prevent concurrent addRegion/removeRegion. Let's use a concurrent set instead 
> (see the sketch after this list).
> - Use getRegionsInTransitionCount() directly instead of 
> getRegionsInTransition().size() because the latter copies everything into a 
> new array - a waste when only the size is needed.
> - There's mixed use of getRegionNode and getRegionStateNode for the same return 
> type - RegionStateNode. Changing everything to getRegionStateNode. Similarly 
> rename the other *RegionNode() fns to *RegionStateNode().
> - RegionStateNode#transitionState()'s return value is useless since it always 
> returns its first param.
> - Other minor improvements
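
For illustration, a minimal sketch of the concurrent-set swap proposed above. The
field and type names below are placeholders, not the real assignment-manager
classes; the sketch only shows the pattern.

{code:java}
// Illustrative sketch; names are placeholders, not HBase's real AM classes.
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class ServerStateNodeSketch {
  static final class RegionStateNodeStub { }

  // Before: new HashSet<>() with no synchronization around add/remove.
  // After: a concurrent set, safe for concurrent addRegion/removeRegion calls.
  private final Set<RegionStateNodeStub> regions = ConcurrentHashMap.newKeySet();

  void addRegion(RegionStateNodeStub node) {
    regions.add(node);
  }

  void removeRegion(RegionStateNodeStub node) {
    regions.remove(node);
  }

  int getRegionCount() {
    // Cheap size check; no need to copy the set into a new collection first.
    return regions.size();
  }
}
{code}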



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19386) HBase UnsafeAvailChecker returns false on Arm64

2017-11-29 Thread Yuqi Gu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuqi Gu updated HBASE-19386:

Description: 
Arm64v8 supports unaligned access.
But UnsafeAvailChecker returns false due to a JDK bug.
The false return from UnsafeAvailChecker also causes failures in the HBase unit 
tests (FuzzyRowFilter, TestFuzzyRowFilterEndToEnd, 
TestFuzzyRowAndColumnRangeFilter). 
Enable Arm64 unaligned support by providing a hard-coded workaround for the JDK 
bug.


  was:
Arm64v8 supports unaligned access.
But UnsafeAvailChecker returns false due to a JDK bug.
The false return from UnsafeAvailChecker also causes failures in the HBase unit 
tests (FuzzyRowFilter, TestFuzzyRowFilterEndToEnd, 
TestFuzzyRowAndColumnRangeFilter). 
Enable Arm64 unaligned access by providing a hard-coded workaround for the JDK 
bug.



> HBase UnsafeAvailChecker returns false on Arm64
> ---
>
> Key: HBASE-19386
> URL: https://issues.apache.org/jira/browse/HBASE-19386
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Yuqi Gu
>Priority: Minor
>
> Arm64v8 supports unaligned access.
> But UnsafeAvailChecker returns false due to a JDK bug.
> The false return from UnsafeAvailChecker also causes failures in the HBase unit 
> tests (FuzzyRowFilter, TestFuzzyRowFilterEndToEnd, 
> TestFuzzyRowAndColumnRangeFilter). 
> Enable Arm64 unaligned support by providing a hard-coded workaround for the 
> JDK bug.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19382) Update report-flakies.py script to handle yetus builds

2017-11-29 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272244#comment-16272244
 ] 

stack commented on HBASE-19382:
---

Don't we need this in all branches? [~appy]? The 1.2 nightly does a checkout of 
1.2 and uses the dev-support from 1.2.

> Update report-flakies.py script to handle yetus builds
> --
>
> Key: HBASE-19382
> URL: https://issues.apache.org/jira/browse/HBASE-19382
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19382.master.001.patch
>
>
> With the move to the new nightly build, which uses Yetus 
> (https://builds.apache.org/job/HBase%20Nightly/job/master/), the current 
> report-flakies.py cannot build the test report since the maven output is no 
> longer in consoleText. Update the script to accept both traditional builds 
> (maven output in consoleText, for the flakies runner job) and yetus builds 
> (maven output in artifacts).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19367) Refactoring in RegionStates, and RSProcedureDispatcher

2017-11-29 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-19367:
-
Fix Version/s: 2.0.0-beta-1

> Refactoring in RegionStates, and RSProcedureDispatcher
> --
>
> Key: HBASE-19367
> URL: https://issues.apache.org/jira/browse/HBASE-19367
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19367.master.001.patch, 
> HBASE-19367.master.002.patch, HBASE-19367.master.003.patch, 
> HBASE-19367.master.004.patch
>
>
> Working on a bug fix, I was in these parts for the first time, trying to understand 
> the new AM and make sense of things. Did a few improvements along the way.
> - Adding javadoc comments
> - Bug: ServerStateNode#regions is a HashSet but there's no synchronization to 
> prevent concurrent addRegion/removeRegion. Let's use a concurrent set instead.
> - Use getRegionsInTransitionCount() directly instead of 
> getRegionsInTransition().size() because the latter copies everything into a 
> new array - a waste when only the size is needed.
> - There's mixed use of getRegionNode and getRegionStateNode for the same return 
> type - RegionStateNode. Changing everything to getRegionStateNode. Similarly 
> rename the other *RegionNode() fns to *RegionStateNode().
> - RegionStateNode#transitionState()'s return value is useless since it always 
> returns its first param.
> - Other minor improvements



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19382) Update report-flakies.py script to handle yetus builds

2017-11-29 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-19382:
-
Fix Version/s: 2.0.0-beta-1

> Update report-flakies.py script to handle yetus builds
> --
>
> Key: HBASE-19382
> URL: https://issues.apache.org/jira/browse/HBASE-19382
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19382.master.001.patch
>
>
> With the move to the new nightly build, which uses Yetus 
> (https://builds.apache.org/job/HBase%20Nightly/job/master/), the current 
> report-flakies.py cannot build the test report since the maven output is no 
> longer in consoleText. Update the script to accept both traditional builds 
> (maven output in consoleText, for the flakies runner job) and yetus builds 
> (maven output in artifacts).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-4030) LoadIncrementalHFiles fails with FileNotFoundException

2017-11-29 Thread zhang gang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272243#comment-16272243
 ] 

zhang gang commented on HBASE-4030:
---

My question is: 
the first time, the region server opened and renamed the hfile successfully, so why 
does it reopen it at the 10x open?

> LoadIncrementalHFiles fails with FileNotFoundException
> --
>
> Key: HBASE-4030
> URL: https://issues.apache.org/jira/browse/HBASE-4030
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.90.1
> Environment: CDH3bu on Ubuntu 4.4.3
>Reporter: Adam Phelps
>
> -- We've been seeing intermittent failures of calls to LoadIncrementalHFiles. 
>  When this happens the node that made the call will see a 
> FileNotFoundException such as this:
> 2011-06-23 15:47:34.379566500 java.net.SocketTimeoutException: Call to 
> s8.XXX/67.215.90.38:60020 failed on socket timeout exception: 
> java.net.SocketTi
> meoutException: 6 millis timeout while waiting for channel to be ready 
> for read. ch : java.nio.channels.SocketChannel[connected 
> local=/67.215.90.51:51605 remo
> te=s8.XXX/67.215.90.38:60020]
> 2011-06-23 15:47:34.379570500 java.io.FileNotFoundException: 
> java.io.FileNotFoundException: File does not exist: 
> /hfiles/2011/06/23/14/domainsranked/TopDomainsRan
> k.r3v5PRvK/handling/3557032074765091256
> 2011-06-23 15:47:34.379573500   at 
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.openInfo(DFSClient.java:1602)
> 2011-06-23 15:47:34.379573500   at 
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.(DFSClient.java:1593)
> -- Over on the regionserver that was loading this we see that it attempted to 
> load and hit a 60 second timeout:
> 2011-06-23 15:45:54,634 INFO org.apache.hadoop.hbase.regionserver.Store: 
> Validating hfile at 
> hdfs://namenode.XXX/hfiles/2011/06/23/14/domainsranked/TopDomainsRank.r3v5PRvK/handling/3557032074765091256
>  for inclusion in store handling region 
> domainsranked,368449:2011/0/03/23:category::com.zynga.static.fishville.facebook,1305890318961.d4925aca7852bed32613a509215d42b
> 8.
> ...
> 2011-06-23 15:46:54,639 INFO org.apache.hadoop.hdfs.DFSClient: Failed to 
> connect to /67.215.90.38:50010, add to deadNodes and continue
> java.net.SocketTimeoutException: 6 millis timeout while waiting for 
> channel to be ready for read. ch : java.nio.channels.SocketChannel[connected 
> local=/67.215.90.38:42199 remote=/67.215.90.38:50010]
> at 
> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
> at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
> at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
> at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
> at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
> at java.io.DataInputStream.readShort(DataInputStream.java:295)
> -- We suspect this particular problem is a resource contention issue on our 
> side.  However, the loading process proceeds to rename the file despite the 
> failure:
> 2011-06-23 15:46:54,657 INFO org.apache.hadoop.hbase.regionserver.Store: 
> Renaming bulk load file 
> hdfs://namenode.XXX/hfiles/2011/06/23/14/domainsranked/TopDomainsRank.r3v5PRvK/handling/3557032074765091256
>  to 
> hdfs://namenode.XXX:8020/hbase/domainsranked/d4925aca7852bed32613a509215d42b8/handling/3615917062821145533
> -- And then the LoadIncrementalHFiles tries to load the hfile again:
> 2011-06-23 15:46:55,684 INFO org.apache.hadoop.hbase.regionserver.Store: 
> Validating hfile at 
> hdfs://namenode.XXX/hfiles/2011/06/23/14/domainsranked/TopDomainsRank.r3v5PRvK/handling/3557032074765091256
>  for inclusion in store handling region 
> domainsranked,368449:2011/05/03/23:category::com.zynga.static.fishville.facebook,1305890318961.d4925aca7852bed32613a509215d42b8.
> 2011-06-23 15:46:55,685 DEBUG org.apache.hadoop.ipc.HBaseServer: IPC Server 
> handler 147 on 60020, call 
> bulkLoadHFile(hdfs://namenode.XXX/hfiles/2011/06/23/14/domainsranked/TopDomainsRank.r3v5PRvK/handling/3557032074765091256,
>  [B@4224508b, [B@5e23f799) from 67.215.90.51:51856: error: 
> java.io.FileNotFoundException: File does not exist: 
> /hfiles/2011/06/23/14/domainsranked/TopDomainsRank.r3v5PRvK/handling/3557032074765091256
> -- This eventually leads to the load command failing.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HBASE-19382) Update report-flakies.py script to handle yetus builds

2017-11-29 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy resolved HBASE-19382.
--
Resolution: Fixed

> Update report-flakies.py script to handle yetus builds
> --
>
> Key: HBASE-19382
> URL: https://issues.apache.org/jira/browse/HBASE-19382
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
> Attachments: HBASE-19382.master.001.patch
>
>
> With the move to the new nightly build, which uses Yetus 
> (https://builds.apache.org/job/HBase%20Nightly/job/master/), the current 
> report-flakies.py cannot build the test report since the maven output is no 
> longer in consoleText. Update the script to accept both traditional builds 
> (maven output in consoleText, for the flakies runner job) and yetus builds 
> (maven output in artifacts).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19382) Update report-flakies.py script to handle yetus builds

2017-11-29 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272242#comment-16272242
 ] 

Appy commented on HBASE-19382:
--

Pushed to master. Thanks stack.

> Update report-flakies.py script to handle yetus builds
> --
>
> Key: HBASE-19382
> URL: https://issues.apache.org/jira/browse/HBASE-19382
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
> Attachments: HBASE-19382.master.001.patch
>
>
> With the move to the new nightly build, which uses Yetus 
> (https://builds.apache.org/job/HBase%20Nightly/job/master/), the current 
> report-flakies.py cannot build the test report since the maven output is no 
> longer in consoleText. Update the script to accept both traditional builds 
> (maven output in consoleText, for the flakies runner job) and yetus builds 
> (maven output in artifacts).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-19386) HBase UnsafeAvailChecker returns false on Arm64

2017-11-29 Thread Yuqi Gu (JIRA)
Yuqi Gu created HBASE-19386:
---

 Summary: HBase UnsafeAvailChecker returns false on Arm64
 Key: HBASE-19386
 URL: https://issues.apache.org/jira/browse/HBASE-19386
 Project: HBase
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Yuqi Gu
Priority: Minor


Arm64v8 supports unaligned access.
But UnsafeAvailChecker returns false due to a JDK bug.
The false return from UnsafeAvailChecker also causes failures in the HBase unit 
tests (FuzzyRowFilter, TestFuzzyRowFilterEndToEnd, 
TestFuzzyRowAndColumnRangeFilter). 
Enable Arm64 unaligned access by providing a hard-coded workaround for the JDK 
bug.




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19356) Provide delegators and base implementation for Phoenix implemented interfaces

2017-11-29 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272236#comment-16272236
 ] 

Anoop Sam John commented on HBASE-19356:


Good points by Appy. I got what he says about the difference, with respect to the RegionObserver 
kind of interface with default implementations, and the issue with the delegatee model for 
RegionScanner. 
We have the postScannerOpen hook, which takes the RegionScanner object that the core 
created and allows the CP user to return a RegionScanner object. This will be 
mostly (or for sure) a wrapper instance. So the CP has to implement all methods and 
delegate calls.
Yes, we won't have a ref to the delegates if we give it to CPs. I read that argument.
But for these kinds of CP hooks, can we make a change? 
RegionScanner postScannerOpen(ObserverContext c, 
Scan scan, RegionScanner s)   ->
RegionScannerDelegator 
postScannerOpen(ObserverContext c, Scan scan, 
RegionScanner s)
The CP itself returns a wrapper/Delegator type which we provide. This takes the 
original RegionScanner and just delegates calls for all APIs. If the user needs a 
wrapping, what he can do is create a new wrapper extending the 
RegionScannerWrapper and overriding the needed methods. And anyway they have to 
call the delegatee in their impl at the appropriate place, too.
So the CP contract itself says clearly that the hook allows the user to wrap the 
original object, not create a fresh new RegionScanner impl that ignores the one the 
core created.
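
For illustration, a minimal sketch of the delegator idea above, using a simplified
stand-in interface rather than the real RegionScanner (which has more methods), and
with hypothetical class names:

{code:java}
// Simplified stand-in for RegionScanner, to show the delegator pattern only.
import java.io.IOException;
import java.util.List;

interface SimpleScanner {
  boolean next(List<String> out) throws IOException;
  void close() throws IOException;
}

/** Provided by core: delegates every call to the scanner the core created. */
class ScannerDelegator implements SimpleScanner {
  protected final SimpleScanner delegate;

  ScannerDelegator(SimpleScanner delegate) {
    this.delegate = delegate;
  }

  @Override
  public boolean next(List<String> out) throws IOException {
    return delegate.next(out);
  }

  @Override
  public void close() throws IOException {
    delegate.close();
  }
}

/** What a CP would return: extend the delegator and override only what it needs. */
class FilteringScannerDelegator extends ScannerDelegator {
  FilteringScannerDelegator(SimpleScanner delegate) {
    super(delegate);
  }

  @Override
  public boolean next(List<String> out) throws IOException {
    boolean more = super.next(out);
    out.removeIf(v -> v.startsWith("skip-")); // hypothetical extra filtering
    return more;
  }
}
{code}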

> Provide delegators and base implementation for Phoenix implemented interfaces
> -
>
> Key: HBASE-19356
> URL: https://issues.apache.org/jira/browse/HBASE-19356
> Project: HBase
>  Issue Type: Improvement
>Reporter: James Taylor
>
> Many of the changes Phoenix needs to make for various branches to support 
> different versions of HBase are due to new methods being added to interfaces. 
> Oftentimes Phoenix can use a no-op or simply needs to add the new method to 
> its delegate implementor. It'd be helpful if HBase provided base 
> implementations and delegates that Phoenix could use instead. Here are some 
> that come to mind:
> - RegionScanner
> - HTableInterface (Table interface now?)
> - RegionObserver
> There are likely others that [~rajeshbabu], [~an...@apache.org], and 
> [~elserj] would remember.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19367) Refactoring in RegionStates, and RSProcedureDispatcher

2017-11-29 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272231#comment-16272231
 ] 

Appy commented on HBASE-19367:
--

Woohoo...
Committing to master and branch-2. Will fix 4/5 checkstyle issues on commit. 
The last one is preexisting and I don't want to touch it right now (it requires 
changing a loop).
Thanks for the review, [~stack].

> Refactoring in RegionStates, and RSProcedureDispatcher
> --
>
> Key: HBASE-19367
> URL: https://issues.apache.org/jira/browse/HBASE-19367
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Attachments: HBASE-19367.master.001.patch, 
> HBASE-19367.master.002.patch, HBASE-19367.master.003.patch, 
> HBASE-19367.master.004.patch
>
>
> Working on a bug fix, I was in these parts for the first time, trying to understand 
> the new AM and make sense of things. Did a few improvements along the way.
> - Adding javadoc comments
> - Bug: ServerStateNode#regions is a HashSet but there's no synchronization to 
> prevent concurrent addRegion/removeRegion. Let's use a concurrent set instead.
> - Use getRegionsInTransitionCount() directly instead of 
> getRegionsInTransition().size() because the latter copies everything into a 
> new array - a waste when only the size is needed.
> - There's mixed use of getRegionNode and getRegionStateNode for the same return 
> type - RegionStateNode. Changing everything to getRegionStateNode. Similarly 
> rename the other *RegionNode() fns to *RegionStateNode().
> - RegionStateNode#transitionState()'s return value is useless since it always 
> returns its first param.
> - Other minor improvements



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17745) Support short circuit connection for master services

2017-11-29 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272229#comment-16272229
 ] 

Yu Li commented on HBASE-17745:
---

bq. We have to expose it with Public InterfaceAudience?
Not necessarily. Sorry, but I cannot remember why I set it to IA.Public before; 
after a second look it's more like an internal class, and IA.Private may be 
more appropriate.

> Support short circuit connection for master services
> 
>
> Key: HBASE-17745
> URL: https://issues.apache.org/jira/browse/HBASE-17745
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 2.0.0
>
> Attachments: HBASE-17745.patch, HBASE-17745.v2.patch, 
> HBASE-17745.v2.trival.patch, HBASE-17745.v2.trival.patch, HBASE-17745.v3.patch
>
>
> As titled, we now have short-circuit connections, but no short circuit for 
> master services, and we propose to support it in this JIRA.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19379) TestEndToEndSplitTransaction fails with NPE

2017-11-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272223#comment-16272223
 ] 

Hudson commented on HBASE-19379:


FAILURE: Integrated in Jenkins build HBase-1.5 #175 (See 
[https://builds.apache.org/job/HBase-1.5/175/])
HBASE-19379 TestEndToEndSplitTransaction fails with NPE (apurtell: rev 
f3614f20c00a455dd59d6ca46abaa00123b946f9)
* (edit) hbase-client/src/main/java/org/apache/hadoop/hbase/HRegionLocation.java
Revert "HBASE-19379 TestEndToEndSplitTransaction fails with NPE" (apurtell: rev 
0b704d4815892963e7355aa7a587825943b107a0)
* (edit) hbase-client/src/main/java/org/apache/hadoop/hbase/HRegionLocation.java
HBASE-19379 TestEndToEndSplitTransaction fails with NPE (apurtell: rev 
cf34adaf5ef3ad6b89d57e1a6adb874fbe1cfc68)
* (edit) hbase-client/src/main/java/org/apache/hadoop/hbase/HRegionLocation.java


> TestEndToEndSplitTransaction fails with NPE
> ---
>
> Key: HBASE-19379
> URL: https://issues.apache.org/jira/browse/HBASE-19379
> Project: HBase
>  Issue Type: Bug
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Trivial
> Fix For: 1.4.0
>
> Attachments: HBASE-19379-branch-1.patch, HBASE-19379-branch-1.patch
>
>
> TestEndToEndSplitTransaction
> ---
> Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 44.71 s <<< 
> FAILURE! - in 
> org.apache.hadoop.hbase.regionserver.TestEndToEndSplitTransaction
> testFromClientSideWhileSplitting(org.apache.hadoop.hbase.regionserver.TestEndToEndSplitTransaction)
>   Time elapsed: 18.913 s  <<< ERROR!
> java.lang.NullPointerException



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18946) Stochastic load balancer assigns replica regions to the same RS

2017-11-29 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272220#comment-16272220
 ] 

stack commented on HBASE-18946:
---

This last patch is very nice. Clean.

What do you mean by this, sir: "But the problem is no longer we call 
roundrobinAssignment from LB instead we will call retainAssignment() only. This 
happens only when replicas are created." Do we need to add more knowledge of 
replicas to the LB?

> Stochastic load balancer assigns replica regions to the same RS
> ---
>
> Key: HBASE-18946
> URL: https://issues.apache.org/jira/browse/HBASE-18946
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-3
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18946.patch, HBASE-18946.patch, 
> HBASE-18946_2.patch, HBASE-18946_2.patch, HBASE-18946_simple_7.patch, 
> HBASE-18946_simple_8.patch, TestRegionReplicasWithRestartScenarios.java
>
>
> Trying out region replicas and their assignment, I can see that sometimes the 
> default LB, the Stochastic load balancer, assigns replica regions to the same RS. 
> This happens when we have 3 RSs checked in and a table with 3 
> replicas. When an RS goes down, the replicas being assigned to the same RS is 
> acceptable, but when we have enough RSs to assign to, this behaviour is 
> undesirable and defeats the purpose of replicas. 
> [~huaxiang] and [~enis]. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-17918) document serial replication

2017-11-29 Thread Yi Mei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Mei updated HBASE-17918:
---
Status: Patch Available  (was: Open)

> document serial replication
> ---
>
> Key: HBASE-17918
> URL: https://issues.apache.org/jira/browse/HBASE-17918
> Project: HBase
>  Issue Type: Task
>  Components: documentation, Replication
>Affects Versions: 2.0.0, 3.0.0
>Reporter: Sean Busbey
>Assignee: Yi Mei
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-17918.v1.patch
>
>
> It looks like HBASE-9465 addresses one of the major flaws in our existing 
> replication (namely that order of delivery is not assured). All I see in the 
> reference guide is a note on {{hbase.serial.replication.waitingMs}}. Instead 
> we should cover this in the replication section, especially given that we 
> call out the order of delivery limitation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-17918) document serial replication

2017-11-29 Thread Yi Mei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Mei updated HBASE-17918:
---
Attachment: HBASE-17918.v1.patch

> document serial replication
> ---
>
> Key: HBASE-17918
> URL: https://issues.apache.org/jira/browse/HBASE-17918
> Project: HBase
>  Issue Type: Task
>  Components: documentation, Replication
>Affects Versions: 2.0.0, 3.0.0
>Reporter: Sean Busbey
>Assignee: Yi Mei
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-17918.v1.patch
>
>
> It looks like HBASE-9465 addresses one of the major flaws in our existing 
> replication (namely that order of delivery is not assured). All I see in the 
> reference guide is a note on {{hbase.serial.replication.waitingMs}}. Instead 
> we should cover this in the replication section, especially given that we 
> call out the order of delivery limitation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19383) [1.2] java.lang.AssertionError: expected:<2> but was:<1> at org.apache.hadoop.hbase.TestChoreService.testTriggerNowFailsWhenNotScheduled(TestChoreService.java:707)

2017-11-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272216#comment-16272216
 ] 

Hudson commented on HBASE-19383:


FAILURE: Integrated in Jenkins build HBase-2.0 #940 (See 
[https://builds.apache.org/job/HBase-2.0/940/])
HBASE-19383 [1.2] java.lang.AssertionError: expected:<2> but was:<1> at (stack: 
rev 94197099952cd09cd36103d79a9dd4e34c191556)
* (edit) 
hbase-common/src/test/java/org/apache/hadoop/hbase/TestChoreService.java


> [1.2] java.lang.AssertionError: expected:<2> but was:<1>  at 
> org.apache.hadoop.hbase.TestChoreService.testTriggerNowFailsWhenNotScheduled(TestChoreService.java:707)
> 
>
> Key: HBASE-19383
> URL: https://issues.apache.org/jira/browse/HBASE-19383
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: stack
>Assignee: stack
> Fix For: 1.3.2, 1.4.1, 1.2.7, 2.0.0-beta-2
>
> Attachments: 19383.txt
>
>
> A timer-based test that asserts hard numbers about how many times 
> something is called. I'm just going to remove it. It killed my 1.2 nightly 
> test run.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19359) Revisit the default config of hbase client retries number

2017-11-29 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272215#comment-16272215
 ] 

Guanghao Zhang commented on HBASE-19359:


bq. Pushed to branch-2 and master.
Thanks, sir. :-)

> Revisit the default config of hbase client retries number
> -
>
> Key: HBASE-19359
> URL: https://issues.apache.org/jira/browse/HBASE-19359
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19359.master.001.patch, 
> HBASE-19359.master.001.patch, HBASE-19359.master.001.patch
>
>
> This should be a sub-task of HBASE-19148. The retries number affects too many 
> unit tests, so I opened this issue to see the Hadoop QA result.
> The default value of hbase.client.retries.number is 35. Plan to reduce this 
> to 10.
> And for the server side, the default hbase.client.serverside.retries.multiplier 
> is 10. So the server-side retries number is 35 * 10 = 350. It is too big! 
> Plan to reduce hbase.client.serverside.retries.multiplier to 3.
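
For illustration, what the proposed values would look like if set explicitly on the
client side (the property names come from the description above; whether the shipped
defaults change is what this issue decides):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class RetryConfigSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Proposed client-side default: 10 retries instead of 35.
    conf.setInt("hbase.client.retries.number", 10);
    // Proposed server-side multiplier: 3, i.e. 10 * 3 = 30 server-side retries
    // instead of the old 35 * 10 = 350.
    conf.setInt("hbase.client.serverside.retries.multiplier", 3);
    System.out.println("client retries = "
        + conf.getInt("hbase.client.retries.number", -1));
  }
}
{code}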



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19367) Refactoring in RegionStates, and RSProcedureDispatcher

2017-11-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272214#comment-16272214
 ] 

Hadoop QA commented on HBASE-19367:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
23s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
38s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
58s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  6m 
30s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} hbase-client: The patch generated 0 new + 99 
unchanged - 3 fixed = 99 total (was 102) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} hbase-zookeeper: The patch generated 0 new + 61 
unchanged - 1 fixed = 61 total (was 62) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} The patch hbase-procedure passed checkstyle {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
58s{color} | {color:red} hbase-server: The patch generated 5 new + 421 
unchanged - 4 fixed = 426 total (was 425) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} The patch hbase-rsgroup passed checkstyle {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
35s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
51m 38s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
55s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} hbase-zookeeper in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m  
4s{color} | {color:green} hbase-procedure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 89m 
49s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | 

[jira] [Commented] (HBASE-19385) [1.3] TestReplicator failed 1.3 nightly

2017-11-29 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272203#comment-16272203
 ] 

stack commented on HBASE-19385:
---

See 
https://builds.apache.org/view/H-L/view/HBase/job/HBase%20Nightly/job/branch-1.3/148/testReport/junit/org.apache.hadoop.hbase.replication.regionserver/TestReplicator/yetus_jdk8_checks___testReplicatorBatching/

Here is the exception:

yetus jdk8 checks / 
org.apache.hadoop.hbase.replication.regionserver.TestReplicator.testReplicatorBatching

Failing for the past 1 build (Since Failed#148 )
Took 1 min 5 sec.
Error Message

Waiting timed out after [60,000] msec We waited too long for expected 
replication of 10 entries
Stacktrace

junit.framework.AssertionFailedError: Waiting timed out after [60,000] msec We 
waited too long for expected replication of 10 entries
at 
org.apache.hadoop.hbase.replication.regionserver.TestReplicator.testReplicatorBatching(TestReplicator.java:95)

> [1.3] TestReplicator failed 1.3 nightly
> ---
>
> Key: HBASE-19385
> URL: https://issues.apache.org/jira/browse/HBASE-19385
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: stack
>Assignee: stack
> Fix For: 1.3.2
>
>
> TestReplicator failed the 1.3 nightly. Running it locally, it fails sometimes. 
> The complaint is an IllegalMonitorStateException and, indeed, the locking around 
> the latch is unsafe. After fixing this, I can't get it to fail locally anymore.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-19385) [1.3] TestReplicator failed 1.3 nightly

2017-11-29 Thread stack (JIRA)
stack created HBASE-19385:
-

 Summary: [1.3] TestReplicator failed 1.3 nightly
 Key: HBASE-19385
 URL: https://issues.apache.org/jira/browse/HBASE-19385
 Project: HBase
  Issue Type: Bug
  Components: test
Reporter: stack
Assignee: stack
 Fix For: 1.3.2


TestReplicator failed the 1.3 nightly. Running it locally, it fails sometimes. 
The complaint is an IllegalMonitorStateException and, indeed, the locking around 
the latch is unsafe. After fixing this, I can't get it to fail locally anymore.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HBASE-18645) Loads of tests timing out....

2017-11-29 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-18645.
---
Resolution: Duplicate
  Assignee: Chia-Ping Tsai

Assigning Chia-Ping Tsai because he stayed on top of it.

Resolving as duplicate/subsumed by HBASE-19204/HBASE-19354.

Thanks.

> Loads of tests timing out
> -
>
> Key: HBASE-18645
> URL: https://issues.apache.org/jira/browse/HBASE-18645
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>Assignee: Chia-Ping Tsai
>Priority: Critical
> Attachments: HBASE-18645.master.001.patch, 
> HBASE-18645.master.001.patch
>
>
> What's up? Why are tests mostly timing out? When did it start? I can't seem to 
> make it happen locally, so it's tough doing a bisect.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19344) improve asyncWAL by using Independent thread for netty #IO in FanOutOneBlockAsyncDFSOutput

2017-11-29 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272191#comment-16272191
 ] 

stack commented on HBASE-19344:
---

Go for it. I haven't had time to do the async test today but will be on it 
tomorrow. Thanks (I want to look at hadoop 2.7 again... and at why verify in WALPE 
fails with asyncfs).

> improve asyncWAL by using Independent thread for netty #IO in 
> FanOutOneBlockAsyncDFSOutput 
> ---
>
> Key: HBASE-19344
> URL: https://issues.apache.org/jira/browse/HBASE-19344
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0-beta-1
>Reporter: Chance Li
>Assignee: Chance Li
> Fix For: 2.0.0
>
> Attachments: HBASE-19344-branch-ycsb-1.png, 
> HBASE-19344-branch.ycsb.png, HBASE-19344-branch.ycsb.png, 
> HBASE-19344-branch2.patch, HBASE-19344-branch2.patch.2.POC, 
> wal-1-test-result.png, wal-8-test-result.png, 
> ycsb_result_apache20_async_wal.pdf
>
>
> The logic now is that the netty #IO thread and asyncWal's thread are the same 
> one.
> Improvement proposal:
> 1. Split them into two.
> 2. Have all multiWal instances share the netty #IO thread pool. 
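
For illustration, a rough sketch of the shared-pool idea, not the actual
FanOutOneBlockAsyncDFSOutput wiring: create one netty event loop group up front and
hand the same group to every WAL writer, so WAL threads and netty I/O threads stay
separate. Everything except netty's NioEventLoopGroup is hypothetical.

{code:java}
// Rough sketch only; WalWriterStub is hypothetical, not HBase's real WAL classes.
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;

public class SharedWalEventLoopSketch {

  /** One group shared by all multiWal instances for network I/O. */
  private static final EventLoopGroup SHARED_IO_GROUP = new NioEventLoopGroup(4);

  /** Hypothetical writer that submits its I/O to the shared group. */
  static final class WalWriterStub {
    private final EventLoopGroup ioGroup;

    WalWriterStub(EventLoopGroup ioGroup) {
      this.ioGroup = ioGroup;
    }

    void append(byte[] entry) {
      // WAL bookkeeping stays on the caller's thread; only the flush work
      // goes onto the shared netty I/O threads.
      ioGroup.next().execute(() -> flush(entry));
    }

    private void flush(byte[] entry) {
      // placeholder for the actual network write
    }
  }

  public static void main(String[] args) {
    WalWriterStub wal1 = new WalWriterStub(SHARED_IO_GROUP);
    WalWriterStub wal2 = new WalWriterStub(SHARED_IO_GROUP);
    wal1.append(new byte[0]);
    wal2.append(new byte[0]);
    SHARED_IO_GROUP.shutdownGracefully();
  }
}
{code}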



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19344) improve asyncWAL by using Independent thread for netty #IO in FanOutOneBlockAsyncDFSOutput

2017-11-29 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272189#comment-16272189
 ] 

Duo Zhang commented on HBASE-19344:
---

Then let me commit (both HBASE-19346 and HBASE-19344) and let's go back to 
HBASE-16890 to begin the next round of testing.

[~ram_krish] [~stack] FYI.

> improve asyncWAL by using Independent thread for netty #IO in 
> FanOutOneBlockAsyncDFSOutput 
> ---
>
> Key: HBASE-19344
> URL: https://issues.apache.org/jira/browse/HBASE-19344
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0-beta-1
>Reporter: Chance Li
>Assignee: Chance Li
> Fix For: 2.0.0
>
> Attachments: HBASE-19344-branch-ycsb-1.png, 
> HBASE-19344-branch.ycsb.png, HBASE-19344-branch.ycsb.png, 
> HBASE-19344-branch2.patch, HBASE-19344-branch2.patch.2.POC, 
> wal-1-test-result.png, wal-8-test-result.png, 
> ycsb_result_apache20_async_wal.pdf
>
>
> The logic now is that the netty #IO thread and asyncWal's thread are the same 
> one.
> Improvement proposal:
> 1. Split them into two.
> 2. Have all multiWal instances share the netty #IO thread pool. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19056) TestCompactionInDeadRegionServer is top of the flakies charts!

2017-11-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272185#comment-16272185
 ] 

Hadoop QA commented on HBASE-19056:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
1s{color} | {color:blue} The patch file was not named according to hbase's 
naming conventions. Please see 
https://yetus.apache.org/documentation/0.6.0/precommit-patchnames for 
instructions. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
32s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  6m 
51s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
30s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
58m 44s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}143m 
13s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}224m 37s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19056 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12899928/19056.v6.txt |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 0161ed6c2f08 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 
15:49:21 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / abb535eef6 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10125/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10125/console |
| Powered by | Apache 

[jira] [Commented] (HBASE-19056) TestCompactionInDeadRegionServer is top of the flakies charts!

2017-11-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272184#comment-16272184
 ] 

Hadoop QA commented on HBASE-19056:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue}  0m  
2s{color} | {color:blue} The patch file was not named according to hbase's 
naming conventions. Please see 
https://yetus.apache.org/documentation/0.6.0/precommit-patchnames for 
instructions. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 0s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 9s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  6m 
 4s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
57s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
53m 18s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 90m 
27s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}164m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19056 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12899935/19056.v7.txt |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 490e6871ea56 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / abb535eef6 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10129/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10129/console |
| Powered by | Apache 

[jira] [Commented] (HBASE-19379) TestEndToEndSplitTransaction fails with NPE

2017-11-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272183#comment-16272183
 ] 

Hudson commented on HBASE-19379:


FAILURE: Integrated in Jenkins build HBase-1.4 #1034 (See 
[https://builds.apache.org/job/HBase-1.4/1034/])
Revert "HBASE-19379 TestEndToEndSplitTransaction fails with NPE" (apurtell: rev 
ef12ee48045ad39b5cb99b9460f02d5c2a98fa57)
* (edit) hbase-client/src/main/java/org/apache/hadoop/hbase/HRegionLocation.java


> TestEndToEndSplitTransaction fails with NPE
> ---
>
> Key: HBASE-19379
> URL: https://issues.apache.org/jira/browse/HBASE-19379
> Project: HBase
>  Issue Type: Bug
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Trivial
> Fix For: 1.4.0
>
> Attachments: HBASE-19379-branch-1.patch, HBASE-19379-branch-1.patch
>
>
> TestEndToEndSplitTransaction
> ---
> Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 44.71 s <<< 
> FAILURE! - in 
> org.apache.hadoop.hbase.regionserver.TestEndToEndSplitTransaction
> testFromClientSideWhileSplitting(org.apache.hadoop.hbase.regionserver.TestEndToEndSplitTransaction)
>   Time elapsed: 18.913 s  <<< ERROR!
> java.lang.NullPointerException



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19382) Update report-flakies.py script to handle yetus builds

2017-11-29 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272178#comment-16272178
 ] 

stack commented on HBASE-19382:
---

+1 Try it.

> Update report-flakies.py script to handle yetus builds
> --
>
> Key: HBASE-19382
> URL: https://issues.apache.org/jira/browse/HBASE-19382
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
> Attachments: HBASE-19382.master.001.patch
>
>
> With the move to the new nightly build which uses yetus 
> (https://builds.apache.org/job/HBase%20Nightly/job/master/), the current 
> report-flakies.py is not able to build the test report since maven output is 
> no longer in consoleText. Update the script to accept both traditional builds 
> (maven output in consoleText, for the flakies runner job) and yetus builds 
> (maven output in artifacts).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19378) Backport HBASE-19252 "Move the transform logic of FilterList into transformCell() method to avoid extra ref to question cell"

2017-11-29 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-19378:
--
Fix Version/s: (was: 2.0.0-beta-1)
   (was: 3.0.0)

> Backport HBASE-19252 "Move the transform logic of FilterList into 
> transformCell() method to avoid extra ref to question cell"
> -
>
> Key: HBASE-19378
> URL: https://issues.apache.org/jira/browse/HBASE-19378
> Project: HBase
>  Issue Type: Sub-task
>  Components: Filters
>Reporter: stack
>Assignee: Zheng Hu
>Priority: Critical
> Fix For: 1.4.1
>
> Attachments: HBASE-19252-branch-1.4.v1.patch
>
>
> Backport the parent to branch-1. It is taking a while to get it in, so I 
> created a new subtask.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19378) Backport HBASE-19252 "Move the transform logic of FilterList into transformCell() method to avoid extra ref to question cell"

2017-11-29 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272174#comment-16272174
 ] 

stack commented on HBASE-19378:
---

Retry? Maybe HBASE-19379 got it?

> Backport HBASE-19252 "Move the transform logic of FilterList into 
> transformCell() method to avoid extra ref to question cell"
> -
>
> Key: HBASE-19378
> URL: https://issues.apache.org/jira/browse/HBASE-19378
> Project: HBase
>  Issue Type: Sub-task
>  Components: Filters
>Reporter: stack
>Assignee: Zheng Hu
>Priority: Critical
> Fix For: 3.0.0, 1.4.1, 2.0.0-beta-1
>
> Attachments: HBASE-19252-branch-1.4.v1.patch
>
>
> Backport the parent to branch-1. It is taking a while to get it in, so I 
> created a new subtask.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19383) [1.2] java.lang.AssertionError: expected:<2> but was:<1> at org.apache.hadoop.hbase.TestChoreService.testTriggerNowFailsWhenNotScheduled(TestChoreService.java:707)

2017-11-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272172#comment-16272172
 ] 

Hudson commented on HBASE-19383:


FAILURE: Integrated in Jenkins build HBase-1.2-IT #1026 (See 
[https://builds.apache.org/job/HBase-1.2-IT/1026/])
HBASE-19383 [1.2] java.lang.AssertionError: expected:<2> but was:<1> at (stack: 
rev 93b380dd9638b46e7abd37082a1e5f64f78586fc)
* (edit) 
hbase-common/src/test/java/org/apache/hadoop/hbase/TestChoreService.java


> [1.2] java.lang.AssertionError: expected:<2> but was:<1>  at 
> org.apache.hadoop.hbase.TestChoreService.testTriggerNowFailsWhenNotScheduled(TestChoreService.java:707)
> 
>
> Key: HBASE-19383
> URL: https://issues.apache.org/jira/browse/HBASE-19383
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: stack
>Assignee: stack
> Fix For: 1.3.2, 1.4.1, 1.2.7, 2.0.0-beta-2
>
> Attachments: 19383.txt
>
>
> A test that is based on timers that asserts hard numbers about how many times 
> something is called. I'm just going to remove it. It killed my 1.2 nightly 
> test run.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18112) Write RequestTooBigException back to client for NettyRpcServer

2017-11-29 Thread Toshihiro Suzuki (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272170#comment-16272170
 ] 

Toshihiro Suzuki commented on HBASE-18112:
--

I mistakenly submitted the wrong patch. I have just resubmitted the correct one.

> Write RequestTooBigException back to client for NettyRpcServer
> --
>
> Key: HBASE-18112
> URL: https://issues.apache.org/jira/browse/HBASE-18112
> Project: HBase
>  Issue Type: Sub-task
>  Components: IPC/RPC
>Reporter: Duo Zhang
>Assignee: Toshihiro Suzuki
> Attachments: HBASE-18112-v2.patch, HBASE-18112-v2.patch, 
> HBASE-18112-v2.patch, HBASE-18112-v3.patch, HBASE-18112-v3.patch, 
> HBASE-18112-v4.patch, HBASE-18112-v4.patch, HBASE-18112-v5.patch, 
> HBASE-18112.patch
>
>
> For now we just close the connection, so NettyRpcServer cannot pass TestIPC.
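
A minimal, generic Netty sketch of the behaviour the patch is after: write an 
error back to the client and only then close, instead of dropping the connection 
silently. The handler name, the size check, and the plain-text error payload are 
illustrative assumptions here, not the actual NettyRpcServer framing or the 
RequestTooBigException wire format.

{code:java}
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.util.CharsetUtil;

// Hypothetical handler: rejects oversized request frames with an error response.
public class RequestSizeLimitHandler extends ChannelInboundHandlerAdapter {
  private final int maxRequestSize;

  public RequestSizeLimitHandler(int maxRequestSize) {
    this.maxRequestSize = maxRequestSize;
  }

  @Override
  public void channelRead(ChannelHandlerContext ctx, Object msg) {
    ByteBuf frame = (ByteBuf) msg;
    if (frame.readableBytes() > maxRequestSize) {
      frame.release();
      // Send an error payload first, then close once the write has flushed, so
      // the client sees an error instead of a bare connection reset.
      ByteBuf error = Unpooled.copiedBuffer(
          "RequestTooBigException: request exceeds " + maxRequestSize + " bytes",
          CharsetUtil.UTF_8);
      ctx.writeAndFlush(error).addListener(ChannelFutureListener.CLOSE);
      return;
    }
    ctx.fireChannelRead(frame);
  }
}
{code}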



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19378) Backport HBASE-19252 "Move the transform logic of FilterList into transformCell() method to avoid extra ref to question cell"

2017-11-29 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272171#comment-16272171
 ] 

stack commented on HBASE-19378:
---

Failure doesn't look related: TestHRegionLocation.testHashAndEqualsCode:55. 
That ain't you? Is it a problem in 1.4?

> Backport HBASE-19252 "Move the transform logic of FilterList into 
> transformCell() method to avoid extra ref to question cell"
> -
>
> Key: HBASE-19378
> URL: https://issues.apache.org/jira/browse/HBASE-19378
> Project: HBase
>  Issue Type: Sub-task
>  Components: Filters
>Reporter: stack
>Assignee: Zheng Hu
>Priority: Critical
> Fix For: 3.0.0, 1.4.1, 2.0.0-beta-1
>
> Attachments: HBASE-19252-branch-1.4.v1.patch
>
>
> Backport the parent to branch-1. It is taking a while to get it in, so I 
> created a new subtask.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18112) Write RequestTooBigException back to client for NettyRpcServer

2017-11-29 Thread Toshihiro Suzuki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated HBASE-18112:
-
Attachment: HBASE-18112-v5.patch

> Write RequestTooBigException back to client for NettyRpcServer
> --
>
> Key: HBASE-18112
> URL: https://issues.apache.org/jira/browse/HBASE-18112
> Project: HBase
>  Issue Type: Sub-task
>  Components: IPC/RPC
>Reporter: Duo Zhang
>Assignee: Toshihiro Suzuki
> Attachments: HBASE-18112-v2.patch, HBASE-18112-v2.patch, 
> HBASE-18112-v2.patch, HBASE-18112-v3.patch, HBASE-18112-v3.patch, 
> HBASE-18112-v4.patch, HBASE-18112-v4.patch, HBASE-18112-v5.patch, 
> HBASE-18112.patch
>
>
> For now we just close the connection, so NettyRpcServer cannot pass TestIPC.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18112) Write RequestTooBigException back to client for NettyRpcServer

2017-11-29 Thread Toshihiro Suzuki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated HBASE-18112:
-
Attachment: (was: HBASE-18112-v5.patch)

> Write RequestTooBigException back to client for NettyRpcServer
> --
>
> Key: HBASE-18112
> URL: https://issues.apache.org/jira/browse/HBASE-18112
> Project: HBase
>  Issue Type: Sub-task
>  Components: IPC/RPC
>Reporter: Duo Zhang
>Assignee: Toshihiro Suzuki
> Attachments: HBASE-18112-v2.patch, HBASE-18112-v2.patch, 
> HBASE-18112-v2.patch, HBASE-18112-v3.patch, HBASE-18112-v3.patch, 
> HBASE-18112-v4.patch, HBASE-18112-v4.patch, HBASE-18112-v5.patch, 
> HBASE-18112.patch
>
>
> For now we just close the connection, so NettyRpcServer cannot pass TestIPC.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17852) Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental backup)

2017-11-29 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272169#comment-16272169
 ] 

stack commented on HBASE-17852:
---

Moving out of beta-1. I ask questions and get rubbish back.

The contributor has the wrong attitude: operators who'd rather avoid reading 
logs and having to run repair tools are 'lazy'.

> Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental 
> backup)
> 
>
> Key: HBASE-17852
> URL: https://issues.apache.org/jira/browse/HBASE-17852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-17852-v1.patch, HBASE-17852-v2.patch, 
> HBASE-17852-v3.patch, HBASE-17852-v4.patch, HBASE-17852-v5.patch, 
> HBASE-17852-v6.patch, HBASE-17852-v7.patch, HBASE-17852-v8.patch, 
> HBASE-17852-v9.patch
>
>
> Design approach: rollback-via-snapshot, implemented in this ticket:
> # Before a backup create/delete/merge starts we take a snapshot of the backup 
> meta-table (backup system table). This procedure is lightweight because the 
> meta table is small and usually fits in a single region.
> # When an operation fails on the server side, we handle the failure by 
> cleaning up partial data in the backup destination, followed by restoring the 
> backup meta-table from the snapshot (see the sketch below).
> # When an operation fails on the client side (abnormal termination, for 
> example), the next time the user tries a create/merge/delete they will see an 
> error message that the system is in an inconsistent state and repair is 
> required; they will need to run the backup repair tool.
> # To avoid multiple writers to the backup system table (the backup client and 
> BackupObserver's) we introduce a small table ONLY to keep the listing of bulk 
> loaded files. All backup observers will work only with this new table. The 
> reason: in case of a failure during backup create/delete/merge/restore, when 
> the system performs automatic rollback, some data written by backup observers 
> during the failed operation may be lost. This is what we try to avoid.
> # The second table keeps only bulk load related references. We do not care 
> about the consistency of this table, because bulk load is an idempotent 
> operation and can be repeated after failure. Partially written data in the 
> second table does not affect the BackupHFileCleaner plugin, because this data 
> (the list of bulk loaded files) corresponds to files which have not yet been 
> loaded successfully and, hence, are not visible to the system.
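
A minimal sketch of the rollback-via-snapshot idea above, using only the public 
Admin API. The backup system table name, the snapshot name, and the placeholder 
runBackupOperation() are assumptions for illustration; the actual patch works 
against the backup framework internals.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class BackupMetaRollbackSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    TableName backupMeta = TableName.valueOf("backup:system");   // assumed name
    String snapshotName = "backup-meta-before-op";               // assumed name
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      // 1. Snapshot the (small) backup meta-table before the operation starts.
      admin.snapshot(snapshotName, backupMeta);
      try {
        runBackupOperation();   // placeholder for backup create/delete/merge
      } catch (Exception serverSideFailure) {
        // 2. On a server-side failure: restore the meta-table from the snapshot
        //    (cleanup of partial data in the backup destination is omitted here).
        admin.disableTable(backupMeta);
        admin.restoreSnapshot(snapshotName);
        admin.enableTable(backupMeta);
        throw serverSideFailure;
      } finally {
        admin.deleteSnapshot(snapshotName);
      }
    }
  }

  private static void runBackupOperation() {
    // Placeholder for the actual backup work.
  }
}
{code}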



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-17852) Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental backup)

2017-11-29 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-17852:
--
Fix Version/s: (was: 2.0.0-beta-1)
   2.0.0

> Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental 
> backup)
> 
>
> Key: HBASE-17852
> URL: https://issues.apache.org/jira/browse/HBASE-17852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-17852-v1.patch, HBASE-17852-v2.patch, 
> HBASE-17852-v3.patch, HBASE-17852-v4.patch, HBASE-17852-v5.patch, 
> HBASE-17852-v6.patch, HBASE-17852-v7.patch, HBASE-17852-v8.patch, 
> HBASE-17852-v9.patch
>
>
> Design approach: rollback-via-snapshot, implemented in this ticket:
> # Before a backup create/delete/merge starts we take a snapshot of the backup 
> meta-table (backup system table). This procedure is lightweight because the 
> meta table is small and usually fits in a single region.
> # When an operation fails on the server side, we handle the failure by 
> cleaning up partial data in the backup destination, followed by restoring the 
> backup meta-table from the snapshot.
> # When an operation fails on the client side (abnormal termination, for 
> example), the next time the user tries a create/merge/delete they will see an 
> error message that the system is in an inconsistent state and repair is 
> required; they will need to run the backup repair tool.
> # To avoid multiple writers to the backup system table (the backup client and 
> BackupObserver's) we introduce a small table ONLY to keep the listing of bulk 
> loaded files. All backup observers will work only with this new table. The 
> reason: in case of a failure during backup create/delete/merge/restore, when 
> the system performs automatic rollback, some data written by backup observers 
> during the failed operation may be lost. This is what we try to avoid.
> # The second table keeps only bulk load related references. We do not care 
> about the consistency of this table, because bulk load is an idempotent 
> operation and can be repeated after failure. Partially written data in the 
> second table does not affect the BackupHFileCleaner plugin, because this data 
> (the list of bulk loaded files) corresponds to files which have not yet been 
> loaded successfully and, hence, are not visible to the system.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18112) Write RequestTooBigException back to client for NettyRpcServer

2017-11-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272163#comment-16272163
 ] 

Hadoop QA commented on HBASE-18112:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HBASE-18112 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.6.0/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HBASE-18112 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12899944/HBASE-18112-v5.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10134/console |
| Powered by | Apache Yetus 0.6.0   http://yetus.apache.org |


This message was automatically generated.



> Write RequestTooBigException back to client for NettyRpcServer
> --
>
> Key: HBASE-18112
> URL: https://issues.apache.org/jira/browse/HBASE-18112
> Project: HBase
>  Issue Type: Sub-task
>  Components: IPC/RPC
>Reporter: Duo Zhang
>Assignee: Toshihiro Suzuki
> Attachments: HBASE-18112-v2.patch, HBASE-18112-v2.patch, 
> HBASE-18112-v2.patch, HBASE-18112-v3.patch, HBASE-18112-v3.patch, 
> HBASE-18112-v4.patch, HBASE-18112-v4.patch, HBASE-18112-v5.patch, 
> HBASE-18112.patch
>
>
> For now we just close the connection, so NettyRpcServer cannot pass TestIPC.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HBASE-19383) [1.2] java.lang.AssertionError: expected:<2> but was:<1> at org.apache.hadoop.hbase.TestChoreService.testTriggerNowFailsWhenNotScheduled(TestChoreService.java:707)

2017-11-29 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack reassigned HBASE-19383:
-

 Assignee: stack
Fix Version/s: 2.0.0-beta-2
   1.2.7
   1.4.1
   1.3.2
  Component/s: test

So, leaving open till all clear on branch-1.4. I'll then push it there and 
close the issue out.

> [1.2] java.lang.AssertionError: expected:<2> but was:<1>  at 
> org.apache.hadoop.hbase.TestChoreService.testTriggerNowFailsWhenNotScheduled(TestChoreService.java:707)
> 
>
> Key: HBASE-19383
> URL: https://issues.apache.org/jira/browse/HBASE-19383
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: stack
>Assignee: stack
> Fix For: 1.3.2, 1.4.1, 1.2.7, 2.0.0-beta-2
>
> Attachments: 19383.txt
>
>
> A test that is based on timers that asserts hard numbers about how many times 
> something is called. I'm just going to remove it. It killed my 1.2 nightly 
> test run.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19383) [1.2] java.lang.AssertionError: expected:<2> but was:<1> at org.apache.hadoop.hbase.TestChoreService.testTriggerNowFailsWhenNotScheduled(TestChoreService.java:707)

2017-11-29 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-19383:
--
Attachment: 19383.txt

I pushed this to branch-1.2, 1.3, 1, 2, and master. I did not push to 1.4 yet, 
guessing @apurtell is making an RC. I'll push it in when I get the all clear.

> [1.2] java.lang.AssertionError: expected:<2> but was:<1>  at 
> org.apache.hadoop.hbase.TestChoreService.testTriggerNowFailsWhenNotScheduled(TestChoreService.java:707)
> 
>
> Key: HBASE-19383
> URL: https://issues.apache.org/jira/browse/HBASE-19383
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
> Attachments: 19383.txt
>
>
> A test that is based on timers that asserts hard numbers about how many times 
> something is called. I'm just going to remove it. It killed my 1.2 nightly 
> test run.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18112) Write RequestTooBigException back to client for NettyRpcServer

2017-11-29 Thread Toshihiro Suzuki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Toshihiro Suzuki updated HBASE-18112:
-
Attachment: HBASE-18112-v5.patch

> Write RequestTooBigException back to client for NettyRpcServer
> --
>
> Key: HBASE-18112
> URL: https://issues.apache.org/jira/browse/HBASE-18112
> Project: HBase
>  Issue Type: Sub-task
>  Components: IPC/RPC
>Reporter: Duo Zhang
>Assignee: Toshihiro Suzuki
> Attachments: HBASE-18112-v2.patch, HBASE-18112-v2.patch, 
> HBASE-18112-v2.patch, HBASE-18112-v3.patch, HBASE-18112-v3.patch, 
> HBASE-18112-v4.patch, HBASE-18112-v4.patch, HBASE-18112-v5.patch, 
> HBASE-18112.patch
>
>
> For now we just close the connection, so NettyRpcServer cannot pass TestIPC.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19384) Results returned by preAppend hook in a coprocessor are replaced with null from other coprocessor even on bypass

2017-11-29 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272158#comment-16272158
 ] 

Rajeshbabu Chintaguntla commented on HBASE-19384:
-

Ping  [~anoop.hbase] [~elserj] [~sergey.soldatov] [~an...@apache.org]. 

> Results returned by preAppend hook in a coprocessor are replaced with null 
> from other coprocessor even on bypass
> 
>
> Key: HBASE-19384
> URL: https://issues.apache.org/jira/browse/HBASE-19384
> Project: HBase
>  Issue Type: Bug
>  Components: Coprocessors
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 3.0.0, 2.0.0-beta-1
>
>
> Phoenix adds multiple coprocessors to a table, and one of them implements 
> preAppend and preIncrement and bypasses the operations by returning results. 
> But the other coprocessors, which have no such implementation, return null, so 
> the results returned by the previous coprocessor are overridden by null and 
> the default implementations of the append and increment operations always run. 
> This was not the case with old versions, where bypass worked fine.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-19384) Results returned by preAppend hook in a coprocessor are replaced with null from other coprocessor even on bypass

2017-11-29 Thread Rajeshbabu Chintaguntla (JIRA)
Rajeshbabu Chintaguntla created HBASE-19384:
---

 Summary: Results returned by preAppend hook in a coprocessor are 
replaced with null from other coprocessor even on bypass
 Key: HBASE-19384
 URL: https://issues.apache.org/jira/browse/HBASE-19384
 Project: HBase
  Issue Type: Bug
  Components: Coprocessors
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
 Fix For: 3.0.0, 2.0.0-beta-1


Phoenix adds multiple coprocessors to a table, and one of them implements 
preAppend and preIncrement and bypasses the operations by returning results. But 
the other coprocessors, which have no such implementation, return null, so the 
results returned by the previous coprocessor are overridden by null and the 
default implementations of the append and increment operations always run. This 
was not the case with old versions, where bypass worked fine.
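
A hedged sketch of the scenario, written against the 1.x-style RegionObserver API 
(hook signatures differ in 2.0). The computeAppendResult() helper is a placeholder 
assumption; the point is only that a non-null result returned with bypass() from 
one coprocessor can later be clobbered by another coprocessor that returns null.

{code:java}
import java.io.IOException;
import org.apache.hadoop.hbase.client.Append;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;

public class BypassingAppendObserver extends BaseRegionObserver {
  @Override
  public Result preAppend(ObserverContext<RegionCoprocessorEnvironment> ctx,
      Append append) throws IOException {
    // Compute the append result ourselves and skip the default append path.
    Result precomputed = computeAppendResult(append);
    ctx.bypass();
    // If another coprocessor registered after this one also implements preAppend
    // and returns null, that null can replace this result - the bug reported here.
    return precomputed;
  }

  private Result computeAppendResult(Append append) {
    return Result.EMPTY_RESULT;   // placeholder; a real hook would build a Result
  }
}
{code}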



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19381) TestGlobalThrottler doesn't make progress (branch-1.4)

2017-11-29 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272156#comment-16272156
 ] 

stack commented on HBASE-19381:
---

+1

> TestGlobalThrottler doesn't make progress (branch-1.4)
> --
>
> Key: HBASE-19381
> URL: https://issues.apache.org/jira/browse/HBASE-19381
> Project: HBase
>  Issue Type: Bug
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 1.4.0
>
> Attachments: HBASE-19381-branch-1.patch
>
>
> After a while the test prints the following until it times out:
> 2017-11-30 00:48:34,925 INFO  [main] regionserver.TestGlobalThrottler(165): 
> Waiting all logs pushed to slave. Expected 50 , actual 0



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-19383) [1.2] java.lang.AssertionError: expected:<2> but was:<1> at org.apache.hadoop.hbase.TestChoreService.testTriggerNowFailsWhenNotScheduled(TestChoreService.java:707)

2017-11-29 Thread stack (JIRA)
stack created HBASE-19383:
-

 Summary: [1.2] java.lang.AssertionError: expected:<2> but was:<1>  
at 
org.apache.hadoop.hbase.TestChoreService.testTriggerNowFailsWhenNotScheduled(TestChoreService.java:707)
 Key: HBASE-19383
 URL: https://issues.apache.org/jira/browse/HBASE-19383
 Project: HBase
  Issue Type: Bug
Reporter: stack


A test that is based on timers that asserts hard numbers about how many times 
something is called. I'm just going to remove it. It killed my 1.2 nightly test 
run.
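
A generic illustration (not the ChoreService code) of why such timer-driven 
assertions flake, and a common alternative: wait on a latch for "at least N" runs 
with a generous timeout instead of asserting an exact count after a fixed sleep.

{code:java}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class TimerAssertionSketch {
  public static void main(String[] args) throws Exception {
    ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
    CountDownLatch atLeastTwoRuns = new CountDownLatch(2);
    timer.scheduleAtFixedRate(atLeastTwoRuns::countDown, 0, 100,
        TimeUnit.MILLISECONDS);

    // Flaky style: sleep a fixed time, then assert an exact invocation count.
    // Sturdier style: block until the condition is met, bounded by a timeout.
    boolean ranAtLeastTwice = atLeastTwoRuns.await(10, TimeUnit.SECONDS);
    timer.shutdownNow();
    System.out.println("chore ran at least twice: " + ranAtLeastTwice);
  }
}
{code}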



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19379) TestEndToEndSplitTransaction fails with NPE

2017-11-29 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272152#comment-16272152
 ] 

stack commented on HBASE-19379:
---

LGTM

> TestEndToEndSplitTransaction fails with NPE
> ---
>
> Key: HBASE-19379
> URL: https://issues.apache.org/jira/browse/HBASE-19379
> Project: HBase
>  Issue Type: Bug
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Trivial
> Fix For: 1.4.0
>
> Attachments: HBASE-19379-branch-1.patch, HBASE-19379-branch-1.patch
>
>
> TestEndToEndSplitTransaction
> ---
> Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 44.71 s <<< 
> FAILURE! - in 
> org.apache.hadoop.hbase.regionserver.TestEndToEndSplitTransaction
> testFromClientSideWhileSplitting(org.apache.hadoop.hbase.regionserver.TestEndToEndSplitTransaction)
>   Time elapsed: 18.913 s  <<< ERROR!
> java.lang.NullPointerException



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19376) Fix more binary compatibility problems with branch-1.4 / branch-1

2017-11-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272144#comment-16272144
 ] 

Hudson commented on HBASE-19376:


FAILURE: Integrated in Jenkins build HBase-1.5 #174 (See 
[https://builds.apache.org/job/HBase-1.5/174/])
HBASE-19376 Fix more binary compatibility problems with branch-1.4 / (apurtell: 
rev 4c413e0c50777e1d0cbe72f8f081da96063913c0)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/ClusterStatusPublisher.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/snapshot/SnapshotManager.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/FilterList.java
* (edit) hbase-client/src/main/java/org/apache/hadoop/hbase/ClusterStatus.java
* (edit) hbase-client/src/main/java/org/apache/hadoop/hbase/client/Result.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/TableInputFormatBase.java


> Fix more binary compatibility problems with branch-1.4 / branch-1
> -
>
> Key: HBASE-19376
> URL: https://issues.apache.org/jira/browse/HBASE-19376
> Project: HBase
>  Issue Type: Bug
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Trivial
> Fix For: 1.4.0
>
> Attachments: HBASE-19376-branch-1.
>
>
> A few minor ones. 
> Trivial fixes. Compatibility constructors. Some methods removed without 
> deprecation. All involving interfaces tagged Public.
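
A generic, hypothetical example of the "compatibility constructor" idea mentioned 
above: keep the old signature delegating to the new one so code compiled against 
earlier releases keeps linking. The class name and parameters are made up for 
illustration and are not the actual changes in the patch.

{code:java}
public class PublisherExample {
  private final String publisherClass;
  private final int periodMillis;

  // New constructor introduced in the current release.
  public PublisherExample(String publisherClass, int periodMillis) {
    this.publisherClass = publisherClass;
    this.periodMillis = periodMillis;
  }

  // Compatibility constructor preserving the old signature; deprecated rather
  // than removed so existing callers stay binary compatible.
  @Deprecated
  public PublisherExample(String publisherClass) {
    this(publisherClass, 10_000);
  }
}
{code}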



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19378) Backport HBASE-19252 "Move the transform logic of FilterList into transformCell() method to avoid extra ref to question cell"

2017-11-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272139#comment-16272139
 ] 

Hadoop QA commented on HBASE-19378:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
26s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-1.4 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
22s{color} | {color:green} branch-1.4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} branch-1.4 passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} branch-1.4 passed with JDK v1.7.0_161 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
35s{color} | {color:green} branch-1.4 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
49s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
58s{color} | {color:red} hbase-client in branch-1.4 has 1 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
57s{color} | {color:red} hbase-server in branch-1.4 has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} branch-1.4 passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} branch-1.4 passed with JDK v1.7.0_161 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed with JDK v1.7.0_161 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} hbase-client: The patch generated 0 new + 124 
unchanged - 2 fixed = 124 total (was 126) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 8s{color} | {color:green} The patch hbase-server passed checkstyle {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
39s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 39s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 
2.7.4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed with JDK v1.7.0_161 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} 

[jira] [Comment Edited] (HBASE-19344) improve asyncWAL by using Independent thread for netty #IO in FanOutOneBlockAsyncDFSOutput

2017-11-29 Thread Chance Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272066#comment-16272066
 ] 

Chance Li edited comment on HBASE-19344 at 11/30/17 3:34 AM:
-

These tests are based on SSD.
And the result is what we expect.



was (Author: chancelq):
These tests are based on SSD.
And the result is what we expect.
!HBASE-19344-branch.ycsb.png!

> improve asyncWAL by using Independent thread for netty #IO in 
> FanOutOneBlockAsyncDFSOutput 
> ---
>
> Key: HBASE-19344
> URL: https://issues.apache.org/jira/browse/HBASE-19344
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0-beta-1
>Reporter: Chance Li
>Assignee: Chance Li
> Fix For: 2.0.0
>
> Attachments: HBASE-19344-branch-ycsb-1.png, 
> HBASE-19344-branch.ycsb.png, HBASE-19344-branch.ycsb.png, 
> HBASE-19344-branch2.patch, HBASE-19344-branch2.patch.2.POC, 
> wal-1-test-result.png, wal-8-test-result.png, 
> ycsb_result_apache20_async_wal.pdf
>
>
> The logic now is that the netty #IO thread and asyncWal's thread are the same 
> one.
> Improvement proposal:
> 1, Split into two.
> 2, All multiWal share the netty #IO thread pool. 
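
A plain-Netty sketch of the proposal quoted above: one shared event loop group 
carries the netty #IO work for every WAL writer, while the WAL's own append/sync 
logic runs on a separate executor. FanOutOneBlockAsyncDFSOutput's real wiring is 
internal to HBase; the class, thread names, and pool sizes below are assumptions.

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;

public class SharedWalEventLoops {
  // One netty #IO pool shared by every (multi)WAL writer in the process.
  private static final EventLoopGroup SHARED_IO_GROUP =
      new NioEventLoopGroup(4, r -> new Thread(r, "asyncwal-netty-io"));

  // Separate executor for the WAL's own work (building entries, completing
  // syncs), so it no longer runs on the netty #IO threads.
  private static final ExecutorService WAL_EXECUTOR =
      Executors.newSingleThreadExecutor(r -> new Thread(r, "asyncwal-consumer"));

  public static EventLoopGroup ioGroup() {
    return SHARED_IO_GROUP;
  }

  public static ExecutorService walExecutor() {
    return WAL_EXECUTOR;
  }
}
{code}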



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Issue Comment Deleted] (HBASE-19344) improve asyncWAL by using Independent thread for netty #IO in FanOutOneBlockAsyncDFSOutput

2017-11-29 Thread Chance Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chance Li updated HBASE-19344:
--
Comment: was deleted

(was: !HBASE-19344-branch-ycsb-1.png!)

> improve asyncWAL by using Independent thread for netty #IO in 
> FanOutOneBlockAsyncDFSOutput 
> ---
>
> Key: HBASE-19344
> URL: https://issues.apache.org/jira/browse/HBASE-19344
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0-beta-1
>Reporter: Chance Li
>Assignee: Chance Li
> Fix For: 2.0.0
>
> Attachments: HBASE-19344-branch-ycsb-1.png, 
> HBASE-19344-branch.ycsb.png, HBASE-19344-branch.ycsb.png, 
> HBASE-19344-branch2.patch, HBASE-19344-branch2.patch.2.POC, 
> wal-1-test-result.png, wal-8-test-result.png, 
> ycsb_result_apache20_async_wal.pdf
>
>
> The logic now is that the netty #IO thread and asyncWal's thread are the same 
> one.
> Improvement proposal:
> 1, Split into two.
> 2, All multiWal share the netty #IO thread pool. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HBASE-19344) improve asyncWAL by using Independent thread for netty #IO in FanOutOneBlockAsyncDFSOutput

2017-11-29 Thread Chance Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272066#comment-16272066
 ] 

Chance Li edited comment on HBASE-19344 at 11/30/17 3:32 AM:
-

These tests are based on SSD.
And the result is what we expect.
!HBASE-19344-branch.ycsb.png!


was (Author: chancelq):
These tests are based on SSD.
And the result is what we expect.
!HBASE-19344-branch.ycsb.png!

> improve asyncWAL by using Independent thread for netty #IO in 
> FanOutOneBlockAsyncDFSOutput 
> ---
>
> Key: HBASE-19344
> URL: https://issues.apache.org/jira/browse/HBASE-19344
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0-beta-1
>Reporter: Chance Li
>Assignee: Chance Li
> Fix For: 2.0.0
>
> Attachments: HBASE-19344-branch-ycsb-1.png, 
> HBASE-19344-branch.ycsb.png, HBASE-19344-branch.ycsb.png, 
> HBASE-19344-branch2.patch, HBASE-19344-branch2.patch.2.POC, 
> wal-1-test-result.png, wal-8-test-result.png, 
> ycsb_result_apache20_async_wal.pdf
>
>
> The logic now is that the netty #IO thread and asyncWal's thread are the same 
> one.
> Improvement proposal:
> 1, Split into two.
> 2, All multiWal share the netty #IO thread pool. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19344) improve asyncWAL by using Independent thread for netty #IO in FanOutOneBlockAsyncDFSOutput

2017-11-29 Thread Chance Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272108#comment-16272108
 ] 

Chance Li commented on HBASE-19344:
---

!HBASE-19344-branch-ycsb-1.png!

> improve asyncWAL by using Independent thread for netty #IO in 
> FanOutOneBlockAsyncDFSOutput 
> ---
>
> Key: HBASE-19344
> URL: https://issues.apache.org/jira/browse/HBASE-19344
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0-beta-1
>Reporter: Chance Li
>Assignee: Chance Li
> Fix For: 2.0.0
>
> Attachments: HBASE-19344-branch-ycsb-1.png, 
> HBASE-19344-branch.ycsb.png, HBASE-19344-branch.ycsb.png, 
> HBASE-19344-branch2.patch, HBASE-19344-branch2.patch.2.POC, 
> wal-1-test-result.png, wal-8-test-result.png, 
> ycsb_result_apache20_async_wal.pdf
>
>
> The logic now is that the netty #IO thread and asyncWal's thread are the same 
> one.
> Improvement proposal:
> 1, Split into two.
> 2, All multiWal share the netty #IO thread pool. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17852) Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental backup)

2017-11-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272106#comment-16272106
 ] 

Hadoop QA commented on HBASE-17852:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  2m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 19 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
41s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
17s{color} | {color:red} hbase-backup: The patch generated 4 new + 179 
unchanged - 18 fixed = 183 total (was 197) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 670 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m 
16s{color} | {color:red} The patch 384 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
17s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
54m 25s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
16s{color} | {color:green} hbase-backup in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
28s{color} | {color:green} hbase-it in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 86m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-17852 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12899932/HBASE-17852-v9.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux b272d49628c9 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Updated] (HBASE-19344) improve asyncWAL by using Independent thread for netty #IO in FanOutOneBlockAsyncDFSOutput

2017-11-29 Thread Chance Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chance Li updated HBASE-19344:
--
Attachment: HBASE-19344-branch-ycsb-1.png

> improve asyncWAL by using Independent thread for netty #IO in 
> FanOutOneBlockAsyncDFSOutput 
> ---
>
> Key: HBASE-19344
> URL: https://issues.apache.org/jira/browse/HBASE-19344
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0-beta-1
>Reporter: Chance Li
>Assignee: Chance Li
> Fix For: 2.0.0
>
> Attachments: HBASE-19344-branch-ycsb-1.png, 
> HBASE-19344-branch.ycsb.png, HBASE-19344-branch.ycsb.png, 
> HBASE-19344-branch2.patch, HBASE-19344-branch2.patch.2.POC, 
> wal-1-test-result.png, wal-8-test-result.png, 
> ycsb_result_apache20_async_wal.pdf
>
>
> The logic now is that the netty #IO thread and asyncWal's thread are the same 
> one.
> Improvement proposal:
> 1, Split into two.
> 2, All multiWal share the netty #IO thread pool. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Issue Comment Deleted] (HBASE-19344) improve asyncWAL by using Independent thread for netty #IO in FanOutOneBlockAsyncDFSOutput

2017-11-29 Thread Chance Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chance Li updated HBASE-19344:
--
Comment: was deleted

(was: !HBASE-19344-branch.ycsb.png!)

> improve asyncWAL by using Independent thread for netty #IO in 
> FanOutOneBlockAsyncDFSOutput 
> ---
>
> Key: HBASE-19344
> URL: https://issues.apache.org/jira/browse/HBASE-19344
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0-beta-1
>Reporter: Chance Li
>Assignee: Chance Li
> Fix For: 2.0.0
>
> Attachments: HBASE-19344-branch.ycsb.png, 
> HBASE-19344-branch.ycsb.png, HBASE-19344-branch2.patch, 
> HBASE-19344-branch2.patch.2.POC, wal-1-test-result.png, 
> wal-8-test-result.png, ycsb_result_apache20_async_wal.pdf
>
>
> The logic now is that the netty #IO thread and asyncWal's thread are the same 
> one.
> Improvement proposal:
> 1, Split into two.
> 2, All multiWal share the netty #IO thread pool. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19344) improve asyncWAL by using Independent thread for netty #IO in FanOutOneBlockAsyncDFSOutput

2017-11-29 Thread Chance Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chance Li updated HBASE-19344:
--
Attachment: HBASE-19344-branch.ycsb.png

> improve asyncWAL by using Independent thread for netty #IO in 
> FanOutOneBlockAsyncDFSOutput 
> ---
>
> Key: HBASE-19344
> URL: https://issues.apache.org/jira/browse/HBASE-19344
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0-beta-1
>Reporter: Chance Li
>Assignee: Chance Li
> Fix For: 2.0.0
>
> Attachments: HBASE-19344-branch.ycsb.png, 
> HBASE-19344-branch.ycsb.png, HBASE-19344-branch2.patch, 
> HBASE-19344-branch2.patch.2.POC, wal-1-test-result.png, 
> wal-8-test-result.png, ycsb_result_apache20_async_wal.pdf
>
>
> The logic now is that the netty #IO thread and asyncWal's thread are the same 
> one.
> Improvement proposal:
> 1, Split into two.
> 2, All multiWal share the netty #IO thread pool. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19344) improve asyncWAL by using Independent thread for netty #IO in FanOutOneBlockAsyncDFSOutput

2017-11-29 Thread Chance Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272104#comment-16272104
 ] 

Chance Li commented on HBASE-19344:
---

!HBASE-19344-branch.ycsb.png!

> improve asyncWAL by using Independent thread for netty #IO in 
> FanOutOneBlockAsyncDFSOutput 
> ---
>
> Key: HBASE-19344
> URL: https://issues.apache.org/jira/browse/HBASE-19344
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0-beta-1
>Reporter: Chance Li
>Assignee: Chance Li
> Fix For: 2.0.0
>
> Attachments: HBASE-19344-branch.ycsb.png, 
> HBASE-19344-branch.ycsb.png, HBASE-19344-branch2.patch, 
> HBASE-19344-branch2.patch.2.POC, wal-1-test-result.png, 
> wal-8-test-result.png, ycsb_result_apache20_async_wal.pdf
>
>
> The logic now is that the netty #IO thread and asyncWal's thread are the same 
> one.
> Improvement proposal:
> 1, Split into two.
> 2, All multiWal share the netty #IO thread pool. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19379) TestEndToEndSplitTransaction fails with NPE

2017-11-29 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272099#comment-16272099
 ] 

Andrew Purtell commented on HBASE-19379:


Pushed to branch-1.4 and branch-1

> TestEndToEndSplitTransaction fails with NPE
> ---
>
> Key: HBASE-19379
> URL: https://issues.apache.org/jira/browse/HBASE-19379
> Project: HBase
>  Issue Type: Bug
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Trivial
> Fix For: 1.4.0
>
> Attachments: HBASE-19379-branch-1.patch, HBASE-19379-branch-1.patch
>
>
> TestEndToEndSplitTransaction
> ---
> Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 44.71 s <<< 
> FAILURE! - in 
> org.apache.hadoop.hbase.regionserver.TestEndToEndSplitTransaction
> testFromClientSideWhileSplitting(org.apache.hadoop.hbase.regionserver.TestEndToEndSplitTransaction)
>   Time elapsed: 18.913 s  <<< ERROR!
> java.lang.NullPointerException



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (HBASE-19379) TestEndToEndSplitTransaction fails with NPE

2017-11-29 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-19379.

Resolution: Fixed

Just fix the NPE in compareTo and the related problem with getting a hashCode 
when the serverName field is null.
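
A hedged illustration of the null-safe pattern described above; the field names 
are simplified assumptions and this is not the committed patch to HRegionLocation.

{code:java}
import java.util.Objects;

class LocationLike implements Comparable<LocationLike> {
  private final String serverName;   // may be null for an unassigned location
  private final long seqNum;

  LocationLike(String serverName, long seqNum) {
    this.serverName = serverName;
    this.seqNum = seqNum;
  }

  @Override
  public int compareTo(LocationLike other) {
    // Order null serverNames first instead of dereferencing them (the old code
    // called serverName.compareTo(...) and threw an NPE).
    if (serverName == null) {
      return other.serverName == null ? Long.compare(seqNum, other.seqNum) : -1;
    }
    if (other.serverName == null) {
      return 1;
    }
    int cmp = serverName.compareTo(other.serverName);
    return cmp != 0 ? cmp : Long.compare(seqNum, other.seqNum);
  }

  @Override
  public int hashCode() {
    // Objects.hashCode tolerates a null serverName.
    return Objects.hashCode(serverName) * 31 + Long.hashCode(seqNum);
  }

  @Override
  public boolean equals(Object o) {
    if (!(o instanceof LocationLike)) {
      return false;
    }
    LocationLike other = (LocationLike) o;
    return Objects.equals(serverName, other.serverName) && seqNum == other.seqNum;
  }
}
{code}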

> TestEndToEndSplitTransaction fails with NPE
> ---
>
> Key: HBASE-19379
> URL: https://issues.apache.org/jira/browse/HBASE-19379
> Project: HBase
>  Issue Type: Bug
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Trivial
> Fix For: 1.4.0
>
> Attachments: HBASE-19379-branch-1.patch, HBASE-19379-branch-1.patch
>
>
> TestEndToEndSplitTransaction
> ---
> Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 44.71 s <<< 
> FAILURE! - in 
> org.apache.hadoop.hbase.regionserver.TestEndToEndSplitTransaction
> testFromClientSideWhileSplitting(org.apache.hadoop.hbase.regionserver.TestEndToEndSplitTransaction)
>   Time elapsed: 18.913 s  <<< ERROR!
> java.lang.NullPointerException



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19379) TestEndToEndSplitTransaction fails with NPE

2017-11-29 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-19379:
---
Attachment: HBASE-19379-branch-1.patch

> TestEndToEndSplitTransaction fails with NPE
> ---
>
> Key: HBASE-19379
> URL: https://issues.apache.org/jira/browse/HBASE-19379
> Project: HBase
>  Issue Type: Bug
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Trivial
> Fix For: 1.4.0
>
> Attachments: HBASE-19379-branch-1.patch, HBASE-19379-branch-1.patch
>
>
> TestEndToEndSplitTransaction
> ---
> Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 44.71 s <<< 
> FAILURE! - in 
> org.apache.hadoop.hbase.regionserver.TestEndToEndSplitTransaction
> testFromClientSideWhileSplitting(org.apache.hadoop.hbase.regionserver.TestEndToEndSplitTransaction)
>   Time elapsed: 18.913 s  <<< ERROR!
> java.lang.NullPointerException



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19367) Refactoring in RegionStates, and RSProcedureDispatcher

2017-11-29 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272095#comment-16272095
 ] 

Appy commented on HBASE-19367:
--

v4 addresses the test failure.

> Refactoring in RegionStates, and RSProcedureDispatcher
> --
>
> Key: HBASE-19367
> URL: https://issues.apache.org/jira/browse/HBASE-19367
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Attachments: HBASE-19367.master.001.patch, 
> HBASE-19367.master.002.patch, HBASE-19367.master.003.patch, 
> HBASE-19367.master.004.patch
>
>
> While working on a bug fix, I was in these parts for the first time, trying to 
> understand the new AM and make sense of things. Did a few improvements on the 
> way.
> - Adding javadoc comments
> - Bug: ServerStateNode#regions is a HashSet but there's no synchronization to 
> prevent concurrent addRegion/removeRegion. Let's use a concurrent set instead 
> (see the sketch below).
> - Use getRegionsInTransitionCount() directly instead of 
> getRegionsInTransition().size(), because the latter copies everything into a 
> new array - what a waste for just the size.
> - There's mixed use of getRegionNode and getRegionStateNode for the same 
> return type - RegionStateNode. Changing everything to getRegionStateNode. 
> Similarly, rename other *RegionNode() fns to *RegionStateNode().
> - RegionStateNode#transitionState()'s return value is useless since it always 
> returns its first param.
> - Other minor improvements
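
A small sketch of two of the points above (the concurrent region set and the 
cheap count accessor); ServerStateNode's real fields and surrounding code are 
simplified assumptions here.

{code:java}
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

class ServerStateNodeSketch {
  // A concurrent set so addRegion/removeRegion calls from different threads are
  // safe without external synchronization (the original used a plain HashSet).
  private final Set<String> regions = ConcurrentHashMap.newKeySet();

  void addRegion(String encodedName) {
    regions.add(encodedName);
  }

  void removeRegion(String encodedName) {
    regions.remove(encodedName);
  }

  // Expose the count directly rather than forcing callers to materialize a copy
  // of the whole collection just to ask for its size.
  int getRegionsInTransitionCount() {
    return regions.size();
  }
}
{code}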



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19367) Refactoring in RegionStates, and RSProcedureDispatcher

2017-11-29 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-19367:
-
Attachment: HBASE-19367.master.004.patch

> Refactoring in RegionStates, and RSProcedureDispatcher
> --
>
> Key: HBASE-19367
> URL: https://issues.apache.org/jira/browse/HBASE-19367
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>Assignee: Appy
>Priority: Minor
> Attachments: HBASE-19367.master.001.patch, 
> HBASE-19367.master.002.patch, HBASE-19367.master.003.patch, 
> HBASE-19367.master.004.patch
>
>
> While working on a bug fix, I was in these parts for the first time, trying to 
> understand the new AM and make sense of things. Did a few improvements on the 
> way.
> - Adding javadoc comments
> - Bug: ServerStateNode#regions is a HashSet but there's no synchronization to 
> prevent concurrent addRegion/removeRegion. Let's use a concurrent set instead.
> - Use getRegionsInTransitionCount() directly instead of 
> getRegionsInTransition().size(), because the latter copies everything into a 
> new array - what a waste for just the size.
> - There's mixed use of getRegionNode and getRegionStateNode for the same 
> return type - RegionStateNode. Changing everything to getRegionStateNode. 
> Similarly, rename other *RegionNode() fns to *RegionStateNode().
> - RegionStateNode#transitionState()'s return value is useless since it always 
> returns its first param.
> - Other minor improvements



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Reopened] (HBASE-19379) TestEndToEndSplitTransaction fails with NPE

2017-11-29 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell reopened HBASE-19379:


Whoops, have to do this differently. Reverted for now. 

> TestEndToEndSplitTransaction fails with NPE
> ---
>
> Key: HBASE-19379
> URL: https://issues.apache.org/jira/browse/HBASE-19379
> Project: HBase
>  Issue Type: Bug
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Trivial
> Fix For: 1.4.0
>
> Attachments: HBASE-19379-branch-1.patch
>
>
> TestEndToEndSplitTransaction
> ---
> Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 44.71 s <<< 
> FAILURE! - in 
> org.apache.hadoop.hbase.regionserver.TestEndToEndSplitTransaction
> testFromClientSideWhileSplitting(org.apache.hadoop.hbase.regionserver.TestEndToEndSplitTransaction)
>   Time elapsed: 18.913 s  <<< ERROR!
> java.lang.NullPointerException



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HBASE-17852) Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental backup)

2017-11-29 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16271978#comment-16271978
 ] 

Vladimir Rodionov edited comment on HBASE-17852 at 11/30/17 3:04 AM:
-

{quote}
You don't answer the question
{quote}

What question? What does "corrupt" mean? Why do I need to restore meta table? I 
am afraid, I can't add anything else to my answers above.

{quote}
Don't follow. An operator sets up a cron job. Works great for a few days. Then 
it stops. Operator needs to figure that he has to run a repair. Operator sets 
up two cron jobs? Or cron probes first for breakage...
{quote}

Stops means fails. If the cron job fails, the operator will need to intervene, 
read logs and manuals, and figure out that repair is required. Not a big deal, 
imo. We clearly log a message that the repair tool has to be run. But for lazy 
operators I will add an auto-repair mode of execution (see the above ticket).

I would like to add that repair will be required very rarely. *Any server-side 
backup failures are taken care of automatically* - no need to run repair tool. 
*Backup will be marked as failed in a backup meta table*. Only if client (cron 
in this case) exits abruptly, only then repair will be required. 

Stack, can you be more technical and specific in your questions? The patch is at 
no. 8 already. Do you have any code-related questions or comments? If yes, then 
RB is the right place to put them.


was (Author: vrodionov):
{quote}
You don't answer the question
{quote}

What question? What does "corrupt" mean? Why do I need to restore meta table? I 
am afraid, I can't add anything else to my answers above.

{quote}
Don't follow. An operator sets up a cron job. Works great for a few days. Then 
it stops. Operator needs to figure that he has to run a repair. Operator sets 
up two cron jobs? Or cron probes first for breakage...
{quote}

Stops means fails. If the cron job fails, the operator will need to intervene, 
read logs and manuals, and figure out that repair is required. Not a big deal, 
imo. We clearly log a message that the repair tool has to be run. But for lazy 
operators I will add an auto-repair mode of execution (see the above ticket).

I would like to add that repair will be required very rarely. *Any server-side 
backup failures are taken care automatically* - no need to run repair tool. 
*Backup will be marked as failed in a backup meta table*. Only if client (cron 
in this case) exits abruptly, only then repair will be required. 

Stack, can you be more technical and specific in your questions? The patch is at 
no. 8 already. Do you have any code-related questions or comments? If yes, then 
RB is the right place to put them.

> Add Fault tolerance to HBASE-14417 (Support bulk loaded files in incremental 
> backup)
> 
>
> Key: HBASE-17852
> URL: https://issues.apache.org/jira/browse/HBASE-17852
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-17852-v1.patch, HBASE-17852-v2.patch, 
> HBASE-17852-v3.patch, HBASE-17852-v4.patch, HBASE-17852-v5.patch, 
> HBASE-17852-v6.patch, HBASE-17852-v7.patch, HBASE-17852-v8.patch, 
> HBASE-17852-v9.patch
>
>
> The rollback-via-snapshot design approach implemented in this ticket (a sketch 
> follows the description below):
> # Before a backup create/delete/merge starts, we take a snapshot of the backup 
> meta-table (backup system table). This procedure is lightweight because the meta 
> table is small and usually fits in a single region.
> # When an operation fails on the server side, we handle the failure by cleaning 
> up partial data in the backup destination, followed by restoring the backup 
> meta-table from the snapshot.
> # When an operation fails on the client side (abnormal termination, for example), 
> the next time the user tries a create/merge/delete they will see an error message 
> that the system is in an inconsistent state and a repair is required; they will 
> need to run the backup repair tool.
> # To avoid multiple writers to the backup system table (the backup client and 
> BackupObservers) we introduce a small table ONLY to keep the listing of bulk 
> loaded files. All backup observers will work only with this new table. The 
> reason: in case of a failure during backup create/delete/merge/restore, when the 
> system performs an automatic rollback, some data written by backup observers 
> during the failed operation may be lost. This is what we try to avoid.
> # The second table keeps only bulk load related references. We do not care about 
> the consistency of this table, because bulk load is an idempotent operation and 
> can be repeated after a failure. Partially written data in the second table does 
> not affect the BackupHFileCleaner plugin, because this data (the list of bulk 
> loaded files) corresponds to a
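
A minimal sketch of the rollback-via-snapshot idea described above, using the public 
Admin snapshot API. The table name, snapshot name, and wrapper class are hypothetical 
and not taken from the patch; cleanup of partial backup data is left out:

{code}
import java.io.IOException;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public class BackupMetaRollbackSketch {
  // Hypothetical names; the real backup system table is managed by the backup code itself.
  private static final TableName BACKUP_META = TableName.valueOf("backup", "system");
  private static final String SNAPSHOT_NAME = "backup_meta_before_op";

  /** Snapshot the small backup meta table, run the operation, restore on server-side failure. */
  public void runWithRollback(Admin admin, Runnable backupOperation) throws IOException {
    // 1. Snapshot the meta table; cheap because the table usually fits in one region.
    admin.snapshot(SNAPSHOT_NAME, BACKUP_META);
    try {
      // 2. Run the backup create/delete/merge operation.
      backupOperation.run();
      admin.deleteSnapshot(SNAPSHOT_NAME);
    } catch (RuntimeException serverSideFailure) {
      // 3. Server-side failure: clean up partial data in the backup destination (omitted),
      //    then restore the meta table to its pre-operation state.
      admin.disableTable(BACKUP_META);
      admin.restoreSnapshot(SNAPSHOT_NAME);
      admin.enableTable(BACKUP_META);
      admin.deleteSnapshot(SNAPSHOT_NAME);
      throw serverSideFailure;
    }
  }
}
{code}

The point is the same as in the description: the snapshot is cheap because the meta 
table is tiny, and restoring it returns the system to the pre-operation state.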

[jira] [Updated] (HBASE-15970) Move Replication Peers into an HBase table too

2017-11-29 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-15970:
-
Attachment: HBASE-15970.v2.patch

> Move Replication Peers into an HBase table too
> --
>
> Key: HBASE-15970
> URL: https://issues.apache.org/jira/browse/HBASE-15970
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Reporter: Joseph
>Assignee: Zheng Hu
> Attachments: HBASE-15970.v1.patch, HBASE-15970.v2.patch
>
>
> Currently ReplicationQueuesHBaseTableImpl relies on ReplicationStateZkImpl to 
> track information about the available replication peers (used during 
> claimQueues). We can also move this into an HBase table instead of relying on 
> ZooKeeper.
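
A minimal sketch, under assumed names, of what table-backed peer tracking could look 
like: each peer becomes a row in a system table, so listing peers is a Scan rather 
than a ZooKeeper children lookup. The table name, column family, and qualifier below 
are hypothetical and may differ from the actual patch:

{code}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class TableBasedPeerStorageSketch {
  // Hypothetical layout; the real patch may use a different table/family design.
  private static final TableName PEERS_TABLE = TableName.valueOf("hbase", "replication");
  private static final byte[] CF = Bytes.toBytes("peer");
  private static final byte[] STATE_QUALIFIER = Bytes.toBytes("state");

  private final Connection connection;

  public TableBasedPeerStorageSketch(Connection connection) {
    this.connection = connection;
  }

  /** Register a peer by writing one row keyed by the peer id. */
  public void addPeer(String peerId, boolean enabled) throws IOException {
    try (Table table = connection.getTable(PEERS_TABLE)) {
      Put put = new Put(Bytes.toBytes(peerId));
      put.addColumn(CF, STATE_QUALIFIER, Bytes.toBytes(enabled ? "ENABLED" : "DISABLED"));
      table.put(put);
    }
  }

  /** List all peers by scanning the table instead of reading ZooKeeper znodes. */
  public List<String> listPeerIds() throws IOException {
    List<String> peerIds = new ArrayList<>();
    try (Table table = connection.getTable(PEERS_TABLE);
         ResultScanner scanner = table.getScanner(new Scan())) {
      for (Result result : scanner) {
        peerIds.add(Bytes.toString(result.getRow()));
      }
    }
    return peerIds;
  }
}
{code}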



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HBASE-19361) Rename the KeeperException to ReplicationException in ReplicationQueuesClient for abstracting

2017-11-29 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu reassigned HBASE-19361:


Assignee: Zheng Hu

> Rename the KeeperException to ReplicationException in ReplicationQueuesClient 
>  for abstracting
> --
>
> Key: HBASE-19361
> URL: https://issues.apache.org/jira/browse/HBASE-19361
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>
> There are still some interfaces in ReplicationQueuesClient which throw a 
> KeeperException. That makes no sense for a ReplicationQueuesClient implemented 
> by an HBase table.
> {code}
>   List<String> getListOfReplicators() throws KeeperException;
> {code}
> Filing this issue to address it.
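
A minimal sketch of the renaming, assuming hypothetical class names: the interface 
declares the storage-agnostic ReplicationException, and the ZooKeeper-backed 
implementation translates KeeperException internally, so a table-backed implementation 
never has to reference ZooKeeper types:

{code}
import java.util.List;

import org.apache.hadoop.hbase.replication.ReplicationException;
import org.apache.zookeeper.KeeperException;

// Storage-agnostic client interface: no ZooKeeper types in the signature.
interface ReplicationQueuesClientSketch {
  List<String> getListOfReplicators() throws ReplicationException;
}

// ZooKeeper-backed implementation wraps the ZK-specific failure.
class ZkReplicationQueuesClientSketch implements ReplicationQueuesClientSketch {
  @Override
  public List<String> getListOfReplicators() throws ReplicationException {
    try {
      return fetchReplicatorZNodes();
    } catch (KeeperException e) {
      // Translate the ZooKeeper-specific exception into the generic one.
      throw new ReplicationException("Failed to list replicators", e);
    }
  }

  // Placeholder for the real ZooKeeper listing logic.
  private List<String> fetchReplicatorZNodes() throws KeeperException {
    throw KeeperException.create(KeeperException.Code.CONNECTIONLOSS);
  }
}
{code}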



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19379) TestEndToEndSplitTransaction fails with NPE

2017-11-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272080#comment-16272080
 ] 

Hudson commented on HBASE-19379:


FAILURE: Integrated in Jenkins build HBase-1.4 #1033 (See 
[https://builds.apache.org/job/HBase-1.4/1033/])
HBASE-19379 TestEndToEndSplitTransaction fails with NPE (apurtell: rev 
39da0d44e0c286d8a4129daf9ed079722b8a8c0c)
* (edit) hbase-client/src/main/java/org/apache/hadoop/hbase/HRegionLocation.java


> TestEndToEndSplitTransaction fails with NPE
> ---
>
> Key: HBASE-19379
> URL: https://issues.apache.org/jira/browse/HBASE-19379
> Project: HBase
>  Issue Type: Bug
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Trivial
> Fix For: 1.4.0
>
> Attachments: HBASE-19379-branch-1.patch
>
>
> TestEndToEndSplitTransaction
> ---
> Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 44.71 s <<< 
> FAILURE! - in 
> org.apache.hadoop.hbase.regionserver.TestEndToEndSplitTransaction
> testFromClientSideWhileSplitting(org.apache.hadoop.hbase.regionserver.TestEndToEndSplitTransaction)
>   Time elapsed: 18.913 s  <<< ERROR!
> java.lang.NullPointerException



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19381) TestGlobalThrottler doesn't make progress (branch-1.4)

2017-11-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272081#comment-16272081
 ] 

Hudson commented on HBASE-19381:


FAILURE: Integrated in Jenkins build HBase-1.4 #1033 (See 
[https://builds.apache.org/job/HBase-1.4/1033/])
HBASE-19381 TestGlobalThrottler doesn't make progress (apurtell: rev 
ea8123e81cb4b0e2d89fb672b5bfe67557852ec0)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java
* (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/HConstants.java
* (delete) 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestGlobalThrottler.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSource.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationSource.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceWALReaderThread.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/replication/regionserver/TestWALEntryStream.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java


> TestGlobalThrottler doesn't make progress (branch-1.4)
> --
>
> Key: HBASE-19381
> URL: https://issues.apache.org/jira/browse/HBASE-19381
> Project: HBase
>  Issue Type: Bug
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 1.4.0
>
> Attachments: HBASE-19381-branch-1.patch
>
>
> After a while the test prints the following until it times out:
> 2017-11-30 00:48:34,925 INFO  [main] regionserver.TestGlobalThrottler(165): 
> Waiting all logs pushed to slave. Expected 50 , actual 0



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19163) "Maximum lock count exceeded" from region server's batch processing

2017-11-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272078#comment-16272078
 ] 

Hadoop QA commented on HBASE-19163:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
38s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
44s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 8s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  6m 
 3s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
 3s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
54m 15s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 97m 58s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}171m 46s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hbase.client.TestSnapshotFromClientWithRegionReplicas |
|   | hadoop.hbase.client.TestSnapshotFromClient |
|   | hadoop.hbase.client.TestSnapshotMetadata |
|   | hadoop.hbase.client.TestMobSnapshotFromClient |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:eee3b01 |
| JIRA Issue | HBASE-19163 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12899907/HBASE-19163.master.009.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux f906b9d5d8d0 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 
15:49:21 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / abb535eef6 |
| maven | version: Apache Maven 3.5.2 
(138edd61fd100ec658bfa2d307c43b76940a5d7d; 2017-10-18T07:58:13Z) |
| Default Java | 1.8.0_151 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10124/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/10124/testReport/ |
| modules | C: hbase-server U: 

[jira] [Updated] (HBASE-19378) Backport HBASE-19252 "Move the transform logic of FilterList into transformCell() method to avoid extra ref to question cell"

2017-11-29 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-19378:
-
Status: Patch Available  (was: Open)

> Backport HBASE-19252 "Move the transform logic of FilterList into 
> transformCell() method to avoid extra ref to question cell"
> -
>
> Key: HBASE-19378
> URL: https://issues.apache.org/jira/browse/HBASE-19378
> Project: HBase
>  Issue Type: Sub-task
>  Components: Filters
>Reporter: stack
>Assignee: Zheng Hu
>Priority: Critical
> Fix For: 3.0.0, 1.4.1, 2.0.0-beta-1
>
> Attachments: HBASE-19252-branch-1.4.v1.patch
>
>
> Backport the parent to branch-1. It is taking a while to get it in, so I created 
> a new subtask.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19378) Backport HBASE-19252 "Move the transform logic of FilterList into transformCell() method to avoid extra ref to question cell"

2017-11-29 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-19378:
-
Attachment: HBASE-19252-branch-1.4.v1.patch

Run Hadoop-QA 

> Backport HBASE-19252 "Move the transform logic of FilterList into 
> transformCell() method to avoid extra ref to question cell"
> -
>
> Key: HBASE-19378
> URL: https://issues.apache.org/jira/browse/HBASE-19378
> Project: HBase
>  Issue Type: Sub-task
>  Components: Filters
>Reporter: stack
>Assignee: Zheng Hu
>Priority: Critical
> Fix For: 3.0.0, 1.4.1, 2.0.0-beta-1
>
> Attachments: HBASE-19252-branch-1.4.v1.patch
>
>
> Backport the parent to branch-1. It is taking a while to get it in, so I created 
> a new subtask.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19344) improve asyncWAL by using Independent thread for netty #IO in FanOutOneBlockAsyncDFSOutput

2017-11-29 Thread Chance Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272066#comment-16272066
 ] 

Chance Li commented on HBASE-19344:
---

These tests were run on SSD, and the results are what we expected.
!HBASE-19344-branch.ycsb.png!

> improve asyncWAL by using Independent thread for netty #IO in 
> FanOutOneBlockAsyncDFSOutput 
> ---
>
> Key: HBASE-19344
> URL: https://issues.apache.org/jira/browse/HBASE-19344
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0-beta-1
>Reporter: Chance Li
>Assignee: Chance Li
> Fix For: 2.0.0
>
> Attachments: HBASE-19344-branch.ycsb.png, HBASE-19344-branch2.patch, 
> HBASE-19344-branch2.patch.2.POC, wal-1-test-result.png, 
> wal-8-test-result.png, ycsb_result_apache20_async_wal.pdf
>
>
> The logic now is that the netty #IO thread and the asyncWAL thread are the same 
> one.
> Improvement proposal (a rough sketch follows below):
> 1. Split them into two.
> 2. Have all multiWAL instances share the netty #IO thread pool.
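
A minimal sketch, with hypothetical names and pool sizes, of the proposed split: one 
shared Netty event loop group serves the DFS output I/O for all WAL instances, while 
each WAL keeps its own single-threaded consumer for appends:

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;

public class SharedWalEventLoopSketch {
  // One pool of netty I/O threads shared by every multiWAL instance.
  private static final EventLoopGroup SHARED_IO_GROUP =
      new NioEventLoopGroup(4, r -> new Thread(r, "AsyncFSWAL-Netty-IO"));

  // Each WAL gets its own single-threaded consumer, separate from netty I/O.
  private final ExecutorService walConsumeExecutor =
      Executors.newSingleThreadExecutor(r -> new Thread(r, "AsyncFSWAL-Consumer"));

  public EventLoopGroup ioGroup() {
    return SHARED_IO_GROUP;
  }

  public ExecutorService consumeExecutor() {
    return walConsumeExecutor;
  }
}
{code}

Sharing one I/O group caps the number of netty threads regardless of how many WALs 
are configured, while keeping WAL consumption off the I/O path.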



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19344) improve asyncWAL by using Independent thread for netty #IO in FanOutOneBlockAsyncDFSOutput

2017-11-29 Thread Chance Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chance Li updated HBASE-19344:
--
Attachment: HBASE-19344-branch.ycsb.png

> improve asyncWAL by using Independent thread for netty #IO in 
> FanOutOneBlockAsyncDFSOutput 
> ---
>
> Key: HBASE-19344
> URL: https://issues.apache.org/jira/browse/HBASE-19344
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0-beta-1
>Reporter: Chance Li
>Assignee: Chance Li
> Fix For: 2.0.0
>
> Attachments: HBASE-19344-branch.ycsb.png, HBASE-19344-branch2.patch, 
> HBASE-19344-branch2.patch.2.POC, wal-1-test-result.png, 
> wal-8-test-result.png, ycsb_result_apache20_async_wal.pdf
>
>
> The logic now is that the netty #IO thread and the asyncWAL thread are the same 
> one.
> Improvement proposal:
> 1. Split them into two.
> 2. Have all multiWAL instances share the netty #IO thread pool.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18233) We shouldn't wait for readlock in doMiniBatchMutation in case of deadlock

2017-11-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272061#comment-16272061
 ] 

Hadoop QA commented on HBASE-18233:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} branch-1.4 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
17s{color} | {color:green} branch-1.4 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} branch-1.4 passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
46s{color} | {color:green} branch-1.4 passed with JDK v1.7.0_161 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
36s{color} | {color:green} branch-1.4 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
42s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
53s{color} | {color:red} hbase-server in branch-1.4 has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} branch-1.4 passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} branch-1.4 passed with JDK v1.7.0_161 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed with JDK v1.7.0_161 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
34s{color} | {color:green} hbase-server: The patch generated 0 new + 334 
unchanged - 1 fixed = 334 total (was 335) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
20s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
43m  0s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 
2.7.4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed with JDK v1.8.0_152 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed with JDK v1.7.0_161 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}133m 31s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}213m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestEndToEndSplitTransaction |
|   | hadoop.hbase.TestPartialResultsFromClientSide |
|   | hadoop.hbase.replication.regionserver.TestGlobalThrottler |
|   | 

[jira] [Commented] (HBASE-18895) Implement changes eliminated during HTrace update

2017-11-29 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272060#comment-16272060
 ] 

Mike Drob commented on HBASE-18895:
---

I think we should clean up now and worry about getting it to work properly in 2.1, 
maybe? We should add a release note saying that tracing is incomplete.

> Implement changes eliminated during HTrace update
> -
>
> Key: HBASE-18895
> URL: https://issues.apache.org/jira/browse/HBASE-18895
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0-alpha-3
>Reporter: Tamas Penzes
>Priority: Minor
>
> HTrace 4 is not fully compatible with HTrace 3.
> Some functionality was changed substantially and could not be migrated.
> In this ticket it should be handled or removed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

