[jira] [Updated] (HBASE-21593) closing flag should be set to false in HRegion

2018-12-13 Thread xiaolerzheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaolerzheng updated HBASE-21593:
-
Attachment: image-2018-12-13-16-04-51-892.png

> closing flag should be set to false in HRegion
> --
>
> Key: HBASE-21593
> URL: https://issues.apache.org/jira/browse/HBASE-21593
> Project: HBase
>  Issue Type: Bug
>Reporter: xiaolerzheng
>Priority: Minor
> Attachments: image-2018-12-13-16-04-51-892.png
>
>
> in HRegion.java
>  
>  
> 1429 // block waiting for the lock for closing
> 1430 lock.writeLock().lock();
> 1431 this.closing.set(true);
> 1432 status.setStatus("Disabling writes for close");
>  
> 
>  
>  
> 1557 } finally {
>        {color:red}// should add here:{color}
>        {color:red}this.closing.set(false);{color}
> 1558   lock.writeLock().unlock();
> 1559 }
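
A minimal, self-contained model of the pattern being proposed above. It is not the actual HRegion code; the class and method names below are illustrative only, and whether the flag should be reset unconditionally is exactly what this issue is asking.

{code:java}
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative model only (not HRegion itself): take the close lock, mark the
// region as closing, and -- per the proposal quoted above -- clear the flag
// again in the finally block so an abandoned close does not leave the region
// permanently rejecting writes.
public class ClosingFlagModel {
  private final AtomicBoolean closing = new AtomicBoolean(false);
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

  public void tryClose(Runnable doClose) {
    // block waiting for the lock for closing
    lock.writeLock().lock();
    closing.set(true);           // from here on, new writes would be rejected
    try {
      doClose.run();             // flush memstores, close stores, etc.
    } finally {
      closing.set(false);        // the proposed addition
      lock.writeLock().unlock();
    }
  }
}
{code}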



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21593) closing flag should be set to false in HRegion

2018-12-13 Thread xiaolerzheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaolerzheng updated HBASE-21593:
-
Attachment: image-2018-12-13-16-05-36-404.png

> closing flag should be set to false in HRegion
> --
>
> Key: HBASE-21593
> URL: https://issues.apache.org/jira/browse/HBASE-21593
> Project: HBase
>  Issue Type: Bug
>Reporter: xiaolerzheng
>Priority: Minor
> Attachments: image-2018-12-13-16-04-51-892.png, 
> image-2018-12-13-16-05-09-246.png, image-2018-12-13-16-05-36-404.png
>
>
> in HRegion.java
>  
>  
> 1429 // block waiting for the lock for closing
> 1430 lock.writeLock().lock();
> 1431 this.closing.set(true);
> 1432 status.setStatus("Disabling writes for close");
>  
> 
>  
>  
> 1557 } finally {
>        {color:red}// should add here:{color}
>        {color:red}this.closing.set(false);{color}
> 1558   lock.writeLock().unlock();
> 1559 }



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21593) closing flag should be set to false in HRegion

2018-12-13 Thread xiaolerzheng (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719887#comment-16719887
 ] 

xiaolerzheng commented on HBASE-21593:
--

I hit a strange problem:

When I export rows written in [start_time, end_time) using the command:

$hbase org.apache.hadoop.hbase.mapreduce.Export RAW.LOAN_APP_FLOW 
hdfs://10.1.170.1:8020/tmp/export/loan_app_flow 1 154385280 154393920

no result was returned.
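
For context, a hedged sketch of what the Export job effectively asks the region servers to do: a full-table scan restricted to the half-open time range [startTime, endTime) in epoch milliseconds. The table name is taken from the command above; the timestamp values below are made-up placeholders, not the reporter's real ones.

{code:java}
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

public class TimeRangeScanSketch {
  public static void main(String[] args) throws Exception {
    // Placeholder epoch-millisecond timestamps; Export interprets its
    // starttime/endtime arguments the same way: [start, end).
    long startTime = 1544659200000L;
    long endTime   = 1544745600000L;
    try (Connection conn = ConnectionFactory.createConnection();
         Table table = conn.getTable(TableName.valueOf("RAW.LOAN_APP_FLOW"))) {
      Scan scan = new Scan();
      scan.setTimeRange(startTime, endTime); // only cells in [start, end) are returned
      try (ResultScanner rs = table.getScanner(scan)) {
        for (Result r : rs) {
          System.out.println(r);
        }
      }
    }
  }
}
{code}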

 

 

 

 

18/12/12 14:54:23 INFO mapreduce.Job: Counters: 20
 File System Counters
 FILE: Number of bytes read=584860554
 FILE: Number of bytes written=592091874
 FILE: Number of read operations=0
 FILE: Number of large read operations=0
 FILE: Number of write operations=0
 HDFS: Number of bytes read=0
 HDFS: Number of bytes written=606
 HDFS: Number of read operations=27
 HDFS: Number of large read operations=0
 HDFS: Number of write operations=12
 Map-Reduce Framework
 {color:red}Map input records=0{color}
 {color:red}Map output records=0{color}
 Input split bytes=827
 Spilled Records=0
 Failed Shuffles=0
 Merged Map outputs=0
 GC time elapsed (ms)=7
 Total committed heap usage (bytes)=4102029312
 File Input Format Counters
 Bytes Read=0
 File Output Format Counter

 

 

 

debug log:

 

!image-2018-12-13-16-04-51-892.png!

 

!image-2018-12-13-16-05-09-246.png!

!image-2018-12-13-16-05-36-404.png!

 

 

 

 

 

> closing flag should be set to false in HRegion
> --
>
> Key: HBASE-21593
> URL: https://issues.apache.org/jira/browse/HBASE-21593
> Project: HBase
>  Issue Type: Bug
>Reporter: xiaolerzheng
>Priority: Minor
> Attachments: image-2018-12-13-16-04-51-892.png, 
> image-2018-12-13-16-05-09-246.png, image-2018-12-13-16-05-36-404.png
>
>
> in HRegion.java
>  
>  
> 1429 // block waiting for the lock for closing
> 1430 lock.writeLock().lock();
> 1431 this.closing.set(true);
> 1432 status.setStatus("Disabling writes for close");
>  
> 
>  
>  
> 1557 } finally {
>        {color:red}// should add here:{color}
>        {color:red}this.closing.set(false);{color}
> 1558   lock.writeLock().unlock();
> 1559 }



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21593) closing flag should be set to false in HRegion

2018-12-13 Thread xiaolerzheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaolerzheng updated HBASE-21593:
-
Attachment: image-2018-12-13-16-05-09-246.png

> closing flag should be set to false in HRegion
> --
>
> Key: HBASE-21593
> URL: https://issues.apache.org/jira/browse/HBASE-21593
> Project: HBase
>  Issue Type: Bug
>Reporter: xiaolerzheng
>Priority: Minor
> Attachments: image-2018-12-13-16-04-51-892.png, 
> image-2018-12-13-16-05-09-246.png, image-2018-12-13-16-05-36-404.png
>
>
> in HRegion.java
>  
>  
> 1429 // block waiting for the lock for closing
> 1430 lock.writeLock().lock();
> 1431 this.closing.set(true);
> 1432 status.setStatus("Disabling writes for close");
>  
> 
>  
>  
> 1557 } finally {
>        {color:red}// should add here:{color}
>        {color:red}this.closing.set(false);{color}
> 1558   lock.writeLock().unlock();
> 1559 }



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21514) Refactor CacheConfig

2018-12-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719899#comment-16719899
 ] 

Hadoop QA commented on HBASE-21514:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 37 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
47s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
29s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 9s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
19s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 59s{color} 
| {color:red} hbase-server generated 4 new + 184 unchanged - 4 fixed = 188 
total (was 188) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
28s{color} | {color:red} hbase-server: The patch generated 3 new + 1040 
unchanged - 61 fixed = 1043 total (was 1101) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
15s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
9m 33s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 25m 50s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 50s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestRSStatusServlet |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21514 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12951615/HBASE-21514.master.012.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux c796a8fbd73a 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / f32d261843 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC3 |
| javac | 
https://builds.apache.org/job/PreCommit-HBASE-Build/15270/artifact/patchprocess/diff-compile-javac-hbase-server.txt
 |
| checkstyle | 
https://builds.apache.org/job/Pre

[jira] [Created] (HBASE-21594) Requested block is out of range when reading hfile

2018-12-13 Thread ChenKai (JIRA)
ChenKai created HBASE-21594:
---

 Summary: Requested block is out of range when reading hfile
 Key: HBASE-21594
 URL: https://issues.apache.org/jira/browse/HBASE-21594
 Project: HBase
  Issue Type: Bug
  Components: HFile
Affects Versions: 0.98.10
Reporter: ChenKai


My HFiles are generated by Spark HBaseBulkLoad. When I read a few of them (or 
when HBase compacts them), I hit the following exception.

 
{code:java}
Exception in thread "main" java.io.IOException: Requested block is out of 
range: 77329641, lastDataBlockOffset: 77329641, 
trailer.getLoadOnOpenDataOffset: 77329641
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:396)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.readNextDataBlock(HFileReaderV2.java:734)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.isNextBlock(HFileReaderV2.java:859)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.positionForNextBlock(HFileReaderV2.java:854)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2._next(HFileReaderV2.java:871)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:891)
at io.patamon.hbase.test.read.TestHFileRead.main(TestHFileRead.java:49)
{code}
It looks like `lastDataBlockOffset` is equal to 
`trailer.getLoadOnOpenDataOffset`. Could anyone help me? Thanks very much.
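
For anyone trying to reproduce this, a minimal sketch of reading an HFile directly, assuming the branch-1-era reader API. The path below is a placeholder, and this is my own sketch rather than the reporter's TestHFileRead.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.io.hfile.CacheConfig;
import org.apache.hadoop.hbase.io.hfile.HFile;
import org.apache.hadoop.hbase.io.hfile.HFileScanner;
import org.apache.hadoop.hbase.util.Bytes;

public class HFileReadSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    FileSystem fs = FileSystem.get(conf);
    Path hfilePath = new Path("/tmp/bulkload/cf/placeholder-hfile"); // placeholder path
    HFile.Reader reader = HFile.createReader(fs, hfilePath, new CacheConfig(conf), conf);
    try {
      HFileScanner scanner = reader.getScanner(false, false); // no block cache, no pread
      if (scanner.seekTo()) {                                 // position at the first cell
        do {
          Cell cell = scanner.getKeyValue();
          System.out.println(Bytes.toStringBinary(CellUtil.cloneRow(cell)));
        } while (scanner.next()); // the reported IOException surfaces from next(),
      }                           // per the stack trace above
    } finally {
      reader.close();
    }
  }
}
{code}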

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21594) Requested block is out of range when reading hfile

2018-12-13 Thread Zheng Hu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719927#comment-16719927
 ] 

Zheng Hu commented on HBASE-21594:
--

What are your HBase client and server versions? Thanks.

> Requested block is out of range when reading hfile
> --
>
> Key: HBASE-21594
> URL: https://issues.apache.org/jira/browse/HBASE-21594
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.98.10
>Reporter: ChenKai
>Priority: Major
>
> My HFiles are generated by Spark HBaseBulkLoad. When I read a few of them (or 
> when HBase compacts them), I hit the following exception.
>  
> {code:java}
> Exception in thread "main" java.io.IOException: Requested block is out of 
> range: 77329641, lastDataBlockOffset: 77329641, 
> trailer.getLoadOnOpenDataOffset: 77329641
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:396)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.readNextDataBlock(HFileReaderV2.java:734)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.isNextBlock(HFileReaderV2.java:859)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.positionForNextBlock(HFileReaderV2.java:854)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2._next(HFileReaderV2.java:871)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:891)
> at io.patamon.hbase.test.read.TestHFileRead.main(TestHFileRead.java:49)
> {code}
> It looks like `lastDataBlockOffset` is equal to 
> `trailer.getLoadOnOpenDataOffset`. Could anyone help me? Thanks very much.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21520) TestMultiColumnScanner cost long time when using ROWCOL bloom type

2018-12-13 Thread Zheng Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-21520:
-
Attachment: rowcol.txt

> TestMultiColumnScanner cost long time when using ROWCOL bloom type
> --
>
> Key: HBASE-21520
> URL: https://issues.apache.org/jira/browse/HBASE-21520
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: rowcol.txt
>
>
> TestMultiColumnScanner times out easily; see HBASE-21517.
> On my local machine, with the parameters { Compression.Algorithm.NONE, 
> BloomType.ROW, false } it took about 5 seconds, but with 
> { Compression.Algorithm.NONE, BloomType.ROWCOL, false } it took about 45 
> seconds, which means ROWCOL costs much more time than ROW.
> We need to find out what's wrong with this unit test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21520) TestMultiColumnScanner cost long time when using ROWCOL bloom type

2018-12-13 Thread Zheng Hu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719936#comment-16719936
 ] 

Zheng Hu commented on HBASE-21520:
--

Added some logging to show the time cost in the ROWCOL case. See the attached 
file: in the ROWCOL case a full-table scan costs about 40~80 ms, but in the 
ROW case it costs about 8 ms.
At first I wondered why ROWCOL slows the scan down, since I assumed the bloom 
filter is only needed when opening a scanner. After adding some logging, I 
found that the following stack also calculates the bloom filter value. I guess 
this is the problem.
{code}
===> useBloom in requestSeek: true
java.lang.Thread.getStackTrace(Thread.java:1552)
org.apache.hadoop.hbase.regionserver.StoreFileScanner.requestSeek(StoreFileScanner.java:398)
org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:318)
org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:275)
org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:989)
org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:980)
org.apache.hadoop.hbase.regionserver.StoreScanner.seekOrSkipToNextColumn(StoreScanner.java:749)
org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:637)
org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:153)
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:6597)
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:6761)
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:6534)
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:6511)
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:6498)
{code}

> TestMultiColumnScanner cost long time when using ROWCOL bloom type
> --
>
> Key: HBASE-21520
> URL: https://issues.apache.org/jira/browse/HBASE-21520
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: rowcol.txt
>
>
> TestMultiColumnScanner times out easily; see HBASE-21517.
> On my local machine, with the parameters { Compression.Algorithm.NONE, 
> BloomType.ROW, false } it took about 5 seconds, but with 
> { Compression.Algorithm.NONE, BloomType.ROWCOL, false } it took about 45 
> seconds, which means ROWCOL costs much more time than ROW.
> We need to find out what's wrong with this unit test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21594) Requested block is out of range when reading hfile

2018-12-13 Thread ChenKai (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719937#comment-16719937
 ] 

ChenKai commented on HBASE-21594:
-

[~openinx] The HBase version is 0.98.13-hadoop2 and the Phoenix version is 
4.7.0-HBase-0.98. I wrote a unit test against hbase branch-1.2 and hit the 
same problem. Thanks.

> Requested block is out of range when reading hfile
> --
>
> Key: HBASE-21594
> URL: https://issues.apache.org/jira/browse/HBASE-21594
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.98.10
>Reporter: ChenKai
>Priority: Major
>
> My HFiles are generated by Spark HBaseBulkLoad. When I read a few of them (or 
> when HBase compacts them), I hit the following exception.
>  
> {code:java}
> Exception in thread "main" java.io.IOException: Requested block is out of 
> range: 77329641, lastDataBlockOffset: 77329641, 
> trailer.getLoadOnOpenDataOffset: 77329641
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:396)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.readNextDataBlock(HFileReaderV2.java:734)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.isNextBlock(HFileReaderV2.java:859)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.positionForNextBlock(HFileReaderV2.java:854)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2._next(HFileReaderV2.java:871)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:891)
> at io.patamon.hbase.test.read.TestHFileRead.main(TestHFileRead.java:49)
> {code}
> It looks like `lastDataBlockOffset` is equal to 
> `trailer.getLoadOnOpenDataOffset`. Could anyone help me? Thanks very much.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21276) hbase scan operation cannot scan some rowkey

2018-12-13 Thread xiaolerzheng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21276?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xiaolerzheng updated HBASE-21276:
-
Attachment: image-2018-12-13-17-05-40-743.png

> hbase scan operation  cannot scan some rowkey
> -
>
> Key: HBASE-21276
> URL: https://issues.apache.org/jira/browse/HBASE-21276
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.3
>Reporter: xiaolerzheng
>Priority: Major
> Attachments: image-2018-12-13-17-05-40-743.png
>
>
> The table ZEUS.LOAN_CONSUMER_CONTACT has a row that we can read with "get",
> but we cannot find it with a scan MR job, nor with "scan" using a timerange,
> nor with "scan" using a timerange and startrow.
> hbase(main):001:0> get 'ZEUS.LOAN_CONSUMER_CONTACT', 
> '72520206##139'
>  COLUMN CELL
>  0:BINLOG_TIME timestamp=1537254107291, value=\x80\x00\x01e\xEB|%I
>  0:CREATE_TIME timestamp=1537254107291, value=2018-09-18 15:01:46
>  0:LAST_MOD_TIME timestamp=1537254107291, value=2018-09-18 15:01:46
>  0:PHONE_NO timestamp=1537254107291, value=139
>  0:SOURCE timestamp=1537254107291, value=\x80\x00\x00\x01
>  0:UID timestamp=1537254107291, value=\x80\x00\x00\x00\x04R\x92\x0E
>  0:USER_NAME timestamp=1537254107291, value=
>  0:_0 timestamp=1537254107291, value=x
>  8 row(s) in 0.2280 seconds
>  
>  
> hbase(main):002:0> scan 'ZEUS.LOAN_CONSUMER_CONTACT',
> { TIMERANGE => [1537254107291, 1537254107293]}
> ROW COLUMN+CELL
>  0 row(s) in 1410.9010 seconds
> hbase(main):003:0> scan 'ZEUS.LOAN_CONSUMER_CONTACT',
> { TIMERANGE => [1537254107280, 1537254107294]}
> ROW COLUMN+CELL
>  0 row(s) in 1410.5480 seconds
>  
> hbase(main):004:0> scan 'ZEUS.LOAN_CONSUMER_CONTACT',
> { STARTROW => '72520206##139', TIMERANGE => [1537254107280, 
> 1537254107294]}
> ROW COLUMN+CELL
>  72520206##139 column=0:BINLOG_TIME, timestamp=1537254107291, 
> value=\x80\x00\x01e\xEB|%I
>  72520206##139 column=0:CREATE_TIME, timestamp=1537254107291, 
> value=2018-09-18 15:01:46
>  72520206##139 column=0:LAST_MOD_TIME, timestamp=1537254107291, 
> value=2018-09-18 15:01:46
>  72520206##139 column=0:PHONE_NO, timestamp=1537254107291, 
> value=139
>  72520206##139 column=0:SOURCE, timestamp=1537254107291, 
> value=\x80\x00\x00\x01
>  72520206##139 column=0:UID, timestamp=1537254107291, 
> value=\x80\x00\x00\x00\x04R\x92\x0E
>  72520206##139 column=0:USER_NAME, timestamp=1537254107291, 
> value=
>  72520206##139 column=0:_0, timestamp=1537254107291, value=x



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21276) hbase scan operation cannot scan some rowkey

2018-12-13 Thread xiaolerzheng (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21276?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719948#comment-16719948
 ] 

xiaolerzheng commented on HBASE-21276:
--

!http://git.caimi-inc.com/StanLee/data-platform/uploads/c060c73ad61e8bb82541c657847eb34e/image.png!

 

 

!http://git.caimi-inc.com/StanLee/data-platform/uploads/9e5f1c52ffa0eade328ae217cc197178/image.png!

closing is just set to true; it should be set back to false in 

!http://git.caimi-inc.com/StanLee/data-platform/uploads/744f969d70b34492e7894484fdc177ce/image.png!

like this:

!image-2018-12-13-17-05-40-743.png!

 

 

> hbase scan operation  cannot scan some rowkey
> -
>
> Key: HBASE-21276
> URL: https://issues.apache.org/jira/browse/HBASE-21276
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.3
>Reporter: xiaolerzheng
>Priority: Major
> Attachments: image-2018-12-13-17-05-40-743.png
>
>
> The table ZEUS.LOAN_CONSUMER_CONTACT has a row that we can read with "get",
> but we cannot find it with a scan MR job, nor with "scan" using a timerange,
> nor with "scan" using a timerange and startrow.
> hbase(main):001:0> get 'ZEUS.LOAN_CONSUMER_CONTACT', 
> '72520206##139'
>  COLUMN CELL
>  0:BINLOG_TIME timestamp=1537254107291, value=\x80\x00\x01e\xEB|%I
>  0:CREATE_TIME timestamp=1537254107291, value=2018-09-18 15:01:46
>  0:LAST_MOD_TIME timestamp=1537254107291, value=2018-09-18 15:01:46
>  0:PHONE_NO timestamp=1537254107291, value=139
>  0:SOURCE timestamp=1537254107291, value=\x80\x00\x00\x01
>  0:UID timestamp=1537254107291, value=\x80\x00\x00\x00\x04R\x92\x0E
>  0:USER_NAME timestamp=1537254107291, value=
>  0:_0 timestamp=1537254107291, value=x
>  8 row(s) in 0.2280 seconds
>  
>  
> hbase(main):002:0> scan 'ZEUS.LOAN_CONSUMER_CONTACT',
> { TIMERANGE => [1537254107291, 1537254107293]}
> ROW COLUMN+CELL
>  0 row(s) in 1410.9010 seconds
> hbase(main):003:0> scan 'ZEUS.LOAN_CONSUMER_CONTACT',
> { TIMERANGE => [1537254107280, 1537254107294]}
> ROW COLUMN+CELL
>  0 row(s) in 1410.5480 seconds
>  
> hbase(main):004:0> scan 'ZEUS.LOAN_CONSUMER_CONTACT',
> { STARTROW => '72520206##139', TIMERANGE => [1537254107280, 
> 1537254107294]}
> ROW COLUMN+CELL
>  72520206##139 column=0:BINLOG_TIME, timestamp=1537254107291, 
> value=\x80\x00\x01e\xEB|%I
>  72520206##139 column=0:CREATE_TIME, timestamp=1537254107291, 
> value=2018-09-18 15:01:46
>  72520206##139 column=0:LAST_MOD_TIME, timestamp=1537254107291, 
> value=2018-09-18 15:01:46
>  72520206##139 column=0:PHONE_NO, timestamp=1537254107291, 
> value=139
>  72520206##139 column=0:SOURCE, timestamp=1537254107291, 
> value=\x80\x00\x00\x01
>  72520206##139 column=0:UID, timestamp=1537254107291, 
> value=\x80\x00\x00\x00\x04R\x92\x0E
>  72520206##139 column=0:USER_NAME, timestamp=1537254107291, 
> value=
>  72520206##139 column=0:_0, timestamp=1537254107291, value=x



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-21594) Requested block is out of range when reading hfile

2018-12-13 Thread ChenKai (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719937#comment-16719937
 ] 

ChenKai edited comment on HBASE-21594 at 12/13/18 9:04 AM:
---

[~openinx] The HBase version is 0.98.13-hadoop2 and the Phoenix version is 
4.7.0-HBase-0.98. I wrote a unit test against hbase branch-1.2 and hit the 
same problem; I'm not sure whether the HFile itself is corrupt. Thanks.


was (Author: 514793...@qq.com):
[~openinx] The HBase version is 0.98.13-hadoop2 and the Phoenix version is 
4.7.0-HBase-0.98. I wrote a unit test against hbase branch-1.2 and hit the 
same problem. Thanks.

> Requested block is out of range when reading hfile
> --
>
> Key: HBASE-21594
> URL: https://issues.apache.org/jira/browse/HBASE-21594
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.98.10
>Reporter: ChenKai
>Priority: Major
>
> My HFiles are generated by Spark HBaseBulkLoad. When I read a few of them (or 
> when HBase compacts them), I hit the following exception.
>  
> {code:java}
> Exception in thread "main" java.io.IOException: Requested block is out of 
> range: 77329641, lastDataBlockOffset: 77329641, 
> trailer.getLoadOnOpenDataOffset: 77329641
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:396)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.readNextDataBlock(HFileReaderV2.java:734)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.isNextBlock(HFileReaderV2.java:859)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.positionForNextBlock(HFileReaderV2.java:854)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2._next(HFileReaderV2.java:871)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:891)
> at io.patamon.hbase.test.read.TestHFileRead.main(TestHFileRead.java:49)
> {code}
> It looks like `lastDataBlockOffset` is equal to 
> `trailer.getLoadOnOpenDataOffset`. Could anyone help me? Thanks very much.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21520) TestMultiColumnScanner cost long time when using ROWCOL bloom type

2018-12-13 Thread Zheng Hu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719956#comment-16719956
 ] 

Zheng Hu commented on HBASE-21520:
--

I commented out the following line and ran the UT again, and it ran much 
faster:
{code}
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileScanner.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileScanner.java
index b5b853a..a5a3006 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileScanner.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileScanner.java
@@ -396,7 +396,7 @@ public class StoreFileScanner implements KeyValueScanner {
     if (useBloom) {
       // check ROWCOL Bloom filter first.
       if (reader.getBloomFilterType() == BloomType.ROWCOL) {
-        haveToSeek = reader.passesGeneralRowColBloomFilter(kv);
+        // haveToSeek = reader.passesGeneralRowColBloomFilter(kv);
       } else if (canOptimizeForNonNullColumn
           && ((PrivateCellUtil.isDeleteFamily(kv)
               || PrivateCellUtil.isDeleteFamilyVersion(kv)))) {
{code}

 !TestMultiColumnScanner.png! 

> TestMultiColumnScanner cost long time when using ROWCOL bloom type
> --
>
> Key: HBASE-21520
> URL: https://issues.apache.org/jira/browse/HBASE-21520
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: TestMultiColumnScanner.png, rowcol.txt
>
>
> TestMultiColumnScanner times out easily; see HBASE-21517.
> On my local machine, with the parameters { Compression.Algorithm.NONE, 
> BloomType.ROW, false } it took about 5 seconds, but with 
> { Compression.Algorithm.NONE, BloomType.ROWCOL, false } it took about 45 
> seconds, which means ROWCOL costs much more time than ROW.
> We need to find out what's wrong with this unit test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21520) TestMultiColumnScanner cost long time when using ROWCOL bloom type

2018-12-13 Thread Zheng Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-21520:
-
Attachment: TestMultiColumnScanner.png

> TestMultiColumnScanner cost long time when using ROWCOL bloom type
> --
>
> Key: HBASE-21520
> URL: https://issues.apache.org/jira/browse/HBASE-21520
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: TestMultiColumnScanner.png, rowcol.txt
>
>
> TestMultiColumnScanner times out easily; see HBASE-21517.
> On my local machine, with the parameters { Compression.Algorithm.NONE, 
> BloomType.ROW, false } it took about 5 seconds, but with 
> { Compression.Algorithm.NONE, BloomType.ROWCOL, false } it took about 45 
> seconds, which means ROWCOL costs much more time than ROW.
> We need to find out what's wrong with this unit test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21520) TestMultiColumnScanner cost long time when using ROWCOL bloom type

2018-12-13 Thread Zheng Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-21520:
-
Attachment: HBASE-21520.v1.patch

> TestMultiColumnScanner cost long time when using ROWCOL bloom type
> --
>
> Key: HBASE-21520
> URL: https://issues.apache.org/jira/browse/HBASE-21520
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-21520.v1.patch, TestMultiColumnScanner.png, 
> rowcol.txt
>
>
> TestMultiColumnScanner times out easily; see HBASE-21517.
> On my local machine, with the parameters { Compression.Algorithm.NONE, 
> BloomType.ROW, false } it took about 5 seconds, but with 
> { Compression.Algorithm.NONE, BloomType.ROWCOL, false } it took about 45 
> seconds, which means ROWCOL costs much more time than ROW.
> We need to find out what's wrong with this unit test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-21593) closing flag should be set to false in HRegion

2018-12-13 Thread xiaolerzheng (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719887#comment-16719887
 ] 

xiaolerzheng edited comment on HBASE-21593 at 12/13/18 9:43 AM:


I hit a strange problem:

When I export rows written in [start_time, end_time) using the command:

$hbase org.apache.hadoop.hbase.mapreduce.Export RAW.LOAN_APP_FLOW 
hdfs://10.1.170.1:8020/tmp/export/loan_app_flow 1 154385280 154393920

no result was returned.

 

 

 

 

18/12/12 14:54:23 INFO mapreduce.Job: Counters: 20
 File System Counters
 FILE: Number of bytes read=584860554
 FILE: Number of bytes written=592091874
 FILE: Number of read operations=0
 FILE: Number of large read operations=0
 FILE: Number of write operations=0
 HDFS: Number of bytes read=0
 HDFS: Number of bytes written=606
 HDFS: Number of read operations=27
 HDFS: Number of large read operations=0
 HDFS: Number of write operations=12
 Map-Reduce Framework
 {color:red}Map input records=0{color}
 {color:red}Map output records=0{color}
 Input split bytes=827
 Spilled Records=0
 Failed Shuffles=0
 Merged Map outputs=0
 GC time elapsed (ms)=7
 Total committed heap usage (bytes)=4102029312
 File Input Format Counters
 Bytes Read=0
 File Output Format Counter

 

 

 

debug log:

 

!image-2018-12-13-16-04-51-892.png!

 

!image-2018-12-13-16-05-09-246.png!

!image-2018-12-13-16-05-36-404.png!

 

This happened while doing ETL from HBase to Hive, at the same time as manual 
region splits were being run for HBase maintenance (splitting large regions 
into smaller ones to improve performance).


was (Author: xiaolerzheng):
I hit a strange problem:

When I export rows written in [start_time, end_time) using the command:

$hbase org.apache.hadoop.hbase.mapreduce.Export RAW.LOAN_APP_FLOW 
hdfs://10.1.170.1:8020/tmp/export/loan_app_flow 1 154385280 154393920

no result was returned.

 

 

 

 

18/12/12 14:54:23 INFO mapreduce.Job: Counters: 20
 File System Counters
 FILE: Number of bytes read=584860554
 FILE: Number of bytes written=592091874
 FILE: Number of read operations=0
 FILE: Number of large read operations=0
 FILE: Number of write operations=0
 HDFS: Number of bytes read=0
 HDFS: Number of bytes written=606
 HDFS: Number of read operations=27
 HDFS: Number of large read operations=0
 HDFS: Number of write operations=12
 Map-Reduce Framework
 {color:red}Map input records=0{color}
 {color:red}Map output records=0{color}
 Input split bytes=827
 Spilled Records=0
 Failed Shuffles=0
 Merged Map outputs=0
 GC time elapsed (ms)=7
 Total committed heap usage (bytes)=4102029312
 File Input Format Counters
 Bytes Read=0
 File Output Format Counter

 

 

 

debug log:

 

!image-2018-12-13-16-04-51-892.png!

 

!image-2018-12-13-16-05-09-246.png!

!image-2018-12-13-16-05-36-404.png!

 

 

 

 

 

> closing flag should be set to false in HRegion
> --
>
> Key: HBASE-21593
> URL: https://issues.apache.org/jira/browse/HBASE-21593
> Project: HBase
>  Issue Type: Bug
>Reporter: xiaolerzheng
>Priority: Minor
> Attachments: image-2018-12-13-16-04-51-892.png, 
> image-2018-12-13-16-05-09-246.png, image-2018-12-13-16-05-36-404.png
>
>
> in HRegion.java
>  
>  
> 1429 // block waiting for the lock for closing
> 1430 lock.writeLock().lock();
> 1431 this.closing.set(true);
> 1432 status.setStatus("Disabling writes for close");
>  
> 
>  
>  
> 1557 } finally {
>        {color:red}// should add here:{color}
>        {color:red}this.closing.set(false);{color}
> 1558   lock.writeLock().unlock();
> 1559 }



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21582) If call HBaseAdmin#snapshotAsync but forget call isSnapshotFinished, then SnapshotHFileCleaner will skip to run every time

2018-12-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719981#comment-16719981
 ] 

Hudson commented on HBASE-21582:


Results for branch branch-1.2
[build #586 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/586/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/586//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/586//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/586//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> If call HBaseAdmin#snapshotAsync but forget call isSnapshotFinished, then 
> SnapshotHFileCleaner will skip to run every time
> --
>
> Key: HBASE-21582
> URL: https://issues.apache.org/jira/browse/HBASE-21582
> Project: HBase
>  Issue Type: Bug
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.2.0, 2.1.2, 1.2.10, 1.4.10, 2.0.5
>
> Attachments: HBASE-21582.branch-1.v3.patch, HBASE-21582.v1.patch, 
> HBASE-21582.v2.patch, HBASE-21582.v3.patch
>
>
> This is because we remove the SnapshotSentinel from snapshotHandlers in 
> SnapshotManager#cleanupSentinels, and cleanupSentinels is only called in the 
> following three cases: 
> 1. SnapshotManager#isSnapshotDone; 
> 2. SnapshotManager#takeSnapshot; 
> 3. SnapshotManager#restoreOrCloneSnapshot
> So if isSnapshotDone is never called and no further snapshot is taken, 
> restored, or cloned, the SnapshotSentinel stays in snapshotHandlers forever. 
> And after HBASE-21387, SnapshotHFileCleaner only checks and cleans 
> unreferenced files when no snapshot is being taken. 
> I found this bug because in our XiaoMi branch-2 we implement a soft-delete 
> feature: when someone deletes a table, the master first creates a snapshot 
> and only then starts the table deletion. The implementation is quite simple; 
> we use the snapshotManager to take the snapshot. 
> {code}
> diff --git 
> a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java 
> b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
> index 8f42e4a..6da6a64 100644
> --- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
> +++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
> @@ -2385,12 +2385,6 @@ public class HMaster extends HRegionServer implements 
> MasterServices {
>protected void run() throws IOException {
>  getMaster().getMasterCoprocessorHost().preDeleteTable(tableName);
>  
> +if (snapshotBeforeDelete) {
> +  LOG.info("Take snaposhot for " + tableName + " before deleting");
> +  snapshotManager
> +  
> .takeSnapshot(SnapshotDescriptionUtils.getSnapshotNameForDeletedTable(tableName));
> +}
> +
>  LOG.info(getClientIdAuditPrefix() + " delete " + tableName);
>  
>  // TODO: We can handle/merge duplicate request
> {code}
> In the master, I found this endless log message after deleting a table: 
> {code}
> org.apache.hadoop.hbase.master.snapshot.SnapshotFileCache: Not checking 
> unreferenced files since snapshot is running, it will skip to clean the 
> HFiles this time
> {code}
> This is because snapshotHandlers is never cleaned up after calling 
> snapshotManager#takeSnapshot. I think the async snapshot may have the same 
> problem. 
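
For reference, a hedged sketch of the client-side pattern the issue title refers to, assuming the 2.x Admin API; the snapshot and table names are placeholders. Polling isSnapshotFinished after snapshotAsync is what gives the master a chance to run cleanupSentinels for the finished handler.

{code:java}
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.SnapshotDescription;

public class AsyncSnapshotPollSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection();
         Admin admin = conn.getAdmin()) {
      // Placeholder snapshot/table names.
      SnapshotDescription snapshot =
          new SnapshotDescription("t1_snapshot", TableName.valueOf("t1"));
      admin.snapshotAsync(snapshot);
      // Poll until the master reports completion instead of firing and forgetting.
      while (!admin.isSnapshotFinished(snapshot)) {
        Thread.sleep(1000);
      }
    }
  }
}
{code}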



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21592) quota.addGetResult(r) throw NPE

2018-12-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719989#comment-16719989
 ] 

Hadoop QA commented on HBASE-21592:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
44s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
56s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
13s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
17s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
9m 20s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}130m 13s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}170m 14s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hbase.master.replication.TestRegisterPeerWorkerWhenRestarting |
|   | hadoop.hbase.regionserver.TestMultiColumnScanner |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21592 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12951612/HBASE-21592.master.0001.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 7d57d5b7247d 4.4.0-139-generic #165~14.04.1-Ubuntu SMP Wed Oct 
31 10:55:11 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / f32d261843 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC3 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/15269/artifact/patc

[jira] [Commented] (HBASE-21520) TestMultiColumnScanner cost long time when using ROWCOL bloom type

2018-12-13 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720001#comment-16720001
 ] 

Duo Zhang commented on HBASE-21520:
---

So what's the problem here?

> TestMultiColumnScanner cost long time when using ROWCOL bloom type
> --
>
> Key: HBASE-21520
> URL: https://issues.apache.org/jira/browse/HBASE-21520
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-21520.v1.patch, TestMultiColumnScanner.png, 
> rowcol.txt
>
>
> TestMultiColumnScanner times out easily; see HBASE-21517.
> On my local machine, with the parameters { Compression.Algorithm.NONE, 
> BloomType.ROW, false } it took about 5 seconds, but with 
> { Compression.Algorithm.NONE, BloomType.ROWCOL, false } it took about 45 
> seconds, which means ROWCOL costs much more time than ROW.
> We need to find out what's wrong with this unit test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21520) TestMultiColumnScanner cost long time when using ROWCOL bloom type

2018-12-13 Thread Zheng Hu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720002#comment-16720002
 ] 

Zheng Hu commented on HBASE-21520:
--

Because we have 10 hfiles here (NUM_FLUSHES=10), and the table holds ~1000 
cells (rows=20, ts=6, qualifiers=8, total=20*6*8 ~ 1000). Each full table 
scan checks the ROWCOL bloom filter 20 (rows) * 8 (columns) * 10 (hfiles) = 
1600 times. If we take the average full table scan cost as 50 ms, then each 
bloom filter calculation costs 50 (ms) / 1600.0 = 0.031 ms ...
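
A quick back-of-the-envelope check of those numbers (illustrative only; the 50 ms figure is the assumed average full-scan cost from the comment above):

{code:java}
public class RowColBloomCostEstimate {
  public static void main(String[] args) {
    int rows = 20, timestamps = 6, qualifiers = 8, hfiles = 10;
    int cells = rows * timestamps * qualifiers;        // 960, i.e. ~1000 cells
    int bloomChecks = rows * qualifiers * hfiles;      // 1600 ROWCOL probes per full scan
    double avgScanMs = 50.0;                           // assumed average full-scan cost
    System.out.printf("cells=%d bloomChecks=%d perCheckMs=%.3f%n",
        cells, bloomChecks, avgScanMs / bloomChecks);  // ~0.031 ms per probe
  }
}
{code}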

> TestMultiColumnScanner cost long time when using ROWCOL bloom type
> --
>
> Key: HBASE-21520
> URL: https://issues.apache.org/jira/browse/HBASE-21520
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-21520.v1.patch, TestMultiColumnScanner.png, 
> rowcol.txt
>
>
> TestMultiColumnScanner times out easily; see HBASE-21517.
> On my local machine, with the parameters { Compression.Algorithm.NONE, 
> BloomType.ROW, false } it took about 5 seconds, but with 
> { Compression.Algorithm.NONE, BloomType.ROWCOL, false } it took about 45 
> seconds, which means ROWCOL costs much more time than ROW.
> We need to find out what's wrong with this unit test.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-21595) Print thread's information and stack traces when RS is aborting forcibly

2018-12-13 Thread Pankaj Kumar (JIRA)
Pankaj Kumar created HBASE-21595:


 Summary: Print thread's information and stack traces when RS is 
aborting forcibly
 Key: HBASE-21595
 URL: https://issues.apache.org/jira/browse/HBASE-21595
 Project: HBase
  Issue Type: Improvement
  Components: regionserver
Affects Versions: 3.0.0, 2.2.0
Reporter: Pankaj Kumar


After HBASE-21325, the RS terminates forcibly on abort timeout.

We should print the thread info before terminating.
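
A hedged sketch of what this could look like; Threads.printThreadInfo is the same utility the master's InitializationMonitor already uses for its zombie-master dump, while the wrapper class and method below are purely illustrative and not the actual patch.

{code:java}
import org.apache.hadoop.hbase.util.Threads;

// Illustrative only: dump all thread stacks before the forced termination that
// HBASE-21325 introduced, so the abort-timeout hang can be analyzed afterwards.
public class AbortTimeoutDump {
  public static void dumpAndTerminate(int exitCode) {
    Threads.printThreadInfo(System.out, "Thread dump before forced RS abort");
    System.out.println("Aborting region server forcibly after abort timeout");
    Runtime.getRuntime().halt(exitCode); // forced termination, skips shutdown hooks
  }
}
{code}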

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21595) Print thread's information and stack traces when RS is aborting forcibly

2018-12-13 Thread Pankaj Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pankaj Kumar updated HBASE-21595:
-
Priority: Major  (was: Minor)

> Print thread's information and stack traces when RS is aborting forcibly
> 
>
> Key: HBASE-21595
> URL: https://issues.apache.org/jira/browse/HBASE-21595
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Pankaj Kumar
>Priority: Major
>
> After HBASE-21325, the RS terminates forcibly on abort timeout.
> We should print the thread info before terminating; it will be useful for 
> analyzing the RS abort-timeout problem.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21595) Print thread's information and stack traces when RS is aborting forcibly

2018-12-13 Thread Pankaj Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pankaj Kumar updated HBASE-21595:
-
Description: 
After HBASE-21325, the RS terminates forcibly on abort timeout.

We should print the thread info before terminating; it will be useful for 
analyzing the RS abort-timeout problem.

 

  was:
After HBASE-21325, the RS terminates forcibly on abort timeout.

We should print the thread info before terminating.

 


> Print thread's information and stack traces when RS is aborting forcibly
> 
>
> Key: HBASE-21595
> URL: https://issues.apache.org/jira/browse/HBASE-21595
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Pankaj Kumar
>Priority: Minor
>
> After HBASE-21325, the RS terminates forcibly on abort timeout.
> We should print the thread info before terminating; it will be useful for 
> analyzing the RS abort-timeout problem.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HBASE-21595) Print thread's information and stack traces when RS is aborting forcibly

2018-12-13 Thread Pankaj Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pankaj Kumar reassigned HBASE-21595:


Assignee: Pankaj Kumar

> Print thread's information and stack traces when RS is aborting forcibly
> 
>
> Key: HBASE-21595
> URL: https://issues.apache.org/jira/browse/HBASE-21595
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
>Priority: Major
>
> After HBASE-21325, the RS terminates forcibly on abort timeout.
> We should print the thread info before terminating; it will be useful for 
> analyzing the RS abort-timeout problem.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21589) TestCleanupMetaWAL fails

2018-12-13 Thread Allan Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720012#comment-16720012
 ] 

Allan Yang commented on HBASE-21589:


[~stack], the output you attached seems like a successful run?

> TestCleanupMetaWAL fails
> 
>
> Key: HBASE-21589
> URL: https://issues.apache.org/jira/browse/HBASE-21589
> Project: HBase
>  Issue Type: Bug
>  Components: test, wal
>Reporter: stack
>Priority: Blocker
> Fix For: 2.1.2, 2.0.4
>
> Attachments: 
> org.apache.hadoop.hbase.regionserver.TestCleanupMetaWAL-output.txt
>
>
> This test fails near all-the-time. Sunk two RCs. Fix. Made it a blocker.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21535) Zombie Master detector is not working

2018-12-13 Thread Pankaj Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720020#comment-16720020
 ] 

Pankaj Kumar commented on HBASE-21535:
--

Stack Sir, QA report is fine, we can go for it now. Thanks :)

> Zombie Master detector is not working
> -
>
> Key: HBASE-21535
> URL: https://issues.apache.org/jira/browse/HBASE-21535
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 3.0.0, 2.2.0, 2.1.1, 2.0.3
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
>Priority: Critical
> Fix For: 3.0.0, 2.2.0
>
> Attachments: HBASE-21535.branch-2.patch, HBASE-21535.branch-2.patch, 
> HBASE-21535.patch
>
>
> We have an InitializationMonitor thread in HMaster which detects a zombie 
> HMaster based on _hbase.master.initializationmonitor.timeout_ and halts if 
> _hbase.master.initializationmonitor.haltontimeout_ is set to _true_.
> After HBASE-19694, the HMaster initialization order was corrected. HMaster is 
> set active after initializing the ZK system trackers, as follows:
> {noformat}
>  status.setStatus("Initializing ZK system trackers");
>  initializeZKBasedSystemTrackers();
>  status.setStatus("Loading last flushed sequence id of regions");
>  try {
>  this.serverManager.loadLastFlushedSequenceIds();
>  } catch (IOException e) {
>  LOG.debug("Failed to load last flushed sequence id of regions"
>  + " from file system", e);
>  }
>  // Set ourselves as active Master now our claim has succeeded up in zk.
>  this.activeMaster = true;
> {noformat}
> But the zombie detector thread is started at the beginning of 
> finishActiveMasterInitialization():
> {noformat}
>  private void finishActiveMasterInitialization(MonitoredTask status) throws 
> IOException,
>  InterruptedException, KeeperException, ReplicationException {
>  Thread zombieDetector = new Thread(new InitializationMonitor(this),
>  "ActiveMasterInitializationMonitor-" + System.currentTimeMillis());
>  zombieDetector.setDaemon(true);
>  zombieDetector.start();
> {noformat}
> When zombieDetector runs, "master.isActiveMaster()" is still false, so it 
> does not wait and cannot detect a zombie master:
> {noformat}
>  @Override
>  public void run() {
>  try {
>  while (!master.isStopped() && master.isActiveMaster()) {
>  Thread.sleep(timeout);
>  if (master.isInitialized()) {
>  LOG.debug("Initialization completed within allotted tolerance. Monitor 
> exiting.");
>  } else {
>  LOG.error("Master failed to complete initialization after " + timeout + "ms. 
> Please"
>  + " consider submitting a bug report including a thread dump of this 
> process.");
>  if (haltOnTimeout) {
>  LOG.error("Zombie Master exiting. Thread dump to stdout");
>  Threads.printThreadInfo(System.out, "Zombie HMaster");
>  System.exit(-1);
>  }
>  }
>  }
>  } catch (InterruptedException ie) {
>  LOG.trace("InitMonitor thread interrupted. Existing.");
>  }
>  }
>  }
> {noformat}
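
The ordering problem is easy to reproduce outside HBase. A minimal standalone model (not HBase code; names are illustrative): the monitor thread is started before the flag it loops on is set, so its while condition is false on the first check and it exits without ever monitoring.

{code:java}
public class MonitorStartRaceModel {
  private static volatile boolean activeMaster = false;
  private static volatile boolean stopped = false;

  public static void main(String[] args) throws Exception {
    Thread monitor = new Thread(() -> {
      // Mirrors the quoted run(): the loop body never executes because
      // activeMaster is still false when the monitor gets here.
      while (!stopped && activeMaster) {
        // would sleep(timeout) and check initialization progress here
      }
      System.out.println("monitor exited, activeMaster=" + activeMaster);
    }, "InitializationMonitor");
    monitor.setDaemon(true);
    monitor.start();       // started at the beginning of initialization ...
    Thread.sleep(100);     // ... long before the main thread flips the flag
    activeMaster = true;   // equivalent of setting this.activeMaster = true later on
    monitor.join();
  }
}
{code}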



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21519) Namespace region is never assigned in a HM failover scenario and HM abort always due to init timeout

2018-12-13 Thread Pankaj Kumar (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720015#comment-16720015
 ] 

Pankaj Kumar commented on HBASE-21519:
--

Ping [~anoop.hbase] [~Apache9] Kindly review this.

> Namespace region is never assigned in a HM failover scenario and HM abort 
> always due to init timeout
> 
>
> Key: HBASE-21519
> URL: https://issues.apache.org/jira/browse/HBASE-21519
> Project: HBase
>  Issue Type: Bug
>  Components: master, wal
>Affects Versions: 2.1.1
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
>Priority: Critical
> Fix For: 2.2.0
>
> Attachments: HBASE-21519.branch-2.patch
>
>
> In our test environment we found that the namespace region is never assigned 
> in an HM failover scenario when the multiwal feature is enabled:
> {noformat}
> 2018-11-28 01:38:28,085 WARN [master/HM-1:16000:becomeActiveMaster] 
> master.HMaster: 
> hbase:namespace,,1543339859614.31f6d3383af09e18e1e81ca02a93de15. is NOT 
> online; state=\{31f6d3383af09e18e1e81ca02a93de15 state=OPEN, 
> ts=1543340156928, server=RS-2,16020,1543339824397}; 
> ServerCrashProcedures=false. Master startup cannot progress, in 
> holding-pattern until region onlined.
> {noformat}
> And finally the HM aborts with the following error:
> {noformat}
> 2018-11-28 01:39:16,858 ERROR 
> [ActiveMasterInitializationMonitor-1543338648565] master.HMaster: Master 
> failed to complete initialization after 24ms. Please consider submitting 
> a bug report including a thread dump of this process.
> 2018-11-28 01:39:18,980 ERROR 
> [ActiveMasterInitializationMonitor-1543338648565] master.HMaster: Zombie 
> Master exiting. Thread dump to stdout
> {noformat}
> Stack trace:
> {noformat}
> Thread 102 (master/HM-1:16000:becomeActiveMaster):
>  State: TIMED_WAITING
>  Blocked count: 100
>  Waited count: 246
>  Stack:
>  java.lang.Thread.sleep(Native Method)
>  org.apache.hadoop.hbase.util.Threads.sleep(Threads.java:148)
>  org.apache.hadoop.hbase.master.HMaster.isRegionOnline(HMaster.java:1166)
>  
> org.apache.hadoop.hbase.master.HMaster.waitForNamespaceOnline(HMaster.java:1187)
>  
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:1044)
>  
> org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2285)
>  org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:590)
>  org.apache.hadoop.hbase.master.HMaster$$Lambda$40/1078246575.run(Unknown 
> Source)
>  java.lang.Thread.run(Thread.java:745)
> {noformat}
>  
> Steps to reproduce:
>  1) Set up an HBase cluster with 1/2 HM (say HM-1) and 2 RS (say RS-1 & RS-2)
>  2) Enable the multiwal feature with the following configuration settings and 
> start the cluster,
> {noformat}
>  <property>
>    <name>hbase.wal.provider</name>
>    <value>multiwal</value>
>  </property>
>  <property>
>    <name>hbase.wal.regiongrouping.strategy</name>
>    <value>identity</value>
>  </property>
> {noformat}
> 3) Make sure the meta and namespace regions are assigned to different RSes, 
> say RS-1 & RS-2 respectively.
>  4) Create table 't1' 
>  5) Flush the meta table explicitly
>  6) Kill RS-2, so during the RS-2 SCP all regions including the namespace 
> region will be assigned to RS-1.
>  7) Now kill RS-1 before a meta flush happens. Both RS-2 & RS-1 are now shut 
> down.
>  8) Stop the HM and start RS-1 & RS-2.
>  9) Now start the HM.
> The meta region is assigned successfully but the HM keeps waiting for the 
> namespace region to come online (Master startup cannot progress, in 
> holding-pattern until region onlined) and aborts with a timeout.
> Observation:
>  1) After step-3 namespace region was assigned to RS-2 and meta entry was as 
> follows,
> {noformat}
>  hbase:namespace,,1543339859614.31f6d3383af09e18e1e81ca02a93de15. 
> column=info:server, timestamp=1543339860920, value=RS-2:16020
>  hbase:namespace,,1543339859614.31f6d3383af09e18e1e81ca02a93de15. 
> column=info:serverstartcode, timestamp=1543339860920, value=1543339824397
> {noformat}
> 2) After step-6 namespace region was assigned to RS-1 and meta entry was as 
> follows,
> {noformat}
>  hbase:namespace,,1543339859614.31f6d3383af09e18e1e81ca02a93de15. 
> column=info:server, timestamp=1543339880920, value=RS-1:16020
>  hbase:namespace,,1543339859614.31f6d3383af09e18e1e81ca02a93de15. 
> column=info:serverstartcode, timestamp=1543339880920, value=1543339829288
> {noformat}
> 3) After Step-9, meta entry for namespace region was as follows,
> {noformat}
>  hbase:namespace,,1543339859614.31f6d3383af09e18e1e81ca02a93de15. 
> column=info:server, timestamp=1543339860920, value=RS-2:16020
>  hbase:namespace,,1543339859614.31f6d3383af09e18e1e81ca02a93de15. 
> column=info:serverstartcode, timestamp=1543339860920, value=1543339824397
> {noformat}
> During SCP we do meta log split based on filter,
> {noformat}
>  /**
>  * Specialized method to

[jira] [Updated] (HBASE-21595) Print thread's information and stack traces when RS is aborting forcibly

2018-12-13 Thread Pankaj Kumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pankaj Kumar updated HBASE-21595:
-
Priority: Minor  (was: Major)

> Print thread's information and stack traces when RS is aborting forcibly
> 
>
> Key: HBASE-21595
> URL: https://issues.apache.org/jira/browse/HBASE-21595
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
>Priority: Minor
>
> After HBASE-21325 the RS terminates forcibly on abort timeout.
> We should print the thread info before terminating; it will be useful for 
> analyzing the RS abort timeout problem.
>  
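> A minimal sketch, assuming the forced-termination path can reuse the existing 
> Threads utility, of what such a dump could look like (the method name and 
> exact hook point are assumptions, not the actual patch):
> {code:java}
> // Hypothetical abort-timeout handler: dump all thread stacks before the
> // forced exit so a hanging abort can be analyzed afterwards.
> private void onAbortTimeout() {
>   LOG.error("Region server abort timed out; terminating forcibly. Thread dump to stdout");
>   Threads.printThreadInfo(System.out, "Aborting RegionServer");
>   // Skip shutdown hooks on purpose: they are presumably what is stuck.
>   Runtime.getRuntime().halt(1);
> }
> {code}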



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20695) Implement table level RegionServer replication metrics

2018-12-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720027#comment-16720027
 ] 

Hudson commented on HBASE-20695:


Results for branch branch-1.4
[build #586 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/586/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/586//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/586//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/586//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Implement table level RegionServer replication metrics 
> ---
>
> Key: HBASE-20695
> URL: https://issues.apache.org/jira/browse/HBASE-20695
> Project: HBase
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Xu Cang
>Assignee: Xu Cang
>Priority: Minor
> Fix For: 2.1.0
>
> Attachments: HBASE-20695.master.001.patch, 
> HBASE-20695.master.002.patch, HBASE-20695.master.003.patch, 
> HBASE-20695.master.004.patch, HBASE-20695.master.005.patch, 
> HBASE-20695.master.006.patch, HBASE-20695.master.007.patch, 
> HBASE-20695.master.008.patch, HBASE-20695.master.009.patch, 
> HBASE-20695.master.010.patch
>
>
> Region server metrics are currently mainly global metrics. It would be nice to 
> have table-level metrics such as a table-level source.AgeOfLastShippedOp to 
> show operators which table's replication is lagging behind.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20672) New metrics ReadRequestRate and WriteRequestRate

2018-12-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720025#comment-16720025
 ] 

Hudson commented on HBASE-20672:


Results for branch branch-1.4
[build #586 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/586/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/586//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/586//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/586//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> New metrics ReadRequestRate and WriteRequestRate
> 
>
> Key: HBASE-20672
> URL: https://issues.apache.org/jira/browse/HBASE-20672
> Project: HBase
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Ankit Jain
>Assignee: Ankit Jain
>Priority: Minor
> Fix For: 3.0.0, 1.5.0, 1.3.3, 2.2.0, 1.4.10
>
> Attachments: HBASE-20672.branch-1.001.patch, 
> HBASE-20672.branch-1.002.patch, HBASE-20672.branch-2.001.patch, 
> HBASE-20672.master.001.patch, HBASE-20672.master.002.patch, 
> HBASE-20672.master.003.patch, hits1vs2.4.40.400.png
>
>
> HBase currently provides read/write request counters (ReadRequestCount, 
> WriteRequestCount). Since counters that reset only after a restart of the 
> service are not easy to use, we would like to expose 2 new metrics in HBase, 
> ReadRequestRate and WriteRequestRate, at the region server level.
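> A minimal sketch, independent of HBase's actual metrics classes, of how a 
> per-interval rate can be derived from a monotonically increasing counter (all 
> names below are illustrative):
> {code:java}
> // Hypothetical rate helper: given successive samples of a counter that only
> // ever grows, return requests per second over each sampling interval.
> public final class RequestRate {
>   private long lastCount;
>   private long lastTimeMs;
> 
>   public RequestRate(long initialCount, long nowMs) {
>     this.lastCount = initialCount;
>     this.lastTimeMs = nowMs;
>   }
> 
>   /** Record a new sample and return the rate since the previous sample. */
>   public double update(long currentCount, long nowMs) {
>     long deltaOps = currentCount - lastCount;
>     long deltaMs = Math.max(1L, nowMs - lastTimeMs);
>     lastCount = currentCount;
>     lastTimeMs = nowMs;
>     return deltaOps * 1000.0 / deltaMs;
>   }
> }
> {code}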



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20047) AuthenticationTokenIdentifier should provide a toString

2018-12-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720028#comment-16720028
 ] 

Hudson commented on HBASE-20047:


Results for branch branch-1.4
[build #586 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/586/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/586//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/586//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/586//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> AuthenticationTokenIdentifier should provide a toString
> ---
>
> Key: HBASE-20047
> URL: https://issues.apache.org/jira/browse/HBASE-20047
> Project: HBase
>  Issue Type: Improvement
>  Components: Usability
>Reporter: Sean Busbey
>Assignee: maoling
>Priority: Minor
>  Labels: beginner
> Fix For: 3.0.0, 2.1.0, 1.5.0, 1.3.3, 1.4.10
>
> Attachments: HBASE-20047.master.v0.patch, HBASE-20047.master.v1.patch
>
>
> It'd be easier to debug things like MapReduce and Spark jobs if our 
> AuthenticationTokenIdentifier provided a toString method.
> For comparison, here's an example of a MapReduce job that has both an HDFS 
> delegation token and our delegation token:
> {code:java}
> 18/02/21 20:40:06 INFO mapreduce.JobSubmitter: Kind: HBASE_AUTH_TOKEN, 
> Service: 92a63bd8-9e00-4c04-ab61-da8e606068e1, Ident: 
> (org.apache.hadoop.hbase.security.token.AuthenticationTokenIdentifier@17)
> 18/02/21 20:40:06 INFO mapreduce.JobSubmitter: Kind: HDFS_DELEGATION_TOKEN, 
> Service: 172.31.118.118:8020, Ident: (token for some_user: 
> HDFS_DELEGATION_TOKEN owner=some_u...@example.com, renewer=yarn, realUser=, 
> issueDate=1519274405003, maxDate=1519879205003, sequenceNumber=23, 
> masterKeyId=9)
> {code}
> Stuff in TokenIdentifier is supposed to be public, so we should be fine to 
> dump everything, similar to Hadoop's AbstractDelegationTokenIdentifier.
>  
>  
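> A minimal sketch of the kind of toString that could be added; the field names 
> below are assumptions about the identifier's contents, modeled on the readable 
> output HDFS delegation tokens already produce:
> {code:java}
> // Hypothetical toString for AuthenticationTokenIdentifier. Field names
> // (username, keyId, issueDate, expirationDate, sequenceNumber) are assumed.
> @Override
> public String toString() {
>   return "(username=" + username
>       + ", keyId=" + keyId
>       + ", issueDate=" + issueDate
>       + ", expirationDate=" + expirationDate
>       + ", sequenceNumber=" + sequenceNumber + ")";
> }
> {code}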



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20858) port HBASE-20695 to branch-1

2018-12-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720026#comment-16720026
 ] 

Hudson commented on HBASE-20858:


Results for branch branch-1.4
[build #586 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/586/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/586//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/586//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/586//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> port HBASE-20695 to branch-1
> 
>
> Key: HBASE-20858
> URL: https://issues.apache.org/jira/browse/HBASE-20858
> Project: HBase
>  Issue Type: Improvement
>Reporter: Xu Cang
>Assignee: Xu Cang
>Priority: Minor
> Fix For: 1.5.0, 1.3.3, 1.4.10
>
> Attachments: HBASE-20858.branch-1.001.patch, 
> HBASE-20858.branch-1.002.patch
>
>
> port HBASE-20695 to branch-1



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21582) If call HBaseAdmin#snapshotAsync but forget call isSnapshotFinished, then SnapshotHFileCleaner will skip to run every time

2018-12-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720029#comment-16720029
 ] 

Hudson commented on HBASE-21582:


Results for branch branch-1.4
[build #586 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/586/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/586//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/586//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/586//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> If call HBaseAdmin#snapshotAsync but forget call isSnapshotFinished, then 
> SnapshotHFileCleaner will skip to run every time
> --
>
> Key: HBASE-21582
> URL: https://issues.apache.org/jira/browse/HBASE-21582
> Project: HBase
>  Issue Type: Bug
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.2.0, 2.1.2, 1.2.10, 1.4.10, 2.0.5
>
> Attachments: HBASE-21582.branch-1.v3.patch, HBASE-21582.v1.patch, 
> HBASE-21582.v2.patch, HBASE-21582.v3.patch
>
>
> This is because we remove the SnapshotSentinel from snapshotHandlers in 
> SnapshotManager#cleanupSentinels, and cleanupSentinels is only called in the 
> following 3 cases: 
> 1.  SnapshotManager#isSnapshotDone; 
> 2.  SnapshotManager#takeSnapshot; 
> 3. SnapshotManager#restoreOrCloneSnapshot
> So if isSnapshotDone is never called, and there is no further snapshot taking 
> and no snapshot restore/clone, the SnapshotSentinel will stay in 
> snapshotHandlers forever. But after HBASE-21387, the SnapshotHFileCleaner will 
> only check and clean the unreferenced files when no snapshot is being taken. 
> I found this bug because in our XiaoMi branch-2 we implement a soft delete 
> feature: if someone deletes a table, the master first creates a snapshot, and 
> after that the table deletion begins. The implementation is quite simple; we 
> use the snapshotManager to create a snapshot. 
> {code}
> diff --git 
> a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java 
> b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
> index 8f42e4a..6da6a64 100644
> --- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
> +++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
> @@ -2385,12 +2385,6 @@ public class HMaster extends HRegionServer implements 
> MasterServices {
>protected void run() throws IOException {
>  getMaster().getMasterCoprocessorHost().preDeleteTable(tableName);
>  
> +if (snapshotBeforeDelete) {
> +  LOG.info("Take snaposhot for " + tableName + " before deleting");
> +  snapshotManager
> +  
> .takeSnapshot(SnapshotDescriptionUtils.getSnapshotNameForDeletedTable(tableName));
> +}
> +
>  LOG.info(getClientIdAuditPrefix() + " delete " + tableName);
>  
>  // TODO: We can handle/merge duplicate request
> {code}
> In the master, I found this endless log after deleting a table: 
> {code}
> org.apache.hadoop.hbase.master.snapshot.SnapshotFileCache: Not checking 
> unreferenced files since snapshot is running, it will skip to clean the 
> HFiles this time
> {code}
> This is because snapshotHandlers is never cleaned after calling 
> snapshotManager#takeSnapshot. I think the async snapshot may have the same 
> problem. 
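> One possible direction, sketched only from the description above (not the 
> committed patch): have the "is any snapshot in progress" check also drop 
> sentinels whose handlers have already finished, so a client that never polls 
> isSnapshotDone cannot wedge SnapshotHFileCleaner forever. The map shape and 
> the isFinished() call are assumptions.
> {code:java}
> // Hypothetical helper in SnapshotManager: prune completed sentinels before
> // answering whether a snapshot is still running.
> synchronized boolean isTakingAnySnapshot() {
>   Iterator<Map.Entry<TableName, SnapshotSentinel>> it =
>       snapshotHandlers.entrySet().iterator();
>   while (it.hasNext()) {
>     if (it.next().getValue().isFinished()) {
>       it.remove(); // handler completed long ago; forget it
>     }
>   }
>   return !snapshotHandlers.isEmpty();
> }
> {code}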



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21505) Several inconsistencies on information reported for Replication Sources by hbase shell status 'replication' command.

2018-12-13 Thread Wellington Chevreuil (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720041#comment-16720041
 ] 

Wellington Chevreuil commented on HBASE-21505:
--

[~tianjingyun], [~openinx], let me know your thoughts regarding the proposal. If 
that info is ok, we could maybe reflect some of it on the current UI as well (in 
a separate jira)?

> Several inconsistencies on information reported for Replication Sources by 
> hbase shell status 'replication' command.
> 
>
> Key: HBASE-21505
> URL: https://issues.apache.org/jira/browse/HBASE-21505
> Project: HBase
>  Issue Type: Bug
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Major
> Attachments: 
> 0001-HBASE-21505-initial-version-for-more-detailed-report.patch, 
> HBASE-21505-master.001.patch, HBASE-21505-master.002.patch, 
> HBASE-21505-master.003.patch, HBASE-21505-master.004.patch, 
> HBASE-21505-master.005.patch
>
>
> While reviewing the hbase shell status 'replication' command, I noticed the 
> following issues related to the replication source section:
> 1) TimeStampsOfLastShippedOp keeps getting updated and increasing even when 
> no new edits were added to source, so nothing was really shipped. Test steps 
> performed:
> 1.1) Source cluster with only one table targeted to replication;
> 1.2) Added a new row, confirmed the row appeared in Target cluster;
> 1.3) Issued status 'replication' command in source, TimeStampsOfLastShippedOp 
> shows current timestamp T1.
> 1.4) Waited 30 seconds, no new data added to source. Issued status 
> 'replication' command, now shows timestamp T2.
> 2) When replication is stuck due to some connectivity issue or target 
> unavailability, if new edits are added in the source, the reported 
> AgeOfLastShippedOp wrongly shows the same value as "Replication Lag". This is 
> incorrect; AgeOfLastShippedOp should not change until there's indeed another 
> edit shipped to the target. Test steps performed:
> 2.1) Source cluster with only one table targeted to replication;
> 2.2) Stopped target cluster RS;
> 2.3) Put a new row on source. Running status 'replication' command does show 
> lag increasing. TimeStampsOfLastShippedOp seems correct also, no further 
> updates as described on bullet #1 above.
> 2.4) AgeOfLastShippedOp keeps increasing together with Replication Lag, even 
> though there's no new edit shipped to target:
> {noformat}
> ...
>  SOURCE: PeerID=1, AgeOfLastShippedOp=5581, SizeOfLogQueue=1, 
> TimeStampsOfLastShippedOp=Wed Nov 21 02:50:23 GMT 2018, Replication Lag=5581
> ...
> ...
> SOURCE: PeerID=1, AgeOfLastShippedOp=8586, SizeOfLogQueue=1, 
> TimeStampsOfLastShippedOp=Wed Nov 21 02:50:23 GMT 2018, Replication Lag=8586
> ...
> {noformat}
> 3) AgeOfLastShippedOp gets set to 0 even when a given edit had taken some 
> time before it got finally shipped to target. Test steps performed:
> 3.1) Source cluster with only one table targeted to replication;
> 3.2) Stopped target cluster RS;
> 3.3) Put a new row on source. 
> 3.4) AgeOfLastShippedOp keeps increasing together with Replication Lag, even 
> though there's no new edit shipped to target:
> {noformat}
> T1:
> ...
>  SOURCE: PeerID=1, AgeOfLastShippedOp=5581, SizeOfLogQueue=1, 
> TimeStampsOfLastShippedOp=Wed Nov 21 02:50:23 GMT 2018, Replication Lag=5581
> ...
> T2:
> ...
> SOURCE: PeerID=1, AgeOfLastShippedOp=8586, SizeOfLogQueue=1, 
> TimeStampsOfLastShippedOp=Wed Nov 21 02:50:23 GMT 2018, Replication Lag=8586
> ...
> {noformat}
> 3.5) Restart target cluster RS and verified the new row appeared there. No 
> new edit added, but status 'replication' command reports AgeOfLastShippedOp 
> as 0, while it should be the diff between the time it concluded shipping at 
> target and the time it was added in source:
> {noformat}
> SOURCE: PeerID=1, AgeOfLastShippedOp=0, SizeOfLogQueue=1, 
> TimeStampsOfLastShippedOp=Wed Nov 21 02:50:23 GMT 2018, Replication Lag=0
> {noformat}
> 4) When replication is stuck due to some connectivity issue or target 
> unavailability, if the RS is restarted, once the recovered queue source is 
> started, TimeStampsOfLastShippedOp is set to the initial java date (Thu Jan 01 
> 01:00:00 GMT 1970, for example), thus "Replication Lag" also gives a 
> completely inaccurate value (a sketch of a guard for this case follows the 
> example output below). 
> Tests performed:
> 4.1) Source cluster with only one table targeted to replication;
> 4.2) Stopped target cluster RS;
> 4.3) Put a new row on source, restart RS on source, waited a few seconds for 
> recovery queue source to startup, then it gives:
> {noformat}
> SOURCE: PeerID=1, AgeOfLastShippedOp=0, SizeOfLogQueue=1, 
> TimeStampsOfLastShippedOp=Thu Jan 01 01:00:00 GMT 1970, Replication 
> Lag=9223372036854775807
> {noformat}
> Also, we should repo
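> For bullet 4, a minimal sketch of the guard that would avoid the "now minus 
> epoch" artifact, assuming the lag is derived from TimeStampsOfLastShippedOp 
> (all names below are illustrative, not the actual metrics code):
> {code:java}
> // Hypothetical lag computation: if nothing has ever been shipped (timestamp
> // still 0), fall back to the age of the oldest pending edit instead of
> // "now - epoch", which is what yields Replication Lag=9223372036854775807.
> long computeReplicationLag(long timestampOfLastShippedOp,
>     long oldestPendingEditTs, long nowMs) {
>   if (timestampOfLastShippedOp <= 0) {
>     return oldestPendingEditTs > 0 ? nowMs - oldestPendingEditTs : 0L;
>   }
>   return Math.max(0L, nowMs - timestampOfLastShippedOp);
> }
> {code}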

[jira] [Updated] (HBASE-21410) A helper page that help find all problematic regions and procedures

2018-12-13 Thread Jingyun Tian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingyun Tian updated HBASE-21410:
-
Release Note: 
After HBASE-21410, we add a helper page to Master UI. This helper page is 
mainly to help hbase operator quickly found all regions and pids that are get 
stuck.
There are 2 entries to get in this page.
One is showing in the Regions in Transition section, it made *num region(s) in 
transition* a link that you can check all regions in transition and their 
related procedure IDs.
The other one is showing in the table details section, it made the number of 
CLOSING or OPENING regions a link, which you can check regions and related 
procedure IDs of CLOSING or OPENING regions of a certain table.
In this helper page, not only you can see all regions and related procedures, 
there are 2 buttons at the top which will show these regions or procedure IDs 
in text format. This is mainly aim to help operator to easily copy and paste 
all problematic procedure IDs and encoded region names to HBCK2's command line, 
by which can bypass these procedures or assign these regions.

> A helper page that help find all problematic regions and procedures
> ---
>
> Key: HBASE-21410
> URL: https://issues.apache.org/jira/browse/HBASE-21410
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.2.0, 2.1.1
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.2
>
> Attachments: HBASE-21410.branch-2.1.001.patch, 
> HBASE-21410.branch-2.1.002.patch, HBASE-21410.master.001.patch, 
> HBASE-21410.master.002.patch, HBASE-21410.master.003.patch, 
> HBASE-21410.master.004.patch, Screenshot from 2018-10-30 19-06-21.png, 
> Screenshot from 2018-10-30 19-06-42.png, Screenshot from 2018-10-31 
> 10-11-38.png, Screenshot from 2018-10-31 10-11-56.png, Screenshot from 
> 2018-11-01 17-56-02.png, Screenshot from 2018-11-01 17-56-15.png
>
>
> *This page mainly focuses on finding the regions stuck in some state such that 
> they cannot be assigned. My proposal for the page is as follows:*
> !Screenshot from 2018-10-30 19-06-21.png!
> *From this page we can see all regions in RIT queue and their related 
> procedures. If we can determine that these regions' state are abnormal, we 
> can click the link 'Procedures as TXT' to get a full list of procedure IDs to 
> bypass them. Then click 'Regions as TXT' to get a full list of encoded region 
> names to assign.*
> !Screenshot from 2018-10-30 19-06-42.png!
> *Some region names are covered by the navigator bar, I'll fix it later.*



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21410) A helper page that help find all problematic regions and procedures

2018-12-13 Thread Jingyun Tian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingyun Tian updated HBASE-21410:
-
Release Note: 
After HBASE-21410, we add a helper page to Master UI. This helper page is 
mainly to help hbase operator quickly found all regions and pids that are get 
stuck.
There are 2 entries to get in this page.
One is showing in the Regions in Transition section, it made "num region(s) in 
transition" a link that you can click and check all regions in transition and 
their related procedure IDs.
The other one is showing in the table details section, it made the number of 
CLOSING or OPENING regions a link, which you can click and check regions and 
related procedure IDs of CLOSING or OPENING regions of a certain table.
In this helper page, not only you can see all regions and related procedures, 
there are 2 buttons at the top which will show these regions or procedure IDs 
in text format. This is mainly aim to help operator to easily copy and paste 
all problematic procedure IDs and encoded region names to HBCK2's command line, 
by which can bypass these procedures or assign these regions.

  was:
After HBASE-21410, we add a helper page to Master UI. This helper page is 
mainly to help hbase operator quickly found all regions and pids that are get 
stuck.
There are 2 entries to get in this page.
One is showing in the Regions in Transition section, it made *num region(s) in 
transition* a link that you can check all regions in transition and their 
related procedure IDs.
The other one is showing in the table details section, it made the number of 
CLOSING or OPENING regions a link, which you can check regions and related 
procedure IDs of CLOSING or OPENING regions of a certain table.
In this helper page, not only you can see all regions and related procedures, 
there are 2 buttons at the top which will show these regions or procedure IDs 
in text format. This is mainly aim to help operator to easily copy and paste 
all problematic procedure IDs and encoded region names to HBCK2's command line, 
by which can bypass these procedures or assign these regions.


> A helper page that help find all problematic regions and procedures
> ---
>
> Key: HBASE-21410
> URL: https://issues.apache.org/jira/browse/HBASE-21410
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.2.0, 2.1.1
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.2
>
> Attachments: HBASE-21410.branch-2.1.001.patch, 
> HBASE-21410.branch-2.1.002.patch, HBASE-21410.master.001.patch, 
> HBASE-21410.master.002.patch, HBASE-21410.master.003.patch, 
> HBASE-21410.master.004.patch, Screenshot from 2018-10-30 19-06-21.png, 
> Screenshot from 2018-10-30 19-06-42.png, Screenshot from 2018-10-31 
> 10-11-38.png, Screenshot from 2018-10-31 10-11-56.png, Screenshot from 
> 2018-11-01 17-56-02.png, Screenshot from 2018-11-01 17-56-15.png
>
>
> *This page mainly focuses on finding the regions stuck in some state such that 
> they cannot be assigned. My proposal for the page is as follows:*
> !Screenshot from 2018-10-30 19-06-21.png!
> *From this page we can see all regions in RIT queue and their related 
> procedures. If we can determine that these regions' state are abnormal, we 
> can click the link 'Procedures as TXT' to get a full list of procedure IDs to 
> bypass them. Then click 'Regions as TXT' to get a full list of encoded region 
> names to assign.*
> !Screenshot from 2018-10-30 19-06-42.png!
> *Some region names are covered by the navigator bar, I'll fix it later.*



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21410) A helper page that help find all problematic regions and procedures

2018-12-13 Thread Jingyun Tian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingyun Tian updated HBASE-21410:
-
Release Note: 
HBASE-21410 adds a helper page to the Master UI. This helper page is mainly 
intended to help HBase operators quickly find all regions and pids that get 
stuck.
There are 2 entry points to this page.
One is in the Regions in Transition section: "num region(s) in transition" is 
now a link that you can click to check all regions in transition and their 
related procedure IDs.
The other is in the table details section: the number of CLOSING or OPENING 
regions is now a link that you can click to check the regions and related 
procedure IDs of the CLOSING or OPENING regions of a certain table.
On this helper page, besides seeing all regions and related procedures, there 
are 2 buttons at the top which show these regions or procedure IDs in text 
format. This is mainly aimed at helping operators easily copy and paste all 
problematic procedure IDs and encoded region names to HBCK2's command line, 
with which an HBase operator can bypass these procedures or assign these 
regions.

  was:
After HBASE-21410, we add a helper page to Master UI. This helper page is 
mainly to help hbase operator quickly found all regions and pids that are get 
stuck.
There are 2 entries to get in this page.
One is showing in the Regions in Transition section, it made "num region(s) in 
transition" a link that you can click and check all regions in transition and 
their related procedure IDs.
The other one is showing in the table details section, it made the number of 
CLOSING or OPENING regions a link, which you can click and check regions and 
related procedure IDs of CLOSING or OPENING regions of a certain table.
In this helper page, not only you can see all regions and related procedures, 
there are 2 buttons at the top which will show these regions or procedure IDs 
in text format. This is mainly aim to help operator to easily copy and paste 
all problematic procedure IDs and encoded region names to HBCK2's command line, 
by which can bypass these procedures or assign these regions.


> A helper page that help find all problematic regions and procedures
> ---
>
> Key: HBASE-21410
> URL: https://issues.apache.org/jira/browse/HBASE-21410
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.2.0, 2.1.1
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.2
>
> Attachments: HBASE-21410.branch-2.1.001.patch, 
> HBASE-21410.branch-2.1.002.patch, HBASE-21410.master.001.patch, 
> HBASE-21410.master.002.patch, HBASE-21410.master.003.patch, 
> HBASE-21410.master.004.patch, Screenshot from 2018-10-30 19-06-21.png, 
> Screenshot from 2018-10-30 19-06-42.png, Screenshot from 2018-10-31 
> 10-11-38.png, Screenshot from 2018-10-31 10-11-56.png, Screenshot from 
> 2018-11-01 17-56-02.png, Screenshot from 2018-11-01 17-56-15.png
>
>
> *This page mainly focuses on finding the regions stuck in some state such that 
> they cannot be assigned. My proposal for the page is as follows:*
> !Screenshot from 2018-10-30 19-06-21.png!
> *From this page we can see all regions in RIT queue and their related 
> procedures. If we can determine that these regions' state are abnormal, we 
> can click the link 'Procedures as TXT' to get a full list of procedure IDs to 
> bypass them. Then click 'Regions as TXT' to get a full list of encoded region 
> names to assign.*
> !Screenshot from 2018-10-30 19-06-42.png!
> *Some region names are covered by the navigator bar, I'll fix it later.*



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21505) Several inconsistencies on information reported for Replication Sources by hbase shell status 'replication' command.

2018-12-13 Thread Wellington Chevreuil (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720040#comment-16720040
 ] 

Wellington Chevreuil commented on HBASE-21505:
--

Thanks [~openinx]! The last build had some additional failures, but I don't 
think those are related to any of the changes between the two latest patches, 
and they are all passing locally.

> Several inconsistencies on information reported for Replication Sources by 
> hbase shell status 'replication' command.
> 
>
> Key: HBASE-21505
> URL: https://issues.apache.org/jira/browse/HBASE-21505
> Project: HBase
>  Issue Type: Bug
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Major
> Attachments: 
> 0001-HBASE-21505-initial-version-for-more-detailed-report.patch, 
> HBASE-21505-master.001.patch, HBASE-21505-master.002.patch, 
> HBASE-21505-master.003.patch, HBASE-21505-master.004.patch, 
> HBASE-21505-master.005.patch
>
>
> While reviewing the hbase shell status 'replication' command, I noticed the 
> following issues related to the replication source section:
> 1) TimeStampsOfLastShippedOp keeps getting updated and increasing even when 
> no new edits were added to source, so nothing was really shipped. Test steps 
> performed:
> 1.1) Source cluster with only one table targeted to replication;
> 1.2) Added a new row, confirmed the row appeared in Target cluster;
> 1.3) Issued status 'replication' command in source, TimeStampsOfLastShippedOp 
> shows current timestamp T1.
> 1.4) Waited 30 seconds, no new data added to source. Issued status 
> 'replication' command, now shows timestamp T2.
> 2) When replication is stuck due to some connectivity issue or target 
> unavailability, if new edits are added in the source, the reported 
> AgeOfLastShippedOp wrongly shows the same value as "Replication Lag". This is 
> incorrect; AgeOfLastShippedOp should not change until there's indeed another 
> edit shipped to the target. Test steps performed:
> 2.1) Source cluster with only one table targeted to replication;
> 2.2) Stopped target cluster RS;
> 2.3) Put a new row on source. Running status 'replication' command does show 
> lag increasing. TimeStampsOfLastShippedOp seems correct also, no further 
> updates as described on bullet #1 above.
> 2.4) AgeOfLastShippedOp keeps increasing together with Replication Lag, even 
> though there's no new edit shipped to target:
> {noformat}
> ...
>  SOURCE: PeerID=1, AgeOfLastShippedOp=5581, SizeOfLogQueue=1, 
> TimeStampsOfLastShippedOp=Wed Nov 21 02:50:23 GMT 2018, Replication Lag=5581
> ...
> ...
> SOURCE: PeerID=1, AgeOfLastShippedOp=8586, SizeOfLogQueue=1, 
> TimeStampsOfLastShippedOp=Wed Nov 21 02:50:23 GMT 2018, Replication Lag=8586
> ...
> {noformat}
> 3) AgeOfLastShippedOp gets set to 0 even when a given edit had taken some 
> time before it got finally shipped to target. Test steps performed:
> 3.1) Source cluster with only one table targeted to replication;
> 3.2) Stopped target cluster RS;
> 3.3) Put a new row on source. 
> 3.4) AgeOfLastShippedOp keeps increasing together with Replication Lag, even 
> though there's no new edit shipped to target:
> {noformat}
> T1:
> ...
>  SOURCE: PeerID=1, AgeOfLastShippedOp=5581, SizeOfLogQueue=1, 
> TimeStampsOfLastShippedOp=Wed Nov 21 02:50:23 GMT 2018, Replication Lag=5581
> ...
> T2:
> ...
> SOURCE: PeerID=1, AgeOfLastShippedOp=8586, SizeOfLogQueue=1, 
> TimeStampsOfLastShippedOp=Wed Nov 21 02:50:23 GMT 2018, Replication Lag=8586
> ...
> {noformat}
> 3.5) Restart target cluster RS and verified the new row appeared there. No 
> new edit added, but status 'replication' command reports AgeOfLastShippedOp 
> as 0, while it should be the diff between the time it concluded shipping at 
> target and the time it was added in source:
> {noformat}
> SOURCE: PeerID=1, AgeOfLastShippedOp=0, SizeOfLogQueue=1, 
> TimeStampsOfLastShippedOp=Wed Nov 21 02:50:23 GMT 2018, Replication Lag=0
> {noformat}
> 4) When replication is stuck due to some connectivity issue or target 
> unavailability, if the RS is restarted, once the recovered queue source is 
> started, TimeStampsOfLastShippedOp is set to the initial java date (Thu Jan 01 
> 01:00:00 GMT 1970, for example), thus "Replication Lag" also gives a 
> completely inaccurate value. 
> Tests performed:
> 4.1) Source cluster with only one table targeted to replication;
> 4.2) Stopped target cluster RS;
> 4.3) Put a new row on source, restart RS on source, waited a few seconds for 
> recovery queue source to startup, then it gives:
> {noformat}
> SOURCE: PeerID=1, AgeOfLastShippedOp=0, SizeOfLogQueue=1, 
> TimeStampsOfLastShippedOp=Thu Jan 01 01:00:00 GMT 1970, Replication 
> Lag=9223372036854775807
> {noformat}
> Also, we should report st

[jira] [Commented] (HBASE-21582) If call HBaseAdmin#snapshotAsync but forget call isSnapshotFinished, then SnapshotHFileCleaner will skip to run every time

2018-12-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720049#comment-16720049
 ] 

Hudson commented on HBASE-21582:


Results for branch branch-1
[build #589 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/589/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/589//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/589//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/589//JDK8_Nightly_Build_Report_(Hadoop2)/]




(x) {color:red}-1 source release artifact{color}
-- See build output for details.


> If call HBaseAdmin#snapshotAsync but forget call isSnapshotFinished, then 
> SnapshotHFileCleaner will skip to run every time
> --
>
> Key: HBASE-21582
> URL: https://issues.apache.org/jira/browse/HBASE-21582
> Project: HBase
>  Issue Type: Bug
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.2.0, 2.1.2, 1.2.10, 1.4.10, 2.0.5
>
> Attachments: HBASE-21582.branch-1.v3.patch, HBASE-21582.v1.patch, 
> HBASE-21582.v2.patch, HBASE-21582.v3.patch
>
>
> This is because we remove the SnapshotSentinel from snapshotHandlers in 
> SnapshotManager#cleanupSentinels, and cleanupSentinels is only called in the 
> following 3 cases: 
> 1.  SnapshotManager#isSnapshotDone; 
> 2.  SnapshotManager#takeSnapshot; 
> 3. SnapshotManager#restoreOrCloneSnapshot
> So if isSnapshotDone is never called, and there is no further snapshot taking 
> and no snapshot restore/clone, the SnapshotSentinel will stay in 
> snapshotHandlers forever. But after HBASE-21387, the SnapshotHFileCleaner will 
> only check and clean the unreferenced files when no snapshot is being taken. 
> I found this bug because in our XiaoMi branch-2 we implement a soft delete 
> feature: if someone deletes a table, the master first creates a snapshot, and 
> after that the table deletion begins. The implementation is quite simple; we 
> use the snapshotManager to create a snapshot. 
> {code}
> diff --git 
> a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java 
> b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
> index 8f42e4a..6da6a64 100644
> --- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
> +++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
> @@ -2385,12 +2385,6 @@ public class HMaster extends HRegionServer implements 
> MasterServices {
>protected void run() throws IOException {
>  getMaster().getMasterCoprocessorHost().preDeleteTable(tableName);
>  
> +if (snapshotBeforeDelete) {
> +  LOG.info("Take snaposhot for " + tableName + " before deleting");
> +  snapshotManager
> +  
> .takeSnapshot(SnapshotDescriptionUtils.getSnapshotNameForDeletedTable(tableName));
> +}
> +
>  LOG.info(getClientIdAuditPrefix() + " delete " + tableName);
>  
>  // TODO: We can handle/merge duplicate request
> {code}
> In the master, I found this endless log after deleting a table: 
> {code}
> org.apache.hadoop.hbase.master.snapshot.SnapshotFileCache: Not checking 
> unreferenced files since snapshot is running, it will skip to clean the 
> HFiles this time
> {code}
> This is because snapshotHandlers is never cleaned after calling 
> snapshotManager#takeSnapshot. I think the async snapshot may have the same 
> problem. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21582) If call HBaseAdmin#snapshotAsync but forget call isSnapshotFinished, then SnapshotHFileCleaner will skip to run every time

2018-12-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720054#comment-16720054
 ] 

Hudson commented on HBASE-21582:


Results for branch master
[build #660 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/660/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/660//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/660//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/660//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> If call HBaseAdmin#snapshotAsync but forget call isSnapshotFinished, then 
> SnapshotHFileCleaner will skip to run every time
> --
>
> Key: HBASE-21582
> URL: https://issues.apache.org/jira/browse/HBASE-21582
> Project: HBase
>  Issue Type: Bug
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.2.0, 2.1.2, 1.2.10, 1.4.10, 2.0.5
>
> Attachments: HBASE-21582.branch-1.v3.patch, HBASE-21582.v1.patch, 
> HBASE-21582.v2.patch, HBASE-21582.v3.patch
>
>
> This is because we remove the SnapshotSentinel from snapshotHandlers in 
> SnapshotManager#cleanupSentinels, and cleanupSentinels is only called in the 
> following 3 cases: 
> 1.  SnapshotManager#isSnapshotDone; 
> 2.  SnapshotManager#takeSnapshot; 
> 3. SnapshotManager#restoreOrCloneSnapshot
> So if isSnapshotDone is never called, and there is no further snapshot taking 
> and no snapshot restore/clone, the SnapshotSentinel will stay in 
> snapshotHandlers forever. But after HBASE-21387, the SnapshotHFileCleaner will 
> only check and clean the unreferenced files when no snapshot is being taken. 
> I found this bug because in our XiaoMi branch-2 we implement a soft delete 
> feature: if someone deletes a table, the master first creates a snapshot, and 
> after that the table deletion begins. The implementation is quite simple; we 
> use the snapshotManager to create a snapshot. 
> {code}
> diff --git 
> a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java 
> b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
> index 8f42e4a..6da6a64 100644
> --- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
> +++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
> @@ -2385,12 +2385,6 @@ public class HMaster extends HRegionServer implements 
> MasterServices {
>protected void run() throws IOException {
>  getMaster().getMasterCoprocessorHost().preDeleteTable(tableName);
>  
> +if (snapshotBeforeDelete) {
> +  LOG.info("Take snaposhot for " + tableName + " before deleting");
> +  snapshotManager
> +  
> .takeSnapshot(SnapshotDescriptionUtils.getSnapshotNameForDeletedTable(tableName));
> +}
> +
>  LOG.info(getClientIdAuditPrefix() + " delete " + tableName);
>  
>  // TODO: We can handle/merge duplicate request
> {code}
> In the master, I found this endless log after deleting a table: 
> {code}
> org.apache.hadoop.hbase.master.snapshot.SnapshotFileCache: Not checking 
> unreferenced files since snapshot is running, it will skip to clean the 
> HFiles this time
> {code}
> This is because snapshotHandlers is never cleaned after calling 
> snapshotManager#takeSnapshot. I think the async snapshot may have the same 
> problem. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21520) TestMultiColumnScanner cost long time when using ROWCOL bloom type

2018-12-13 Thread Zheng Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-21520:
-
Attachment: HBASE-21520.v2.patch

> TestMultiColumnScanner cost long time when using ROWCOL bloom type
> --
>
> Key: HBASE-21520
> URL: https://issues.apache.org/jira/browse/HBASE-21520
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-21520.v1.patch, HBASE-21520.v2.patch, 
> TestMultiColumnScanner.png, rowcol.txt
>
>
> TestMultiColumnScanner times out easily; see HBASE-21517. 
> On my local machine, when I set the parameters to { Compression.Algorithm.NONE, 
> BloomType.ROW, false } it took about 5 seconds, but if I set the parameters to 
> { Compression.Algorithm.NONE, BloomType.ROWCOL, false } it took about 45 
> seconds, which means ROWCOL costs much more time than ROW.
> Need to find out what's wrong with this UT.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21048) Get LogLevel is not working from console in secure environment

2018-12-13 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HBASE-21048:

Attachment: HBASE-21048.master.001.patch

> Get LogLevel is not working from console in secure environment
> --
>
> Key: HBASE-21048
> URL: https://issues.apache.org/jira/browse/HBASE-21048
> Project: HBase
>  Issue Type: Bug
>Reporter: Chandra Sekhar
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HBASE-21048.001.patch, HBASE-21048.master.001.patch
>
>
> When we try to get the log level of a specific package in a secure 
> environment, we get a SocketException.
> {code:java}
> hbase/master/bin# ./hbase org.apache.hadoop.hbase.http.log.LogLevel -getlevel 
> host-:16010 org.apache.hadoop.hbase
> Connecting to http://host-:16010/logLevel?log=org.apache.hadoop.hbase
> java.net.SocketException: Unexpected end of file from server
> {code}
> It is trying to connect over http instead of https. 
> Code snippet that handles only http in *LogLevel.java*:
> {code:java}
>  public static void main(String[] args) {
> if (args.length == 3 && "-getlevel".equals(args[0])) {
>   process("http://"; + args[1] + "/logLevel?log=" + args[2]);
>   return;
> }
> else if (args.length == 4 && "-setlevel".equals(args[0])) {
>   process("http://"; + args[1] + "/logLevel?log=" + args[2]
>   + "&level=" + args[3]);
>   return;
> }
> System.err.println(USAGES);
> System.exit(-1);
>   }
> {code}
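> A minimal sketch of the direction a fix could take: derive the scheme from 
> configuration instead of hard-coding "http://" (the configuration key below is 
> only for illustration):
> {code:java}
> // Hypothetical scheme selection for the LogLevel client.
> Configuration conf = HBaseConfiguration.create();
> boolean sslEnabled = conf.getBoolean("hbase.ssl.enabled", false);
> String scheme = sslEnabled ? "https://" : "http://";
> process(scheme + args[1] + "/logLevel?log=" + args[2]);
> {code}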



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21582) If call HBaseAdmin#snapshotAsync but forget call isSnapshotFinished, then SnapshotHFileCleaner will skip to run every time

2018-12-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720063#comment-16720063
 ] 

Hudson commented on HBASE-21582:


Results for branch branch-2.0
[build #1161 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1161/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1161//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1161//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1161//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> If call HBaseAdmin#snapshotAsync but forget call isSnapshotFinished, then 
> SnapshotHFileCleaner will skip to run every time
> --
>
> Key: HBASE-21582
> URL: https://issues.apache.org/jira/browse/HBASE-21582
> Project: HBase
>  Issue Type: Bug
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.2.0, 2.1.2, 1.2.10, 1.4.10, 2.0.5
>
> Attachments: HBASE-21582.branch-1.v3.patch, HBASE-21582.v1.patch, 
> HBASE-21582.v2.patch, HBASE-21582.v3.patch
>
>
> This is because we remove the SnapshotSentinel from snapshotHandlers in 
> SnapshotManager#cleanupSentinels, and cleanupSentinels is only called in the 
> following 3 cases: 
> 1.  SnapshotManager#isSnapshotDone; 
> 2.  SnapshotManager#takeSnapshot; 
> 3. SnapshotManager#restoreOrCloneSnapshot
> So if isSnapshotDone is never called, and there is no further snapshot taking 
> and no snapshot restore/clone, the SnapshotSentinel will stay in 
> snapshotHandlers forever. But after HBASE-21387, the SnapshotHFileCleaner will 
> only check and clean the unreferenced files when no snapshot is being taken. 
> I found this bug because in our XiaoMi branch-2 we implement a soft delete 
> feature: if someone deletes a table, the master first creates a snapshot, and 
> after that the table deletion begins. The implementation is quite simple; we 
> use the snapshotManager to create a snapshot. 
> {code}
> diff --git 
> a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java 
> b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
> index 8f42e4a..6da6a64 100644
> --- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
> +++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
> @@ -2385,12 +2385,6 @@ public class HMaster extends HRegionServer implements 
> MasterServices {
>protected void run() throws IOException {
>  getMaster().getMasterCoprocessorHost().preDeleteTable(tableName);
>  
> +if (snapshotBeforeDelete) {
> +  LOG.info("Take snaposhot for " + tableName + " before deleting");
> +  snapshotManager
> +  
> .takeSnapshot(SnapshotDescriptionUtils.getSnapshotNameForDeletedTable(tableName));
> +}
> +
>  LOG.info(getClientIdAuditPrefix() + " delete " + tableName);
>  
>  // TODO: We can handle/merge duplicate request
> {code}
> In the master, I found this endless log after deleting a table: 
> {code}
> org.apache.hadoop.hbase.master.snapshot.SnapshotFileCache: Not checking 
> unreferenced files since snapshot is running, it will skip to clean the 
> HFiles this time
> {code}
> This is because snapshotHandlers is never cleaned after calling 
> snapshotManager#takeSnapshot. I think the async snapshot may have the same 
> problem. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-21596) HBase Shell "delete" command can cause older versions to be shown even if VERSIONS is configured as 1

2018-12-13 Thread Wellington Chevreuil (JIRA)
Wellington Chevreuil created HBASE-21596:


 Summary: HBase Shell "delete" command can cause older versions to 
be shown even if VERSIONS is configured as 1
 Key: HBASE-21596
 URL: https://issues.apache.org/jira/browse/HBASE-21596
 Project: HBase
  Issue Type: Bug
Reporter: Wellington Chevreuil
Assignee: Wellington Chevreuil


The HBase Shell delete command is supposed to operate on a specific TS. If no TS 
is informed, it will assume the latest TS for the cell and put a delete marker 
for it. 

However, for a CF with VERSIONS => 1, if multiple puts were performed for the 
same cell, there may be multiple cell versions in the memstore, so delete would 
only be "delete marking" one of those, causing the most recent unmarked one to 
be shown on gets/scans, which then contradicts the CF "VERSIONS => 1" 
configuration.

This issue is not seen with the deleteall command or when using the Delete 
operation from the Java API.
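
A minimal sketch, using the standard client API, of the distinction at play 
here: Delete#addColumn marks only a single version (the latest, or an explicit 
TS), while Delete#addColumns marks all versions, which is why the Java API path 
does not show the problem (table/CF names below are illustrative):
{code:java}
// Assumes an open Connection 'conn'; relevant imports are TableName,
// client.Delete, client.Table and util.Bytes from org.apache.hadoop.hbase.*.
try (Table table = conn.getTable(TableName.valueOf("t1"))) {
  // Marks only the latest (or an explicitly given) version, analogous to the
  // shell 'delete' behavior described above.
  Delete oneVersion = new Delete(Bytes.toBytes("row1"))
      .addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"));
  // Marks all versions of the cell, analogous to 'deleteall'.
  Delete allVersions = new Delete(Bytes.toBytes("row1"))
      .addColumns(Bytes.toBytes("cf"), Bytes.toBytes("q"));
  table.delete(oneVersion);
  table.delete(allVersions);
}
{code}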



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-21597) Evaluate the performance of ROWCOL bloom type when huge number of columns

2018-12-13 Thread Zheng Hu (JIRA)
Zheng Hu created HBASE-21597:


 Summary: Evaluate the performance of ROWCOL bloom type when huge 
number of columns
 Key: HBASE-21597
 URL: https://issues.apache.org/jira/browse/HBASE-21597
 Project: HBase
  Issue Type: Bug
Reporter: Zheng Hu


See HBASE-21520



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21597) Evaluate the performance of ROWCOL bloom type when huge number of columns

2018-12-13 Thread Zheng Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-21597:
-
Description: 
See HBASE-21520. 

Need benchmark testing.

  was:See HBASE-21520


> Evaluate the performance of ROWCOL bloom type when huge number of columns
> -
>
> Key: HBASE-21597
> URL: https://issues.apache.org/jira/browse/HBASE-21597
> Project: HBase
>  Issue Type: Bug
>Reporter: Zheng Hu
>Priority: Major
>
> See HBASE-21520. 
> Need benchmark testing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work started] (HBASE-21596) HBase Shell "delete" command can cause older versions to be shown even if VERSIONS is configured as 1

2018-12-13 Thread Wellington Chevreuil (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-21596 started by Wellington Chevreuil.

> HBase Shell "delete" command can cause older versions to be shown even if 
> VERSIONS is configured as 1
> -
>
> Key: HBASE-21596
> URL: https://issues.apache.org/jira/browse/HBASE-21596
> Project: HBase
>  Issue Type: Bug
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Minor
>
> The HBase Shell delete command is supposed to operate on a specific TS. If no 
> TS is informed, it will assume the latest TS for the cell and put a delete 
> marker for it. 
> However, for a CF with VERSIONS => 1, if multiple puts were performed for the 
> same cell, there may be multiple cell versions in the memstore, so delete 
> would only be "delete marking" one of those, causing the most recent unmarked 
> one to be shown on gets/scans, which then contradicts the CF "VERSIONS => 1" 
> configuration.
> This issue is not seen with the deleteall command or when using the Delete 
> operation from the Java API.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21578) Fix wrong throttling exception for capacity unit

2018-12-13 Thread Yi Mei (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Mei updated HBASE-21578:
---
Status: Patch Available  (was: Open)

> Fix wrong throttling exception for capacity unit
> 
>
> Key: HBASE-21578
> URL: https://issues.apache.org/jira/browse/HBASE-21578
> Project: HBase
>  Issue Type: Bug
>Reporter: Yi Mei
>Priority: Major
> Attachments: HBASE-21578.master.001.patch
>
>
> HBASE-21034 provides a new throttle type: capacity unit, but the throttling 
> exception is confusing: 
>  
> {noformat}
> 2018-12-11 14:38:41,503 DEBUG [Time-limited test] 
> client.RpcRetryingCallerImpl(131): Call exception, tries=6, retries=7, 
> started=0 ms ago, cancelled=false, 
> msg=org.apache.hadoop.hbase.quotas.RpcThrottlingException: write size limit 
> exceeded - wait 10sec
> at 
> org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwThrottlingException(RpcThrottlingException.java:106)
> at 
> org.apache.hadoop.hbase.quotas.RpcThrottlingException.throwWriteSizeExceeded(RpcThrottlingException.java:96)
> at 
> org.apache.hadoop.hbase.quotas.TimeBasedLimiter.checkQuota(TimeBasedLimiter.java:179){noformat}
> Need to make the exception message clearer.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21048) Get LogLevel is not working from console in secure environment

2018-12-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720086#comment-16720086
 ] 

Hadoop QA commented on HBASE-21048:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
48s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} hbase-http generated 0 new + 15 unchanged - 2 fixed 
= 15 total (was 17) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
13s{color} | {color:red} hbase-http: The patch generated 16 new + 4 unchanged - 
4 fixed = 20 total (was 8) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
50s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m 25s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
0s{color} | {color:green} hbase-http in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
10s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21048 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12951657/HBASE-21048.master.001.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  shadedjars  
hadoopcheck  xml  compile  findbugs  hbaseanti  checkstyle  |
| uname | Linux 4c9f89c82ed1 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / f32d261843 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC3 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HBASE-Build/15271/artifact/patchprocess/diff-checkstyle-hbase-http.txt
 |
|  Test

[jira] [Updated] (HBASE-21594) Requested block is out of range when reading hfile

2018-12-13 Thread ChenKai (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChenKai updated HBASE-21594:

Attachment: image-2018-12-13-20-11-00-818.png

> Requested block is out of range when reading hfile
> --
>
> Key: HBASE-21594
> URL: https://issues.apache.org/jira/browse/HBASE-21594
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.98.10
>Reporter: ChenKai
>Priority: Major
> Attachments: image-2018-12-13-20-11-00-818.png
>
>
> My HFiles are generated by Spark HBaseBulkLoad. Then, when I read a few of 
> them (or when HBase compacts), I encounter the following exception.
>  
> {code:java}
> Exception in thread "main" java.io.IOException: Requested block is out of 
> range: 77329641, lastDataBlockOffset: 77329641, 
> trailer.getLoadOnOpenDataOffset: 77329641
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:396)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.readNextDataBlock(HFileReaderV2.java:734)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.isNextBlock(HFileReaderV2.java:859)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.positionForNextBlock(HFileReaderV2.java:854)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2._next(HFileReaderV2.java:871)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:891)
> at io.patamon.hbase.test.read.TestHFileRead.main(TestHFileRead.java:49)
> {code}
> Looks like `lastDataBlockOffset` is equal to 
> `trailer.getLoadOnOpenDataOffset`. Could anyone help me? Thanks very much.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21594) Requested block is out of range when reading hfile

2018-12-13 Thread ChenKai (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720090#comment-16720090
 ] 

ChenKai commented on HBASE-21594:
-

!image-2018-12-13-20-11-00-818.png!



Should it be this here?
{code:java}
if (dataBlockOffset < 0 || dataBlockOffset > trailerOffset) {

}{code}
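
For reference, the guard being discussed looks roughly like this (a paraphrase reconstructed from the exception message, not the exact HFileReaderV2 source); the suggestion above relaxes the exclusive upper bound so a block starting exactly at loadOnOpenDataOffset is not rejected:
{code:java}
import java.io.IOException;

public class BlockOffsetCheck {
  // Rough paraphrase of the check under discussion (assumption: not the exact
  // HFileReaderV2 source). With the exclusive upper bound, a block that starts
  // exactly at loadOnOpenDataOffset is rejected even though it equals
  // lastDataBlockOffset, which matches the reported exception.
  static void checkOffset(long dataBlockOffset, long lastDataBlockOffset,
      long loadOnOpenDataOffset) throws IOException {
    if (dataBlockOffset < 0 || dataBlockOffset >= loadOnOpenDataOffset) {
      throw new IOException("Requested block is out of range: " + dataBlockOffset
          + ", lastDataBlockOffset: " + lastDataBlockOffset
          + ", trailer.getLoadOnOpenDataOffset: " + loadOnOpenDataOffset);
    }
  }
}
{code}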

> Requested block is out of range when reading hfile
> --
>
> Key: HBASE-21594
> URL: https://issues.apache.org/jira/browse/HBASE-21594
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.98.10
>Reporter: ChenKai
>Priority: Major
> Attachments: image-2018-12-13-20-11-00-818.png
>
>
> My HFiles are generated by Spark HBaseBulkLoad. Then, when I read a few of 
> them (or when HBase compacts), I encounter the following exception.
>  
> {code:java}
> Exception in thread "main" java.io.IOException: Requested block is out of 
> range: 77329641, lastDataBlockOffset: 77329641, 
> trailer.getLoadOnOpenDataOffset: 77329641
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:396)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.readNextDataBlock(HFileReaderV2.java:734)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.isNextBlock(HFileReaderV2.java:859)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.positionForNextBlock(HFileReaderV2.java:854)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2._next(HFileReaderV2.java:871)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:891)
> at io.patamon.hbase.test.read.TestHFileRead.main(TestHFileRead.java:49)
> {code}
> Looks like `lastDataBlockOffset` is equal to 
> `trailer.getLoadOnOpenDataOffset`. Could anyone help me? Thanks very much.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21589) TestCleanupMetaWAL fails

2018-12-13 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720092#comment-16720092
 ] 

Sean Busbey commented on HBASE-21589:
-

1.8u192-ea passed 100 runs. I'm going to try updating my maven version next.

> TestCleanupMetaWAL fails
> 
>
> Key: HBASE-21589
> URL: https://issues.apache.org/jira/browse/HBASE-21589
> Project: HBase
>  Issue Type: Bug
>  Components: test, wal
>Reporter: stack
>Priority: Blocker
> Fix For: 2.1.2, 2.0.4
>
> Attachments: 
> org.apache.hadoop.hbase.regionserver.TestCleanupMetaWAL-output.txt
>
>
> This test fails near all-the-time. Sunk two RCs. Fix. Made it a blocker.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-14939) Document bulk loaded hfile replication

2018-12-13 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-14939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720091#comment-16720091
 ] 

Wei-Chiu Chuang commented on HBASE-14939:
-

Back to this -- I was blocked by HBASE-21001. Now that it is resolved I can 
continue on this one.

> Document bulk loaded hfile replication
> --
>
> Key: HBASE-14939
> URL: https://issues.apache.org/jira/browse/HBASE-14939
> Project: HBase
>  Issue Type: Task
>  Components: documentation
>Reporter: Ashish Singhi
>Assignee: Wei-Chiu Chuang
>Priority: Major
>
> After HBASE-13153 is committed we need to add that information under the 
> Cluster Replication section in HBase book.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-21594) Requested block is out of range when reading hfile

2018-12-13 Thread ChenKai (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720090#comment-16720090
 ] 

ChenKai edited comment on HBASE-21594 at 12/13/18 12:15 PM:


!image-2018-12-13-20-11-00-818.png|height=100,width=280!



Should it be this here?
{code:java}
if (dataBlockOffset < 0 || dataBlockOffset > trailerOffset) {

}{code}


was (Author: 514793...@qq.com):
!image-2018-12-13-20-11-00-818.png!



Should it be this here?
{code:java}
if (dataBlockOffset < 0 || dataBlockOffset > trailerOffset) {

}{code}

> Requested block is out of range when reading hfile
> --
>
> Key: HBASE-21594
> URL: https://issues.apache.org/jira/browse/HBASE-21594
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.98.10
>Reporter: ChenKai
>Priority: Major
> Attachments: image-2018-12-13-20-11-00-818.png
>
>
> My HFiles are generated by Spark HBaseBulkLoad. Then, when I read a few of 
> them (or when HBase compacts), I encounter the following exception.
>  
> {code:java}
> Exception in thread "main" java.io.IOException: Requested block is out of 
> range: 77329641, lastDataBlockOffset: 77329641, 
> trailer.getLoadOnOpenDataOffset: 77329641
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:396)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.readNextDataBlock(HFileReaderV2.java:734)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.isNextBlock(HFileReaderV2.java:859)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.positionForNextBlock(HFileReaderV2.java:854)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2._next(HFileReaderV2.java:871)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:891)
> at io.patamon.hbase.test.read.TestHFileRead.main(TestHFileRead.java:49)
> {code}
> Looks like `lastDataBlockOffset` is equal to 
> `trailer.getLoadOnOpenDataOffset`. Could anyone help me? Thanks very much.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21588) Procedure v2 wal splitting implementation

2018-12-13 Thread Jingyun Tian (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingyun Tian updated HBASE-21588:
-
Attachment: HBASE-21588.master.002.patch

> Procedure v2 wal splitting implementation
> -
>
> Key: HBASE-21588
> URL: https://issues.apache.org/jira/browse/HBASE-21588
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
>Priority: Major
> Attachments: HBASE-21588.master.001.patch, 
> HBASE-21588.master.002.patch
>
>
> create a sub task to submit the implementation of procedure v2 wal splitting



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21588) Procedure v2 wal splitting implementation

2018-12-13 Thread Jingyun Tian (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720094#comment-16720094
 ] 

Jingyun Tian commented on HBASE-21588:
--

Let me check the QA Bot feedback again...

> Procedure v2 wal splitting implementation
> -
>
> Key: HBASE-21588
> URL: https://issues.apache.org/jira/browse/HBASE-21588
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
>Priority: Major
> Attachments: HBASE-21588.master.001.patch, 
> HBASE-21588.master.002.patch
>
>
> create a sub task to submit the implementation of procedure v2 wal splitting



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-21594) Requested block is out of range when reading hfile

2018-12-13 Thread ChenKai (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720090#comment-16720090
 ] 

ChenKai edited comment on HBASE-21594 at 12/13/18 12:17 PM:


!image-2018-12-13-20-11-00-818.png|width=280,height=100!

Should it be this here? Because I see the next block is ROOT_INDEX.
{code:java}
if (dataBlockOffset < 0 || dataBlockOffset > trailerOffset) {

}{code}


was (Author: 514793...@qq.com):
!image-2018-12-13-20-11-00-818.png|height=100,width=280!



Should it be this here?
{code:java}
if (dataBlockOffset < 0 || dataBlockOffset > trailerOffset) {

}{code}

> Requested block is out of range when reading hfile
> --
>
> Key: HBASE-21594
> URL: https://issues.apache.org/jira/browse/HBASE-21594
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 0.98.10
>Reporter: ChenKai
>Priority: Major
> Attachments: image-2018-12-13-20-11-00-818.png
>
>
> My HFiles are generated by Spark HBaseBulkLoad. Then, when I read a few of 
> them (or when HBase compacts), I encounter the following exception.
>  
> {code:java}
> Exception in thread "main" java.io.IOException: Requested block is out of 
> range: 77329641, lastDataBlockOffset: 77329641, 
> trailer.getLoadOnOpenDataOffset: 77329641
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.readBlock(HFileReaderV2.java:396)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.readNextDataBlock(HFileReaderV2.java:734)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.isNextBlock(HFileReaderV2.java:859)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.positionForNextBlock(HFileReaderV2.java:854)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2._next(HFileReaderV2.java:871)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$ScannerV2.next(HFileReaderV2.java:891)
> at io.patamon.hbase.test.read.TestHFileRead.main(TestHFileRead.java:49)
> {code}
> Looks like `lastDataBlockOffset` is equal to 
> `trailer.getLoadOnOpenDataOffset`. Could anyone help me? Thanks very much.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21582) If call HBaseAdmin#snapshotAsync but forget call isSnapshotFinished, then SnapshotHFileCleaner will skip to run every time

2018-12-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720100#comment-16720100
 ] 

Hudson commented on HBASE-21582:


Results for branch branch-2.1
[build #681 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/681/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/681//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/681//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/681//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> If call HBaseAdmin#snapshotAsync but forget call isSnapshotFinished, then 
> SnapshotHFileCleaner will skip to run every time
> --
>
> Key: HBASE-21582
> URL: https://issues.apache.org/jira/browse/HBASE-21582
> Project: HBase
>  Issue Type: Bug
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.2.0, 2.1.2, 1.2.10, 1.4.10, 2.0.5
>
> Attachments: HBASE-21582.branch-1.v3.patch, HBASE-21582.v1.patch, 
> HBASE-21582.v2.patch, HBASE-21582.v3.patch
>
>
> This is because we remove the SnapshotSentinel from snapshotHandlers in 
> SnapshotManager#cleanupSentinels, and cleanupSentinels is only called in the 
> following 3 cases: 
> 1. SnapshotManager#isSnapshotDone; 
> 2. SnapshotManager#takeSnapshot; 
> 3. SnapshotManager#restoreOrCloneSnapshot
> So if isSnapshotDone is never called, and there is no further snapshot taking 
> and no snapshot restore/clone, the SnapshotSentinel stays in snapshotHandlers 
> forever. 
> But after HBASE-21387, the SnapshotHFileCleaner only checks and cleans the 
> unreferenced files when no snapshot is being taken. 
> I found this bug because in our XiaoMi branch-2 we implement a soft delete 
> feature: if someone deletes a table, the master first creates a snapshot, and 
> after that the table deletion begins. The implementation is quite simple; we 
> use the snapshotManager to create a snapshot. 
> {code}
> diff --git 
> a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java 
> b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
> index 8f42e4a..6da6a64 100644
> --- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
> +++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
> @@ -2385,12 +2385,6 @@ public class HMaster extends HRegionServer implements 
> MasterServices {
>protected void run() throws IOException {
>  getMaster().getMasterCoprocessorHost().preDeleteTable(tableName);
>  
> +if (snapshotBeforeDelete) {
> +  LOG.info("Take snaposhot for " + tableName + " before deleting");
> +  snapshotManager
> +  
> .takeSnapshot(SnapshotDescriptionUtils.getSnapshotNameForDeletedTable(tableName));
> +}
> +
>  LOG.info(getClientIdAuditPrefix() + " delete " + tableName);
>  
>  // TODO: We can handle/merge duplicate request
> {code}
> In the master, I found this endless log after deleting a table: 
> {code}
> org.apache.hadoop.hbase.master.snapshot.SnapshotFileCache: Not checking 
> unreferenced files since snapshot is running, it will skip to clean the 
> HFiles this time
> {code}
> This is because snapshotHandlers is never cleaned up after calling 
> snapshotManager#takeSnapshot. I think the async snapshot may have the same 
> problem. 
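
A minimal client-side sketch of the call pattern in the title (HBase 2.x Admin API; the exact SnapshotDescription constructor varies by release, and names here are placeholders). Per the description, skipping the isSnapshotFinished polling leaves the sentinel behind and the cleaner keeps skipping its run:
{code:java}
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.SnapshotDescription;

public class AsyncSnapshotExample {
  // Takes a snapshot asynchronously and then polls isSnapshotFinished. Per the
  // description, skipping the polling leaves the SnapshotSentinel in
  // snapshotHandlers, so SnapshotHFileCleaner keeps skipping its run.
  static void snapshotAndWait(Admin admin) throws Exception {
    SnapshotDescription snapshot =
        new SnapshotDescription("snap-before-delete", TableName.valueOf("t1"));
    admin.snapshotAsync(snapshot);
    while (!admin.isSnapshotFinished(snapshot)) {
      Thread.sleep(1000);  // isSnapshotFinished also lets the master clean up its sentinel
    }
  }
}
{code}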



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21549) Add shell command for serial replication peer

2018-12-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720101#comment-16720101
 ] 

Hudson commented on HBASE-21549:


Results for branch branch-2.1
[build #681 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/681/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/681//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/681//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/681//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Add shell command for serial replication peer
> -
>
> Key: HBASE-21549
> URL: https://issues.apache.org/jira/browse/HBASE-21549
> Project: HBase
>  Issue Type: Improvement
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.1.2
>
> Attachments: HBASE-21549.branch-2.001.patch, 
> HBASE-21549.master.001.patch, HBASE-21549.master.002.patch, 
> HBASE-21549.master.003.patch
>
>
> add_peer support add a serial replication peer directly.
> set_peer_serial support change a replication peer's serial flag.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21554) Show replication endpoint classname for replication peer on master web UI

2018-12-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720102#comment-16720102
 ] 

Hudson commented on HBASE-21554:


Results for branch branch-2.1
[build #681 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/681/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/681//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/681//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/681//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Show replication endpoint classname for replication peer on master web UI
> -
>
> Key: HBASE-21554
> URL: https://issues.apache.org/jira/browse/HBASE-21554
> Project: HBase
>  Issue Type: Improvement
>  Components: UI
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Minor
> Fix For: 3.0.0, 2.2.0, 2.1.2
>
> Attachments: HBASE-21554.branch-2.001.patch, 
> HBASE-21554.master.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21582) If call HBaseAdmin#snapshotAsync but forget call isSnapshotFinished, then SnapshotHFileCleaner will skip to run every time

2018-12-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720118#comment-16720118
 ] 

Hudson commented on HBASE-21582:


Results for branch branch-2
[build #1555 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1555/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1555//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1555//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/1555//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> If call HBaseAdmin#snapshotAsync but forget call isSnapshotFinished, then 
> SnapshotHFileCleaner will skip to run every time
> --
>
> Key: HBASE-21582
> URL: https://issues.apache.org/jira/browse/HBASE-21582
> Project: HBase
>  Issue Type: Bug
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.2.0, 2.1.2, 1.2.10, 1.4.10, 2.0.5
>
> Attachments: HBASE-21582.branch-1.v3.patch, HBASE-21582.v1.patch, 
> HBASE-21582.v2.patch, HBASE-21582.v3.patch
>
>
> This is because we remove the SnapshotSentinel from snapshotHandlers in 
> SnapshotManager#cleanupSentinels, and cleanupSentinels is only called in the 
> following 3 cases: 
> 1. SnapshotManager#isSnapshotDone; 
> 2. SnapshotManager#takeSnapshot; 
> 3. SnapshotManager#restoreOrCloneSnapshot
> So if isSnapshotDone is never called, and there is no further snapshot taking 
> and no snapshot restore/clone, the SnapshotSentinel stays in snapshotHandlers 
> forever. 
> But after HBASE-21387, the SnapshotHFileCleaner only checks and cleans the 
> unreferenced files when no snapshot is being taken. 
> I found this bug because in our XiaoMi branch-2 we implement a soft delete 
> feature: if someone deletes a table, the master first creates a snapshot, and 
> after that the table deletion begins. The implementation is quite simple; we 
> use the snapshotManager to create a snapshot. 
> {code}
> diff --git 
> a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java 
> b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
> index 8f42e4a..6da6a64 100644
> --- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
> +++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
> @@ -2385,12 +2385,6 @@ public class HMaster extends HRegionServer implements 
> MasterServices {
>protected void run() throws IOException {
>  getMaster().getMasterCoprocessorHost().preDeleteTable(tableName);
>  
> +if (snapshotBeforeDelete) {
> +  LOG.info("Take snaposhot for " + tableName + " before deleting");
> +  snapshotManager
> +  
> .takeSnapshot(SnapshotDescriptionUtils.getSnapshotNameForDeletedTable(tableName));
> +}
> +
>  LOG.info(getClientIdAuditPrefix() + " delete " + tableName);
>  
>  // TODO: We can handle/merge duplicate request
> {code}
> In the master, I found this endless log after deleting a table: 
> {code}
> org.apache.hadoop.hbase.master.snapshot.SnapshotFileCache: Not checking 
> unreferenced files since snapshot is running, it will skip to clean the 
> HFiles this time
> {code}
> This is because snapshotHandlers is never cleaned up after calling 
> snapshotManager#takeSnapshot. I think the async snapshot may have the same 
> problem. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21589) TestCleanupMetaWAL fails

2018-12-13 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720176#comment-16720176
 ] 

Sean Busbey commented on HBASE-21589:
-

Maven 3.6.0 and Java 1.8u192-ea still passing. I guess it's only your logs to 
from for me. :(

> TestCleanupMetaWAL fails
> 
>
> Key: HBASE-21589
> URL: https://issues.apache.org/jira/browse/HBASE-21589
> Project: HBase
>  Issue Type: Bug
>  Components: test, wal
>Reporter: stack
>Priority: Blocker
> Fix For: 2.1.2, 2.0.4
>
> Attachments: 
> org.apache.hadoop.hbase.regionserver.TestCleanupMetaWAL-output.txt
>
>
> This test fails near all-the-time. Sunk two RCs. Fix. Made it a blocker.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-21589) TestCleanupMetaWAL fails

2018-12-13 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720176#comment-16720176
 ] 

Sean Busbey edited comment on HBASE-21589 at 12/13/18 1:54 PM:
---

Maven 3.6.0 and Java 1.8u192-ea still passing. I guess it's only your logs to 
work from for me. :(


was (Author: busbey):
Maven 3.6.0 and Java 1.8u192-ea still passing. I guess it's only your logs to 
from for me. :(

> TestCleanupMetaWAL fails
> 
>
> Key: HBASE-21589
> URL: https://issues.apache.org/jira/browse/HBASE-21589
> Project: HBase
>  Issue Type: Bug
>  Components: test, wal
>Reporter: stack
>Priority: Blocker
> Fix For: 2.1.2, 2.0.4
>
> Attachments: 
> org.apache.hadoop.hbase.regionserver.TestCleanupMetaWAL-output.txt
>
>
> This test fails near all-the-time. Sunk two RCs. Fix. Made it a blocker.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-21598) HBASE_WAL_DIR if not configured, recovered.edits directories are sidelined from the table dir path.

2018-12-13 Thread Y. SREENIVASULU REDDY (JIRA)
Y. SREENIVASULU REDDY created HBASE-21598:
-

 Summary: HBASE_WAL_DIR if not configured, recovered.edits 
directories are sidelined from the table dir path.
 Key: HBASE-21598
 URL: https://issues.apache.org/jira/browse/HBASE-21598
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 2.1.1
Reporter: Y. SREENIVASULU REDDY
 Fix For: 2.1.2, 2.1.1


If HBASE_WAL_DIR is not configured, then the 
recovered.edits dir path should follow the old layout.
If a user creates any number of tables in different namespaces, they are all 
created under the "hbase.rootdir" path, so the expected layout is:
{code}
<hbase.rootdir>/data/<namespace>/<table>/<region>/recovered.edits
eg:
/hbase/data/default/testTable/eaf343d35d3e66e6e5fd38106ba61c62/recovered.edits
{code}
But the format currently is:
{code}
<hbase.rootdir>/<namespace>/<table>/<region>/recovered.edits
eg:
/hbase/default/testTable/eaf343d35d3e66e6e5fd38106ba61c62/recovered.edits
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21598) HBASE_WAL_DIR if not configured, recovered.edits directories are sidelined from the table dir path.

2018-12-13 Thread Y. SREENIVASULU REDDY (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Y. SREENIVASULU REDDY updated HBASE-21598:
--
Component/s: Recovery

> HBASE_WAL_DIR if not configured, recovered.edits directories are sidelined 
> from the table dir path.
> ---
>
> Key: HBASE-21598
> URL: https://issues.apache.org/jira/browse/HBASE-21598
> Project: HBase
>  Issue Type: Bug
>  Components: Recovery, wal
>Affects Versions: 2.1.1
>Reporter: Y. SREENIVASULU REDDY
>Priority: Major
> Fix For: 2.1.1, 2.1.2
>
>
> If HBASE_WAL_DIR is not configured, then the 
> recovered.edits dir path should follow the old layout.
> If a user creates any number of tables in different namespaces, they are all 
> created under the "hbase.rootdir" path, so the expected layout is:
> {code}
> <hbase.rootdir>/data/<namespace>/<table>/<region>/recovered.edits
> eg:
> /hbase/data/default/testTable/eaf343d35d3e66e6e5fd38106ba61c62/recovered.edits
> {code}
> But the format currently is:
> {code}
> <hbase.rootdir>/<namespace>/<table>/<region>/recovered.edits
> eg:
> /hbase/default/testTable/eaf343d35d3e66e6e5fd38106ba61c62/recovered.edits
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21512) Introduce an AsyncClusterConnection and replace the usage of ClusterConnection

2018-12-13 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720249#comment-16720249
 ] 

Hudson commented on HBASE-21512:


Results for branch HBASE-21512
[build #16 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/16/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/16//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/16//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21512/16//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Introduce an AsyncClusterConnection and replace the usage of ClusterConnection
> --
>
> Key: HBASE-21512
> URL: https://issues.apache.org/jira/browse/HBASE-21512
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Duo Zhang
>Priority: Major
> Fix For: 3.0.0
>
>
> At least for the RSProcedureDispatcher, with CompletableFuture we do not need 
> to set a delay and use a thread pool any more, which could reduce the 
> resource usage and also the latency.
> Once this is done, I think we can remove the ClusterConnection completely, 
> and start to rewrite the old sync client based on the async client, which 
> could reduce the code base a lot for our client.
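
As a generic illustration of that claim (not HBase code; assumes Java 9+ for delayedExecutor and failedFuture): a CompletableFuture chain can park the retry delay in an executor instead of holding a dispatcher thread.
{code:java}
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

public class RetryWithoutThreadPool {
  // Retries an async call with back-off without parking a thread: the delay is
  // handed to delayedExecutor, so no dedicated delay thread pool is needed.
  static CompletableFuture<Void> callWithRetry(Supplier<CompletableFuture<Void>> rpc,
      int retriesLeft, long delayMs) {
    return rpc.get().handle((ok, err) -> {
      if (err == null) {
        return CompletableFuture.<Void>completedFuture(null);
      }
      if (retriesLeft <= 0) {
        return CompletableFuture.<Void>failedFuture(err);
      }
      Executor delayed = CompletableFuture.delayedExecutor(delayMs, TimeUnit.MILLISECONDS);
      return CompletableFuture.runAsync(() -> { }, delayed)
          .thenCompose(v -> callWithRetry(rpc, retriesLeft - 1, delayMs * 2));
    }).thenCompose(f -> f);
  }
}
{code}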



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21588) Procedure v2 wal splitting implementation

2018-12-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720290#comment-16720290
 ] 

Hadoop QA commented on HBASE-21588:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
57s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
48s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
22s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green}  3m 
40s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  1m 49s{color} 
| {color:red} hbase-server generated 1 new + 187 unchanged - 1 fixed = 188 
total (was 188) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
12s{color} | {color:red} hbase-server: The patch generated 53 new + 298 
unchanged - 0 fixed = 351 total (was 298) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 3 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
48s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m 25s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green}  
1m 40s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  2m 
10s{color} | {color:red} hbase-server generated 5 new + 0 unchanged - 0 fixed = 
5 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
33s{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
41s{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
22s{color} | {color:green} hbase-procedure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}121m 
46s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {colo

[jira] [Updated] (HBASE-20193) Basic Replication Web UI - Regionserver

2018-12-13 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20193:

Fix Version/s: (was: 2.2.0)

> Basic Replication Web UI - Regionserver 
> 
>
> Key: HBASE-20193
> URL: https://issues.apache.org/jira/browse/HBASE-20193
> Project: HBase
>  Issue Type: Sub-task
>  Components: Replication, Usability
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
>Priority: Critical
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20193.master.001.patch, 
> HBASE-20193.master.002.patch, HBASE-20193.master.003.patch, 
> HBASE-20193.master.004.patch, HBASE-20193.master.004.patch, 
> HBASE-20193.master.005.patch, HBASE-20193.master.006.patch, 
> HBASE-20193.master.006.patch, HBASE-20193.master.007.patch, 
> HBASE-20193.master.008.patch, HBASE-20193.master.009.patch, 
> HBASE-20193.master.010.patch, HBASE-20193.master.011.patch, 
> HBASE-20193.master.012.patch, HBASE-20193.master.013.patch, 
> HBASE-20193.master.014.patch, replication_rs_1.jpg, replication_rs_2.jpg
>
>
> subtask of HBASE-15809. Implementation of replication UI on Regionserver web 
> page.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20244) NoSuchMethodException when retrieving private method decryptEncryptedDataEncryptionKey from DFSClient

2018-12-13 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20244:

Fix Version/s: (was: 2.2.0)

> NoSuchMethodException when retrieving private method 
> decryptEncryptedDataEncryptionKey from DFSClient
> -
>
> Key: HBASE-20244
> URL: https://issues.apache.org/jira/browse/HBASE-20244
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0, 2.0.1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Blocker
> Fix For: 3.0.0, 2.1.0, 2.0.2
>
> Attachments: 20244.v1.txt, 20244.v1.txt, 20244.v1.txt, 
> HBASE-20244-v1.patch, HBASE-20244.patch
>
>
> I was running unit test against hadoop 3.0.1 RC and saw the following in test 
> output:
> {code}
> ERROR [RS-EventLoopGroup-3-3] 
> asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper(267): Couldn't properly 
> initialize access to HDFS internals. Please update  your WAL Provider to not 
> make use of the 'asyncfs' provider. See HBASE-16110 for more information.
> java.lang.NoSuchMethodException: 
> org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(org.apache.hadoop.fs.FileEncryptionInfo)
>   at java.lang.Class.getDeclaredMethod(Class.java:2130)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.createTransparentCryptoHelper(FanOutOneBlockAsyncDFSOutputSaslHelper.java:232)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputSaslHelper.(FanOutOneBlockAsyncDFSOutputSaslHelper.java:262)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.initialize(FanOutOneBlockAsyncDFSOutputHelper.java:661)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper.access$300(FanOutOneBlockAsyncDFSOutputHelper.java:118)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$13.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:720)
>   at 
> org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper$13.operationComplete(FanOutOneBlockAsyncDFSOutputHelper.java:715)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:507)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:500)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:479)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:420)
>   at 
> org.apache.hbase.thirdparty.io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:104)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.DefaultChannelPromise.trySuccess(DefaultChannelPromise.java:82)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.fulfillConnectPromise(AbstractNioChannel.java:306)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:341)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:633)
>   at 
> org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:580)
> {code}
> The private method was moved by HDFS-12574 to HdfsKMSUtil with different 
> signature.
> To accommodate the above method movement, it seems we need to call the 
> following method of DFSClient :
> {code}
>   public KeyProvider getKeyProvider() throws IOException {
> {code}
> Since the new decryptEncryptedDataEncryptionKey method has this signature:
> {code}
>   static KeyVersion decryptEncryptedDataEncryptionKey(FileEncryptionInfo
> feInfo, KeyProvider keyProvider) throws IOException {
> {code}
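
A rough sketch of the reflection lookup this implies (an assumption, not the actual HBASE-20244 patch): try the relocated HdfsKMSUtil method first and fall back to the old private DFSClient method on older Hadoop.
{code:java}
import java.lang.reflect.Method;
import org.apache.hadoop.crypto.key.KeyProvider;
import org.apache.hadoop.fs.FileEncryptionInfo;
import org.apache.hadoop.hdfs.DFSClient;

public class DecryptHelperLookup {
  // Sketch only (assumption, not the shipped patch): locate the helper that
  // HDFS-12574 moved from DFSClient to HdfsKMSUtil, falling back to the old
  // private DFSClient method on older Hadoop.
  static Method resolveDecryptMethod() throws Exception {
    Method m;
    try {
      Class<?> kmsUtil = Class.forName("org.apache.hadoop.hdfs.HdfsKMSUtil");
      m = kmsUtil.getDeclaredMethod("decryptEncryptedDataEncryptionKey",
          FileEncryptionInfo.class, KeyProvider.class);  // new location and signature
    } catch (ClassNotFoundException e) {
      m = DFSClient.class.getDeclaredMethod("decryptEncryptedDataEncryptionKey",
          FileEncryptionInfo.class);                     // pre-HDFS-12574 location
    }
    m.setAccessible(true);
    return m;
  }
}
{code}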



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HBASE-20839) Fallback to FSHLog if we cannot instantiate AsyncFSWAL when user does not specify AsyncFSWAL explicitly

2018-12-13 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey reassigned HBASE-20839:
---

Assignee: Duo Zhang  (was: Sean Busbey)

> Fallback to FSHLog if we cannot instantiate AsyncFSWAL when user does not 
> specify AsyncFSWAL explicitly
> -
>
> Key: HBASE-20839
> URL: https://issues.apache.org/jira/browse/HBASE-20839
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Blocker
> Fix For: 3.0.0, 2.1.0, 2.0.2
>
> Attachments: HBASE-20839.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HBASE-20839) Fallback to FSHLog if we cannot instantiate AsyncFSWAL when user does not specify AsyncFSWAL explicitly

2018-12-13 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey reassigned HBASE-20839:
---

Assignee: Sean Busbey  (was: Duo Zhang)

> Fallback to FSHLog if we cannot instantiate AsyncFSWAL when user does not 
> specify AsyncFSWAL explicitly
> -
>
> Key: HBASE-20839
> URL: https://issues.apache.org/jira/browse/HBASE-20839
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Reporter: Duo Zhang
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 3.0.0, 2.1.0, 2.0.2
>
> Attachments: HBASE-20839.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20839) Fallback to FSHLog if we cannot instantiate AsyncFSWAL when user does not specify AsyncFSWAL explicitly

2018-12-13 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-20839:

Fix Version/s: (was: 2.2.0)

> Fallback to FSHLog if we cannot instantiate AsyncFSWAL when user does not 
> specify AsyncFSWAL explicitly
> -
>
> Key: HBASE-20839
> URL: https://issues.apache.org/jira/browse/HBASE-20839
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Blocker
> Fix For: 3.0.0, 2.1.0, 2.0.2
>
> Attachments: HBASE-20839.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21578) Fix wrong throttling exception for capacity unit

2018-12-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720393#comment-16720393
 ] 

Hadoop QA commented on HBASE-21578:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
19s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 1s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
44s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
50s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
13s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
38s{color} | {color:red} hbase-client: The patch generated 3 new + 0 unchanged 
- 0 fixed = 3 total (was 0) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
45s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
10m 30s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
35s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}243m 38s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
44s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}300m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.client.TestFromClientSide |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21578 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12951324/HBASE-21578.master.001.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 3485cc2425fe 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Perso

[jira] [Updated] (HBASE-21582) If call HBaseAdmin#snapshotAsync but forget call isSnapshotFinished, then SnapshotHFileCleaner will skip to run every time

2018-12-13 Thread Andrew Purtell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-21582:
---
Fix Version/s: 1.3.3

> If call HBaseAdmin#snapshotAsync but forget call isSnapshotFinished, then 
> SnapshotHFileCleaner will skip to run every time
> --
>
> Key: HBASE-21582
> URL: https://issues.apache.org/jira/browse/HBASE-21582
> Project: HBase
>  Issue Type: Bug
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 1.3.3, 2.2.0, 2.1.2, 1.2.10, 1.4.10, 2.0.5
>
> Attachments: HBASE-21582.branch-1.v3.patch, HBASE-21582.v1.patch, 
> HBASE-21582.v2.patch, HBASE-21582.v3.patch
>
>
> This is because we remove the SnapshotSentinel from snapshotHandlers in 
> SnapshotManager#cleanupSentinels, and cleanupSentinels is only called in the 
> following 3 cases: 
> 1. SnapshotManager#isSnapshotDone; 
> 2. SnapshotManager#takeSnapshot; 
> 3. SnapshotManager#restoreOrCloneSnapshot
> So if isSnapshotDone is never called, and there is no further snapshot taking 
> and no snapshot restore/clone, the SnapshotSentinel stays in snapshotHandlers 
> forever. 
> But after HBASE-21387, the SnapshotHFileCleaner only checks and cleans the 
> unreferenced files when no snapshot is being taken. 
> I found this bug because in our XiaoMi branch-2 we implement a soft delete 
> feature: if someone deletes a table, the master first creates a snapshot, and 
> after that the table deletion begins. The implementation is quite simple; we 
> use the snapshotManager to create a snapshot. 
> {code}
> diff --git 
> a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java 
> b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
> index 8f42e4a..6da6a64 100644
> --- a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
> +++ b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
> @@ -2385,12 +2385,6 @@ public class HMaster extends HRegionServer implements 
> MasterServices {
>protected void run() throws IOException {
>  getMaster().getMasterCoprocessorHost().preDeleteTable(tableName);
>  
> +if (snapshotBeforeDelete) {
> +  LOG.info("Take snaposhot for " + tableName + " before deleting");
> +  snapshotManager
> +  
> .takeSnapshot(SnapshotDescriptionUtils.getSnapshotNameForDeletedTable(tableName));
> +}
> +
>  LOG.info(getClientIdAuditPrefix() + " delete " + tableName);
>  
>  // TODO: We can handle/merge duplicate request
> {code}
> In the master, I found this endless log after deleting a table: 
> {code}
> org.apache.hadoop.hbase.master.snapshot.SnapshotFileCache: Not checking 
> unreferenced files since snapshot is running, it will skip to clean the 
> HFiles this time
> {code}
> This is because snapshotHandlers is never cleaned up after calling 
> snapshotManager#takeSnapshot. I think the async snapshot may have the same 
> problem. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21098) Improve Snapshot Performance with Temporary Snapshot Directory when rootDir on S3

2018-12-13 Thread Andrew Purtell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-21098:
---
Fix Version/s: (was: 1.3.3)

> Improve Snapshot Performance with Temporary Snapshot Directory when rootDir 
> on S3
> -
>
> Key: HBASE-21098
> URL: https://issues.apache.org/jira/browse/HBASE-21098
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 1.4.8, 2.1.1
>Reporter: Tyler Mi
>Assignee: Tyler Mi
>Priority: Major
> Fix For: 3.0.0, 1.5.0, 2.2.0, 1.4.9
>
> Attachments: HBASE-21098.branch-1.001.patch, 
> HBASE-21098.branch-1.002.patch, HBASE-21098.master.001.patch, 
> HBASE-21098.master.002.patch, HBASE-21098.master.003.patch, 
> HBASE-21098.master.004.patch, HBASE-21098.master.005.patch, 
> HBASE-21098.master.006.patch, HBASE-21098.master.007.patch, 
> HBASE-21098.master.008.patch, HBASE-21098.master.009.patch, 
> HBASE-21098.master.010.patch, HBASE-21098.master.011.patch, 
> HBASE-21098.master.012.patch, HBASE-21098.master.013.patch
>
>
> When using Apache HBase, the snapshot feature can be used for point-in-time 
> recovery. To do this, HBase creates a manifest of all the files in all of 
> the Regions so that those files can be referenced again when a user restores 
> a snapshot. With HBase's S3 storage mode, developers can store their data 
> off-cluster on Amazon S3. However, utilizing S3 as a file system is 
> inefficient for some operations, namely renames. Most Hadoop ecosystem 
> applications use an atomic rename as a method of committing data. However, 
> on S3 a rename is a copy followed by a delete of every file, which is not 
> atomic and, in fact, quite costly. In addition, puts and deletes on S3 have 
> latency issues that traditional filesystems do not encounter when 
> manipulating the region snapshots to consolidate them into a single 
> manifest. When HBase on S3 users have a significant number of regions, puts, 
> deletes, and renames (the final commit stage of the snapshot) become the 
> bottleneck, causing snapshots to take many minutes or even hours to complete.
> The purpose of this patch is to increase the overall performance of snapshots 
> while utilizing HBase on S3 through the use of a temporary directory for the 
> snapshots that exists on a traditional filesystem like HDFS to circumvent the 
> bottlenecks.
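
As a rough illustration of the idea (assumptions, not a statement of the final 
property names): the snapshot working directory would point at HDFS while the 
root dir stays on S3. The property name hbase.snapshot.working.dir, the bucket, 
and the paths below are all hypothetical.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class SnapshotWorkingDirSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Root dir on S3 (hypothetical bucket).
    conf.set("hbase.rootdir", "s3a://example-bucket/hbase");
    // Assumed property: stage snapshot manifests on HDFS to dodge S3 rename/put latency.
    conf.set("hbase.snapshot.working.dir",
        "hdfs://namenode.example.com:8020/hbase/.snapshot-tmp");
    System.out.println(conf.get("hbase.snapshot.working.dir"));
  }
}
{code}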



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21275) Thrift Server (branch 1 fix) -> Disable TRACE HTTP method for thrift http server (branch 1 only)

2018-12-13 Thread Andrew Purtell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-21275:
---
Fix Version/s: 1.3.3

> Thrift Server (branch 1 fix) -> Disable TRACE HTTP method for thrift http 
> server (branch 1 only)
> 
>
> Key: HBASE-21275
> URL: https://issues.apache.org/jira/browse/HBASE-21275
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Minor
> Fix For: 1.5.0, 1.3.3, 1.4.9
>
> Attachments: HBASE-21275-branch-1.001.patch, 
> HBASE-21275-branch-1.2.001.patch, HBASE-21275-branch-1.2.002.patch, 
> HBASE-21275-branch-1.2.003.patch, HBASE-21275-branch-1.2.003.patch, 
> HBASE-21275-branch-1.4.001.patch
>
>
> A reasonable number of users running the thrift http server on hbase 1.x 
> have security audit tests flagging that the thrift server allows TRACE 
> requests.
> After doing some searching, I can see HBASE-20406 added restrictions for the 
> TRACE/OPTIONS methods when Thrift is running over http, but it relies on many 
> other commits applied to the thrift http server. That patch was later 
> reverted from master. Later again, HBASE-20004 made TRACE/OPTIONS 
> configurable via the "*hbase.thrift.http.allow.options.method*" property, 
> with both methods disabled by default. This also seems to rely on many 
> changes applied to the thrift http server, and a branch-1 compatible patch 
> does not seem feasible.
> A solution for branch-1 is pretty simple though: I am proposing a patch that 
> simply uses *WebAppContext*, instead of *Context*, as the context for the 
> *HttpServer* instance. *WebAppContext* already restricts the TRACE method by 
> default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-15529) Override needBalance in StochasticLoadBalancer

2018-12-13 Thread Andrew Purtell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-15529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-15529:
---
Fix Version/s: 1.3.3

> Override needBalance in StochasticLoadBalancer
> --
>
> Key: HBASE-15529
> URL: https://issues.apache.org/jira/browse/HBASE-15529
> Project: HBase
>  Issue Type: Improvement
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Minor
> Fix For: 1.4.0, 1.3.3, 2.0.0
>
> Attachments: 15529-v1.patch, HBASE-15529-v1.patch, 
> HBASE-15529-v2.patch, HBASE-15529-v3.patch, HBASE-15529.patch
>
>
> StochasticLoadBalancer includes cost functions to compute the cost of region 
> count, r/w qps, table load, region locality, memstore size, and storefile 
> size. Every cost function returns a number between 0 and 1 inclusive, and 
> the computed costs are scaled by their respective multipliers: a bigger 
> multiplier means the respective cost function carries more weight. 
> But needBalance decides whether to balance by region count only and doesn't 
> consider r/w qps or locality, even if you configure those cost functions 
> with bigger multipliers. StochasticLoadBalancer should override needBalance 
> and decide whether to balance based on its configured cost functions.
> Add one new config, hbase.master.balancer.stochastic.minCostNeedBalance: 
> the cluster needs balancing when 
> (total cost / sum of multipliers) > minCostNeedBalance.
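
A self-contained sketch of the decision rule described above, assuming the 
per-function costs in [0, 1] and their multipliers are already computed; the 
class and method names are illustrative, not HBase internals.

{code:java}
public final class MinCostNeedBalanceSketch {
  /** True when the normalized weighted cost exceeds the configured threshold. */
  static boolean needsBalance(double[] costs, double[] multipliers, double minCostNeedBalance) {
    double total = 0.0;
    double sumMultiplier = 0.0;
    for (int i = 0; i < costs.length; i++) {
      total += costs[i] * multipliers[i];
      sumMultiplier += multipliers[i];
    }
    return sumMultiplier > 0 && (total / sumMultiplier) > minCostNeedBalance;
  }

  public static void main(String[] args) {
    // Two cost functions: locality cost 0.4 weighted 500, region-count cost 0.02 weighted 25.
    double[] costs = {0.4, 0.02};
    double[] multipliers = {500.0, 25.0};
    // Prints true: 200.5 / 525 is roughly 0.38, which is above a 0.05 threshold.
    System.out.println(needsBalance(costs, multipliers, 0.05));
  }
}
{code}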



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-17565) StochasticLoadBalancer may incorrectly skip balancing due to skewed multiplier sum

2018-12-13 Thread Andrew Purtell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-17565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-17565:
---
Fix Version/s: 1.3.3

> StochasticLoadBalancer may incorrectly skip balancing due to skewed 
> multiplier sum
> --
>
> Key: HBASE-17565
> URL: https://issues.apache.org/jira/browse/HBASE-17565
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 1.4.0, 1.3.3, 2.0.0
>
> Attachments: 17565.addendum, 17565.v1.txt, 17565.v2.txt, 
> 17565.v3.txt, 17565.v4.txt, 17565.v5.txt, 17565.v6.txt
>
>
> I was investigating why a 6 node cluster kept skipping balancing requests.
> Here were the region counts on the servers:
> 449, 448, 447, 449, 453, 0
> {code}
> 2017-01-26 22:04:47,145 INFO  
> [RpcServer.deafult.FPBQ.Fifo.handler=1,queue=0,port=16000] 
> balancer.StochasticLoadBalancer: Skipping load balancing because balanced 
> cluster; total cost is 127.0171157050385, sum multiplier is 111087.0 min cost 
> which need balance is 0.05
> {code}
> The big multiplier sum caught my eye. Here is what additional debug logging 
> showed:
> {code}
> 2017-01-27 23:25:31,749 DEBUG 
> [RpcServer.deafult.FPBQ.Fifo.handler=9,queue=0,port=16000] 
> balancer.StochasticLoadBalancer: class 
> org.apache.hadoop.hbase.master.balancer.  
> StochasticLoadBalancer$RegionReplicaHostCostFunction with multiplier 10.0
> 2017-01-27 23:25:31,749 DEBUG 
> [RpcServer.deafult.FPBQ.Fifo.handler=9,queue=0,port=16000] 
> balancer.StochasticLoadBalancer: class 
> org.apache.hadoop.hbase.master.balancer.  
> StochasticLoadBalancer$RegionReplicaRackCostFunction with multiplier 1.0
> {code}
> Note, however, that no table in the cluster used read replicas.
> I can think of two ways of fixing this situation:
> 1. If there are no read replicas in the cluster, ignore the multipliers for 
> the above two functions.
> 2. When the cost() returned by a CostFunction is 0 (or very close to 0.0), 
> ignore its multiplier (a minimal sketch of this option follows below).
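
A minimal sketch of option 2 under the same cost/multiplier model as above; 
the names are illustrative, not HBase internals.

{code:java}
public final class IgnoreZeroCostMultiplierSketch {
  private static final double EPSILON = 1e-9;

  /** Normalized cost that leaves effectively-zero cost functions out of the sum. */
  static double normalizedCost(double[] costs, double[] multipliers) {
    double total = 0.0;
    double sumMultiplier = 0.0;
    for (int i = 0; i < costs.length; i++) {
      if (costs[i] < EPSILON) {
        continue; // contributes nothing, so its multiplier should not dilute the average
      }
      total += costs[i] * multipliers[i];
      sumMultiplier += multipliers[i];
    }
    return sumMultiplier > 0 ? total / sumMultiplier : 0.0;
  }

  public static void main(String[] args) {
    // Replica cost functions return 0 on a cluster without read replicas; without the
    // skip, their large multipliers would drag the normalized cost under the threshold.
    double[] costs = {0.0, 0.0, 0.3};
    double[] multipliers = {100000.0, 10000.0, 500.0};
    System.out.println(normalizedCost(costs, multipliers)); // prints 0.3
  }
}
{code}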



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21546) ConnectException in TestThriftHttpServer

2018-12-13 Thread Andrew Purtell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-21546:
---
Fix Version/s: 1.3.3

> ConnectException in TestThriftHttpServer
> 
>
> Key: HBASE-21546
> URL: https://issues.apache.org/jira/browse/HBASE-21546
> Project: HBase
>  Issue Type: Bug
>  Components: test, Thrift
>Affects Versions: 1.5.0, 1.4.9
>Reporter: Peter Somogyi
>Assignee: Peter Somogyi
>Priority: Major
> Fix For: 1.5.0, 1.3.3, 1.4.9
>
> Attachments: HBASE-21546.branch-1.01.patch
>
>
> TestThriftHttpServer is the first on the flaky list for branch-1 and 
> branch-1.4, with an approximately 60% failure rate.
> The Thrift server is not yet accepting requests at the time the test starts: 
> java.net.ConnectException: Connection refused (Connection refused) at 
> org.apache.hadoop.hbase.thrift.TestThriftHttpServer.checkHttpMethods(TestThriftHttpServer.java:275)
>  at 
> org.apache.hadoop.hbase.thrift.TestThriftHttpServer.testThriftServerHttpOptionsForbiddenWhenOptionsDisabled(TestThriftHttpServer.java:176)
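
For reference, a hedged sketch of the kind of wait the test needs before 
issuing requests: retry until the Thrift HTTP port accepts connections. The 
URL, port, and timeouts are illustrative; this is not the attached patch.

{code:java}
import java.net.ConnectException;
import java.net.HttpURLConnection;
import java.net.URL;

public final class WaitForThriftHttpSketch {
  /** Blocks until the server accepts an HTTP connection or the deadline passes. */
  static void waitUntilUp(String url, long timeoutMs) throws Exception {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (true) {
      try {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        conn.setConnectTimeout(1000);
        conn.connect();
        conn.disconnect();
        return; // the server is accepting connections
      } catch (ConnectException e) {
        if (System.currentTimeMillis() > deadline) {
          throw e; // give up once the deadline passes
        }
        Thread.sleep(100); // back off briefly and retry
      }
    }
  }

  public static void main(String[] args) throws Exception {
    waitUntilUp("http://localhost:9095/", 30000); // assumed local Thrift HTTP port
  }
}
{code}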



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21590) Optimize trySkipToNextColumn in StoreScanner a bit

2018-12-13 Thread Lars Hofhansl (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720539#comment-16720539
 ] 

Lars Hofhansl commented on HBASE-21590:
---

[~stack] In fact this triggers in many/most cases. Normally, most of the time 
the top scanner on the Heap won't change (assuming data is compacted and/or 
we're not updating individual KVs all the time - so that they end up in 
different files). And if the next indexed key changes we need to reseek anyway.

[~zghaobac] NP at all. Thanks for looking and thanks for identifying the 
problem in my initial solution in the first place! (Now at least we get some of 
the performance back.)


> Optimize trySkipToNextColumn in StoreScanner a bit
> --
>
> Key: HBASE-21590
> URL: https://issues.apache.org/jira/browse/HBASE-21590
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance, Scanners
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Attachments: 21590-1.5.txt, HBASE-21590-master.txt
>
>
> See latest comment on HBASE-17958



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21590) Optimize trySkipToNextColumn in StoreScanner a bit

2018-12-13 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-21590:
--
   Resolution: Fixed
Fix Version/s: 2.2.0
   1.5.0
   3.0.0
   Status: Resolved  (was: Patch Available)

Committed to branch-1, branch-2, and master.

The perf "regression" this fixes was introduced in 1.4.0.
[~apurtell], you want this in branch-1.4 as well?


> Optimize trySkipToNextColumn in StoreScanner a bit
> --
>
> Key: HBASE-21590
> URL: https://issues.apache.org/jira/browse/HBASE-21590
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance, Scanners
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 3.0.0, 1.5.0, 2.2.0
>
> Attachments: 21590-1.5.txt, HBASE-21590-master.txt
>
>
> See latest comment on HBASE-17958



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21590) Optimize trySkipToNextColumn in StoreScanner a bit

2018-12-13 Thread Lars Hofhansl (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720553#comment-16720553
 ] 

Lars Hofhansl commented on HBASE-21590:
---

And what about branch-2.0 and branch-2.1?

> Optimize trySkipToNextColumn in StoreScanner a bit
> --
>
> Key: HBASE-21590
> URL: https://issues.apache.org/jira/browse/HBASE-21590
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance, Scanners
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 3.0.0, 1.5.0, 2.2.0
>
> Attachments: 21590-1.5.txt, HBASE-21590-master.txt
>
>
> See latest comment on HBASE-17958



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21545) NEW_VERSION_BEHAVIOR breaks Get/Scan with specified columns

2018-12-13 Thread Sakthi (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720589#comment-16720589
 ] 

Sakthi commented on HBASE-21545:


Yes, I think we need to open new jiras. The new version of the failing UTs can 
be handled as a separate task for sure.

> NEW_VERSION_BEHAVIOR breaks Get/Scan with specified columns
> ---
>
> Key: HBASE-21545
> URL: https://issues.apache.org/jira/browse/HBASE-21545
> Project: HBase
>  Issue Type: Bug
>  Components: API
>Affects Versions: 2.0.0, 2.1.1
> Environment: HBase 2.1.1
> Hadoop 2.8.4
> Java 8
>Reporter: Andrey Elenskiy
>Assignee: Andrey Elenskiy
>Priority: Major
> Attachments: App.java, HBASE-21545.branch-2.1.0001.patch, 
> HBASE-21545.branch-2.1.0002.patch, HBASE-21545.branch-2.1.0003.patch, 
> HBASE-21545.branch-2.1.0004.patch, HBASE-21545.branch-2.1.0005.patch
>
>
> Setting NEW_VERSION_BEHAVIOR => 'true' on a column family causes only one 
> column to be returned when columns are specified in a Scan or Get query. The 
> result is always just the first column in sorted order. I've attached a code 
> snippet to reproduce the issue that can be converted into a test.
> I've also validated this with the hbase shell and the gohbase client, so it 
> has to be a server-side issue.
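
For context, a hedged reproduction sketch based on the description (the 
attached App.java is the authoritative reproducer): with NEW_VERSION_BEHAVIOR 
=> 'true' on family 'f' of a pre-created table, a Get that names two columns 
only returns the first one in sort order. The table, family, and column names 
here are made up.

{code:java}
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class NewVersionBehaviorRepro {
  public static void main(String[] args) throws Exception {
    // Assumes: create 'repro', {NAME => 'f', NEW_VERSION_BEHAVIOR => 'true'} was run first.
    byte[] f = Bytes.toBytes("f");
    try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = conn.getTable(TableName.valueOf("repro"))) {
      Put put = new Put(Bytes.toBytes("row1"));
      put.addColumn(f, Bytes.toBytes("a"), Bytes.toBytes("1"));
      put.addColumn(f, Bytes.toBytes("b"), Bytes.toBytes("2"));
      table.put(put);

      Get get = new Get(Bytes.toBytes("row1"));
      get.addColumn(f, Bytes.toBytes("a"));
      get.addColumn(f, Bytes.toBytes("b"));
      // Expected: 2 cells. On an affected cluster only column "a" comes back.
      System.out.println(table.get(get).size());
    }
  }
}
{code}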



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21590) Optimize trySkipToNextColumn in StoreScanner a bit

2018-12-13 Thread Lars Hofhansl (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720601#comment-16720601
 ] 

Lars Hofhansl commented on HBASE-21590:
---

I'll also look for other places where this is applicable - but it does require 
Cells that do not change, so this is probably a unique place.

> Optimize trySkipToNextColumn in StoreScanner a bit
> --
>
> Key: HBASE-21590
> URL: https://issues.apache.org/jira/browse/HBASE-21590
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance, Scanners
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 3.0.0, 1.5.0, 2.2.0
>
> Attachments: 21590-1.5.txt, HBASE-21590-master.txt
>
>
> See latest comment on HBASE-17958



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-21575) memstore above high watermark message is logged too much

2018-12-13 Thread Sergey Shelukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin resolved HBASE-21575.
--
Resolution: Fixed

Fixed the commit. Thanks for noticing...

> memstore above high watermark message is logged too much
> 
>
> Key: HBASE-21575
> URL: https://issues.apache.org/jira/browse/HBASE-21575
> Project: HBase
>  Issue Type: Bug
>  Components: logging, regionserver
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HBASE-21575.01.patch, HBASE-21575.patch
>
>
> 100s of MB of logs like this, in a tight loop:
> {noformat}
> 2018-12-08 10:27:00,462 WARN  
> [RpcServer.default.FPBQ.Fifo.handler=2,queue=2,port=17020] 
> regionserver.MemStoreFlusher: Memstore is above high water mark and block 
> 3646ms
> 2018-12-08 10:27:00,463 WARN  
> [RpcServer.default.FPBQ.Fifo.handler=2,queue=2,port=17020] 
> regionserver.MemStoreFlusher: Memstore is above high water mark and block 
> 3647ms
> 2018-12-08 10:27:00,463 WARN  
> [RpcServer.default.FPBQ.Fifo.handler=2,queue=2,port=17020] 
> regionserver.MemStoreFlusher: Memstore is above high water mark and block 
> 3647ms
> 2018-12-08 10:27:00,464 WARN  
> [RpcServer.default.FPBQ.Fifo.handler=2,queue=2,port=17020] 
> regionserver.MemStoreFlusher: Memstore is above high water mark and block 
> 3648ms
> 2018-12-08 10:27:00,464 WARN  
> [RpcServer.default.FPBQ.Fifo.handler=2,queue=2,port=17020] 
> regionserver.MemStoreFlusher: Memstore is above high water mark and block 
> 3648ms
> 2018-12-08 10:27:00,465 WARN  
> [RpcServer.default.FPBQ.Fifo.handler=2,queue=2,port=17020] 
> regionserver.MemStoreFlusher: Memstore is above high water mark and block 
> 3649ms
> 2018-12-08 10:27:00,465 WARN  
> [RpcServer.default.FPBQ.Fifo.handler=2,queue=2,port=17020] 
> regionserver.MemStoreFlusher: Memstore is above high water mark and block 
> 3649ms
> 2018-12-08 10:27:00,466 WARN  
> [RpcServer.default.FPBQ.Fifo.handler=2,queue=2,port=17020] 
> regionserver.MemStoreFlusher: Memstore is above high water mark and block 
> 3650ms
> 2018-12-08 10:27:00,466 WARN  
> [RpcServer.default.FPBQ.Fifo.handler=2,queue=2,port=17020] 
> regionserver.MemStoreFlusher: Memstore is above high water mark and block 
> 3650ms
> 2018-12-08 10:27:00,467 WARN  
> [RpcServer.default.FPBQ.Fifo.handler=2,queue=2,port=17020] 
> regionserver.MemStoreFlusher: Memstore is above high water mark and block 
> 3651ms
> 2018-12-08 10:27:00,469 WARN  
> [RpcServer.default.FPBQ.Fifo.handler=2,queue=2,port=17020] 
> regionserver.MemStoreFlusher: Memstore is above high water mark and block 
> 3653ms
> 2018-12-08 10:27:00,470 WARN  
> [RpcServer.default.FPBQ.Fifo.handler=2,queue=2,port=17020] 
> regionserver.MemStoreFlusher: Memstore is above high water mark and block 
> 3654ms
> 2018-12-08 10:27:00,470 WARN  
> [RpcServer.default.FPBQ.Fifo.handler=2,queue=2,port=17020] 
> regionserver.MemStoreFlusher: Memstore is above high water mark and block 
> 3654ms
> 2018-12-08 10:27:00,471 WARN  
> [RpcServer.default.FPBQ.Fifo.handler=2,queue=2,port=17020] 
> regionserver.MemStoreFlusher: Memstore is above high water mark and block 
> 3655ms
> 2018-12-08 10:27:00,471 WARN  
> [RpcServer.default.FPBQ.Fifo.handler=2,queue=2,port=17020] 
> regionserver.MemStoreFlusher: Memstore is above high water mark and block 
> 3655ms
> 2018-12-08 10:27:00,472 WARN  
> [RpcServer.default.FPBQ.Fifo.handler=2,queue=2,port=17020] 
> regionserver.MemStoreFlusher: Memstore is above high water mark and block 
> 3656ms
> 2018-12-08 10:27:00,472 WARN  
> [RpcServer.default.FPBQ.Fifo.handler=2,queue=2,port=17020] 
> regionserver.MemStoreFlusher: Memstore is above high water mark and block 
> 3656ms
> 2018-12-08 10:27:00,473 WARN  
> [RpcServer.default.FPBQ.Fifo.handler=2,queue=2,port=17020] 
> regionserver.MemStoreFlusher: Memstore is above high water mark and block 
> 3657ms
> 2018-12-08 10:27:00,474 WARN  
> [RpcServer.default.FPBQ.Fifo.handler=2,queue=2,port=17020] 
> regionserver.MemStoreFlusher: Memstore is above high water mark and block 
> 3658ms
> 2018-12-08 10:27:00,475 WARN  
> [RpcServer.default.FPBQ.Fifo.handler=2,queue=2,port=17020] 
> regionserver.MemStoreFlusher: Memstore is above high water mark and block 
> 3659ms
> 2018-12-08 10:27:00,476 WARN  
> [RpcServer.default.FPBQ.Fifo.handler=2,queue=2,port=17020] 
> regionserver.MemStoreFlusher: Memstore is above high water mark and block 
> 3660ms
> 2018-12-08 10:27:00,476 WARN  
> [RpcServer.default.FPBQ.Fifo.handler=2,queue=2,port=17020] 
> regionserver.MemStoreFlusher: Memstore is above high water mark and block 
> 3660ms
> 2018-12-08 10:27:00,477 WARN  
> [RpcServer.default.FPBQ.Fifo.handler=2,queue=2,port=17020] 
> regionserver.MemStoreFlusher: Memstore is above h

[jira] [Commented] (HBASE-21577) do not close regions when RS is dying due to a broken WAL

2018-12-13 Thread Sergey Shelukhin (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720597#comment-16720597
 ] 

Sergey Shelukhin commented on HBASE-21577:
--

[~Apache9] does this patch look good after the change?

> do not close regions when RS is dying due to a broken WAL
> -
>
> Key: HBASE-21577
> URL: https://issues.apache.org/jira/browse/HBASE-21577
> Project: HBase
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>Assignee: Sergey Shelukhin
>Priority: Major
> Attachments: HBASE-21577.master.001.patch, 
> HBASE-21577.master.002.patch
>
>
> See HBASE-21576. DroppedSnapshot can be an FS failure; also, when the WAL is 
> broken, some regions whose flushes are already in flight keep retrying, 
> resulting in minutes-long shutdown times. Since the WAL will be replayed 
> anyway, flushing regions doesn't provide much benefit.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21590) Optimize trySkipToNextColumn in StoreScanner a bit

2018-12-13 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720605#comment-16720605
 ] 

Andrew Purtell commented on HBASE-21590:


Yes it needs to be in 1.4 no question, thanks.

> Optimize trySkipToNextColumn in StoreScanner a bit
> --
>
> Key: HBASE-21590
> URL: https://issues.apache.org/jira/browse/HBASE-21590
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance, Scanners
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 3.0.0, 1.5.0, 2.2.0
>
> Attachments: 21590-1.5.txt, HBASE-21590-master.txt
>
>
> See latest comment on HBASE-17958



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21048) Get LogLevel is not working from console in secure environment

2018-12-13 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HBASE-21048:

Attachment: HBASE-21048.master.002.patch

> Get LogLevel is not working from console in secure environment
> --
>
> Key: HBASE-21048
> URL: https://issues.apache.org/jira/browse/HBASE-21048
> Project: HBase
>  Issue Type: Bug
>Reporter: Chandra Sekhar
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HBASE-21048.001.patch, HBASE-21048.master.001.patch, 
> HBASE-21048.master.002.patch
>
>
> When we try to get the log level of a specific package in a secure 
> environment, we get a SocketException.
> {code:java}
> hbase/master/bin# ./hbase org.apache.hadoop.hbase.http.log.LogLevel -getlevel 
> host-:16010 org.apache.hadoop.hbase
> Connecting to http://host-:16010/logLevel?log=org.apache.hadoop.hbase
> java.net.SocketException: Unexpected end of file from server
> {code}
> It is trying to connect over http instead of https. 
> This is the code snippet in *LogLevel.java* that handles only http:
> {code:java}
>  public static void main(String[] args) {
> if (args.length == 3 && "-getlevel".equals(args[0])) {
>   process("http://"; + args[1] + "/logLevel?log=" + args[2]);
>   return;
> }
> else if (args.length == 4 && "-setlevel".equals(args[0])) {
>   process("http://"; + args[1] + "/logLevel?log=" + args[2]
>   + "&level=" + args[3]);
>   return;
> }
> System.err.println(USAGES);
> System.exit(-1);
>   }
> {code}
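
A hedged sketch of the direction a fix could take: derive the scheme from 
whether SSL is enabled instead of hard-coding http. The helper below is 
illustrative only, not the attached patch.

{code:java}
public final class LogLevelUrlSketch {
  /** Builds the logLevel URL with the scheme picked from the server's SSL setting. */
  static String toUrl(boolean sslEnabled, String hostPort, String logName) {
    String scheme = sslEnabled ? "https://" : "http://";
    return scheme + hostPort + "/logLevel?log=" + logName;
  }

  public static void main(String[] args) {
    // Hypothetical host name and port.
    System.out.println(toUrl(true, "host.example.com:16010", "org.apache.hadoop.hbase"));
  }
}
{code}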



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-21599) Fix findbugs and javadoc warnings from HBASE-21246

2018-12-13 Thread Josh Elser (JIRA)
Josh Elser created HBASE-21599:
--

 Summary: Fix findbugs and javadoc warnings from HBASE-21246
 Key: HBASE-21599
 URL: https://issues.apache.org/jira/browse/HBASE-21599
 Project: HBase
  Issue Type: Bug
Reporter: Josh Elser
Assignee: Josh Elser
 Fix For: HBASE-20952


{noformat}
[WARNING] 
/testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java:125:
 warning - Tag @link: can't find preLogRoll(Path) in 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager
[WARNING] 
/testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java:125:
 warning - Tag @link: can't find preLogRoll(Path) in 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager
{noformat}
and
{noformat}
org.apache.hadoop.hbase.wal.DisabledWALProvider$1.equals(Object) always returns 
true{noformat}
Pretty trivial stuff to clean up now.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-21600) Investigate non-null check on FSWALIdentity

2018-12-13 Thread Josh Elser (JIRA)
Josh Elser created HBASE-21600:
--

 Summary: Investigate non-null check on FSWALIdentity
 Key: HBASE-21600
 URL: https://issues.apache.org/jira/browse/HBASE-21600
 Project: HBase
  Issue Type: Task
Reporter: Josh Elser
 Fix For: HBASE-20952


{code:java}
public FSWALIdentity(Path path) 

public FSWALIdentity(String name)
{code}

bq. Can we add a pre-null check or annotation NotNullable or javadoc to raise 
attention of no-null? Passing a null object to WALIdentity makes no sense to me.

Reid had the above suggestion on HBASE-21246. We should check throughout the 
code and make sure nothing else breaks if we start asserting that the 
path/name is always non-null (I fear something might :P).
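
A minimal sketch of the suggested fail-fast check, assuming FSWALIdentity 
simply stores the WAL name; the class body is illustrative since the real 
fields are not shown in this thread.

{code:java}
import java.util.Objects;
import org.apache.hadoop.fs.Path;

public class FSWALIdentitySketch {
  private final String name;

  public FSWALIdentitySketch(Path path) {
    this(Objects.requireNonNull(path, "path must not be null").getName());
  }

  public FSWALIdentitySketch(String name) {
    this.name = Objects.requireNonNull(name, "name must not be null");
  }

  public String getName() {
    return name;
  }
}
{code}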



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21246) Introduce WALIdentity interface

2018-12-13 Thread Josh Elser (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720644#comment-16720644
 ] 

Josh Elser commented on HBASE-21246:


Filed HBASE-21599 and HBASE-21600 to circle back around on the two (IMO, minor) 
cleanups to help this move forward.

Thanks much for the reviews, Reid, and thank you Ankit for picking up this 
patch. Committing it to the feature branch now.

> Introduce WALIdentity interface
> ---
>
> Key: HBASE-21246
> URL: https://issues.apache.org/jira/browse/HBASE-21246
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Major
> Fix For: HBASE-20952
>
> Attachments: 21246.003.patch, 21246.20.txt, 21246.21.txt, 
> 21246.23.txt, 21246.24.txt, 21246.25.txt, 21246.26.txt, 21246.34.txt, 
> 21246.37.txt, 21246.39.txt, 21246.41.txt, 21246.43.txt, 
> 21246.HBASE-20952.001.patch, 21246.HBASE-20952.002.patch, 
> 21246.HBASE-20952.004.patch, 21246.HBASE-20952.005.patch, 
> 21246.HBASE-20952.007.patch, 21246.HBASE-20952.008.patch, 
> HBASE-21246.HBASE-20952.003.patch, HBASE-21246.master.001.patch, 
> HBASE-21246.master.002.patch, replication-src-creates-wal-reader.jpg, 
> wal-factory-providers.png, wal-providers.png, wal-splitter-reader.jpg, 
> wal-splitter-writer.jpg
>
>
> We are introducing the WALIdentity interface so that the WAL representation 
> can be decoupled from the distributed filesystem.
> The interface provides a getName method whose return value can represent a 
> filename in a distributed filesystem environment or the name of the stream 
> when the WAL is backed by a log stream.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21590) Optimize trySkipToNextColumn in StoreScanner a bit

2018-12-13 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-21590:
--
Fix Version/s: 1.4.10

> Optimize trySkipToNextColumn in StoreScanner a bit
> --
>
> Key: HBASE-21590
> URL: https://issues.apache.org/jira/browse/HBASE-21590
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance, Scanners
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 3.0.0, 1.5.0, 2.2.0, 1.4.10
>
> Attachments: 21590-1.5.txt, HBASE-21590-master.txt
>
>
> See latest comment on HBASE-17958



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21590) Optimize trySkipToNextColumn in StoreScanner a bit

2018-12-13 Thread Lars Hofhansl (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720647#comment-16720647
 ] 

Lars Hofhansl commented on HBASE-21590:
---

Committed to branch-1.4 as well.

> Optimize trySkipToNextColumn in StoreScanner a bit
> --
>
> Key: HBASE-21590
> URL: https://issues.apache.org/jira/browse/HBASE-21590
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance, Scanners
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Critical
> Fix For: 3.0.0, 1.5.0, 2.2.0, 1.4.10
>
> Attachments: 21590-1.5.txt, HBASE-21590-master.txt
>
>
> See latest comment on HBASE-17958



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21246) Introduce WALIdentity interface

2018-12-13 Thread Josh Elser (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-21246:
---
  Resolution: Fixed
Assignee: Ankit Singhal  (was: Ted Yu)
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> Introduce WALIdentity interface
> ---
>
> Key: HBASE-21246
> URL: https://issues.apache.org/jira/browse/HBASE-21246
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Ted Yu
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: HBASE-20952
>
> Attachments: 21246.003.patch, 21246.20.txt, 21246.21.txt, 
> 21246.23.txt, 21246.24.txt, 21246.25.txt, 21246.26.txt, 21246.34.txt, 
> 21246.37.txt, 21246.39.txt, 21246.41.txt, 21246.43.txt, 
> 21246.HBASE-20952.001.patch, 21246.HBASE-20952.002.patch, 
> 21246.HBASE-20952.004.patch, 21246.HBASE-20952.005.patch, 
> 21246.HBASE-20952.007.patch, 21246.HBASE-20952.008.patch, 
> HBASE-21246.HBASE-20952.003.patch, HBASE-21246.master.001.patch, 
> HBASE-21246.master.002.patch, replication-src-creates-wal-reader.jpg, 
> wal-factory-providers.png, wal-providers.png, wal-splitter-reader.jpg, 
> wal-splitter-writer.jpg
>
>
> We are introducing the WALIdentity interface so that the WAL representation 
> can be decoupled from the distributed filesystem.
> The interface provides a getName method whose return value can represent a 
> filename in a distributed filesystem environment or the name of the stream 
> when the WAL is backed by a log stream.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21020) Determine WAL API changes for replication

2018-12-13 Thread Ankit Singhal (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated HBASE-21020:
--
Fix Version/s: HBASE-20952
   Status: Patch Available  (was: Open)

> Determine WAL API changes for replication
> -
>
> Key: HBASE-21020
> URL: https://issues.apache.org/jira/browse/HBASE-21020
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Reporter: Josh Elser
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: HBASE-20952
>
> Attachments: HBASE-21020.HBASE-20952.001.patch
>
>
> Spin-off of HBASE-20952.
> Ankit has started working on what he thinks a WAL API specifically for 
> Replication should look like. In his own words:
> {quote}
> At a high level, it looks,
>  * Need to abstract WAL name under WalInfo instead of Paths
>  * Abstract the WalEntryStream for FileSystem and Streaming system.
>  * Build WalStorage APIs to abstract operation on Wal.
>  * Provide the implementation of all above through corresponding WalProvider
> {quote}
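
To make the quoted bullets concrete, a purely illustrative sketch of the shape 
such an abstraction could take; these interface names and methods are 
assumptions, not the API being proposed on this issue.

{code:java}
import java.io.Closeable;
import java.io.IOException;
import java.util.List;

/** Names a WAL without assuming it is a filesystem path. */
interface WalInfo {
  String getName();
}

/** Iterates the entries of one WAL; could be backed by a file reader or a stream consumer. */
interface WalEntryStream extends Closeable {
  boolean hasNext() throws IOException;
  Object next() throws IOException; // the entry type is deliberately left abstract here
}

/** Storage-level operations on WALs, independent of HDFS versus a log-stream backend. */
interface WalStorage {
  List<WalInfo> listWals() throws IOException;
  WalEntryStream open(WalInfo wal) throws IOException;
}
{code}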



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21020) Determine WAL API changes for replication

2018-12-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720657#comment-16720657
 ] 

Hadoop QA commented on HBASE-21020:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HBASE-21020 does not apply to HBASE-20952. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.8.0/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HBASE-21020 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12951593/HBASE-21020.HBASE-20952.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/15275/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Determine WAL API changes for replication
> -
>
> Key: HBASE-21020
> URL: https://issues.apache.org/jira/browse/HBASE-21020
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Reporter: Josh Elser
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: HBASE-20952
>
> Attachments: HBASE-21020.HBASE-20952.001.patch
>
>
> Spin-off of HBASE-20952.
> Ankit has started working on what he thinks a WAL API specifically for 
> Replication should look like. In his own words:
> {quote}
> At a high level, it looks,
>  * Need to abstract WAL name under WalInfo instead of Paths
>  * Abstract the WalEntryStream for FileSystem and Streaming system.
>  * Build WalStorage APIs to abstract operation on Wal.
>  * Provide the implementation of all above through corresponding WalProvider
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21599) Fix findbugs and javadoc warnings from HBASE-21246

2018-12-13 Thread Josh Elser (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-21599:
---
Attachment: HBASE-21599.HBASE-20952.001.patch

> Fix findbugs and javadoc warnings from HBASE-21246
> --
>
> Key: HBASE-21599
> URL: https://issues.apache.org/jira/browse/HBASE-21599
> Project: HBase
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Trivial
> Fix For: HBASE-20952
>
> Attachments: HBASE-21599.HBASE-20952.001.patch
>
>
> {noformat}
> [WARNING] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java:125:
>  warning - Tag @link: can't find preLogRoll(Path) in 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager
> [WARNING] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java:125:
>  warning - Tag @link: can't find preLogRoll(Path) in 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager
> {noformat}
> and
> {noformat}
> org.apache.hadoop.hbase.wal.DisabledWALProvider$1.equals(Object) always 
> returns true{noformat}
> Pretty trivial stuff to clean up now.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21599) Fix findbugs and javadoc warnings from HBASE-21246

2018-12-13 Thread Josh Elser (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-21599:
---
Status: Patch Available  (was: Open)

FYI [~ankit.singhal]

> Fix findbugs and javadoc warnings from HBASE-21246
> --
>
> Key: HBASE-21599
> URL: https://issues.apache.org/jira/browse/HBASE-21599
> Project: HBase
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Trivial
> Fix For: HBASE-20952
>
> Attachments: HBASE-21599.HBASE-20952.001.patch
>
>
> {noformat}
> [WARNING] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java:125:
>  warning - Tag @link: can't find preLogRoll(Path) in 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager
> [WARNING] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSourceManager.java:125:
>  warning - Tag @link: can't find preLogRoll(Path) in 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager
> {noformat}
> and
> {noformat}
> org.apache.hadoop.hbase.wal.DisabledWALProvider$1.equals(Object) always 
> returns true{noformat}
> Pretty trivial stuff to clean up now.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21048) Get LogLevel is not working from console in secure environment

2018-12-13 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720668#comment-16720668
 ] 

Hadoop QA commented on HBASE-21048:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 5s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
46s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
26s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} hbase-http generated 0 new + 15 unchanged - 2 fixed 
= 15 total (was 17) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
13s{color} | {color:red} hbase-http: The patch generated 1 new + 4 unchanged - 
4 fixed = 5 total (was 8) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
48s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m 19s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
15s{color} | {color:green} hbase-http in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
10s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21048 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12951738/HBASE-21048.master.002.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  shadedjars  
hadoopcheck  xml  compile  findbugs  hbaseanti  checkstyle  |
| uname | Linux e5e3cf0262bc 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 3ff274e22e |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC3 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HBASE-Build/15274/artifact/patchprocess/diff-checkstyle-hbase-http.txt
 |
|  Test R
