[jira] [Commented] (HBASE-16417) In-Memory MemStore Policy for Flattening and Compactions

2017-03-21 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935785#comment-15935785
 ] 

Anoop Sam John commented on HBASE-16417:


Especially when we have CellChunkMap in place and we are doing merges on those 
segments. I am specifically interested in/worried about that area.

> In-Memory MemStore Policy for Flattening and Compactions
> 
>
> Key: HBASE-16417
> URL: https://issues.apache.org/jira/browse/HBASE-16417
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Eshcar Hillel
> Fix For: 2.0.0
>
> Attachments: HBASE-16417-benchmarkresults-20161101.pdf, 
> HBASE-16417-benchmarkresults-20161110.pdf, 
> HBASE-16417-benchmarkresults-20161123.pdf, 
> HBASE-16417-benchmarkresults-20161205.pdf, 
> HBASE-16417-benchmarkresults-20170309.pdf, 
> HBASE-16417-benchmarkresults-20170317.pdf
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17799) HBCK region boundaries check can return false negatives when IOExceptions are thrown

2017-03-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935775#comment-15935775
 ] 

Hadoop QA commented on HBASE-17799:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
8s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
47s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
47s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 9s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 109m 49s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 149m 37s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859846/HBASE-17799.master.002.patch
 |
| JIRA Issue | HBASE-17799 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux f9b182998335 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 9410709 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6187/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6187/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6187/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> HBCK region boundaries check can return false negatives when IOExceptions are 
> thrown
> 
>
> Key: HBASE-17799
> URL: https://issues.apache.org/jira/browse/HBASE-17799
> Project: HBase
>  Issue Type: Bug
>  Components: hbck
>Affects Versions: 2.0.0, 1.4.0, 1.3.1, 1.2.5
>Reporter: Esteban Gutierrez
>Assignee: Esteban Gutierrez
> Attachments: HBASE-17799.master.001.patch, 
> HBASE-17799.master.002.patch
>
>
> When enabled, HBaseFsck#checkRegionBoundaries will crawl all HFiles across 
> all namespaces and tables when {{-boundaries}} is specified. However, if an 
> IOException is thrown while accessing a corrupt HFile, an unhandled 
> HFileLink, or for any other reason, we only log the exception and stop 
> crawling the HFiles, potentially reporting the wrong result.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

[jira] [Updated] (HBASE-14141) HBase Backup/Restore Phase 3: Filter WALs on backup to include only edits from backup tables

2017-03-21 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-14141:
--
Attachment: HBASE-14141.v2.patch

v2 patch on master.

> HBase Backup/Restore Phase 3: Filter WALs on backup to include only edits 
> from backup tables
> 
>
> Key: HBASE-14141
> URL: https://issues.apache.org/jira/browse/HBASE-14141
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Blocker
>  Labels: backup
> Fix For: HBASE-7912
>
> Attachments: HBASE-14141.HBASE-14123.v1.patch, HBASE-14141.v1.patch, 
> HBASE-14141.v2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17757) Unify blocksize after encoding to decrease memory fragment

2017-03-21 Thread Allan Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935708#comment-15935708
 ] 

Allan Yang commented on HBASE-17757:


Thanks for your opinion, [~anoopsamjohn]. A release note has been added.

> Unify blocksize after encoding to decrease memory fragment 
> ---
>
> Key: HBASE-17757
> URL: https://issues.apache.org/jira/browse/HBASE-17757
> Project: HBase
>  Issue Type: New Feature
>Reporter: Allan Yang
>Assignee: Allan Yang
> Attachments: HBASE-17757.patch, HBASE-17757v2.patch
>
>
> Usually, we store encoded blocks (uncompressed) in the blockcache/bucketCache. 
> Though we have set the blocksize, after encoding the blocksize varies. Varied 
> blocksizes cause memory fragmentation, which ultimately results in more full 
> GCs. In order to relieve the memory fragmentation, this issue adjusts the 
> encoded blocks to a unified size.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17757) Unify blocksize after encoding to decrease memory fragment

2017-03-21 Thread Allan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allan Yang updated HBASE-17757:
---
Release Note: Blocksize is set in the column family's attributes. It is used to 
control block sizes when generating blocks. But it doesn't take encoding into 
account. If you enable encoding on a column family, the block size after 
encoding varies. Since blocks are cached in memory after encoding (by default), 
this causes memory fragmentation when using the blockcache, or decreases the 
pool efficiency when using the bucketCache. This issue introduces a new config 
named 'hbase.writer.unified.encoded.blocksize.ratio'. The default value of this 
config is 1, meaning do nothing. If this value is set to a smaller value, like 
0.5, and the blocksize is set to 64KB (the default), the blocksize after 
encoding will be unified to 64KB * 0.5 = 32KB. A unified blocksize relieves the 
memory problems mentioned above.
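
As an illustration, a minimal sketch of enabling the ratio programmatically, 
assuming the default 64KB blocksize (the property can equally be set in 
hbase-site.xml):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class UnifiedBlocksizeExample {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // With the default 64KB blocksize, a ratio of 0.5 unifies encoded blocks
    // to 64KB * 0.5 = 32KB.
    conf.setDouble("hbase.writer.unified.encoded.blocksize.ratio", 0.5);
  }
}
{code}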

> Unify blocksize after encoding to decrease memory fragment 
> ---
>
> Key: HBASE-17757
> URL: https://issues.apache.org/jira/browse/HBASE-17757
> Project: HBase
>  Issue Type: New Feature
>Reporter: Allan Yang
>Assignee: Allan Yang
> Attachments: HBASE-17757.patch, HBASE-17757v2.patch
>
>
> Usually, we store encoded blocks (uncompressed) in the blockcache/bucketCache. 
> Though we have set the blocksize, after encoding the blocksize varies. Varied 
> blocksizes cause memory fragmentation, which ultimately results in more full 
> GCs. In order to relieve the memory fragmentation, this issue adjusts the 
> encoded blocks to a unified size.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17799) HBCK region boundaries check can return false negatives when IOExceptions are thrown

2017-03-21 Thread Esteban Gutierrez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esteban Gutierrez updated HBASE-17799:
--
Attachment: HBASE-17799.master.002.patch

removed unused comparator.

> HBCK region boundaries check can return false negatives when IOExceptions are 
> thrown
> 
>
> Key: HBASE-17799
> URL: https://issues.apache.org/jira/browse/HBASE-17799
> Project: HBase
>  Issue Type: Bug
>  Components: hbck
>Affects Versions: 2.0.0, 1.4.0, 1.3.1, 1.2.5
>Reporter: Esteban Gutierrez
>Assignee: Esteban Gutierrez
> Attachments: HBASE-17799.master.001.patch, 
> HBASE-17799.master.002.patch
>
>
> When enabled, HBaseFsck#checkRegionBoundaries will crawl all HFiles across 
> all namespaces and tables when {{-boundaries}} is specified. However, if an 
> IOException is thrown while accessing a corrupt HFile, an unhandled 
> HFileLink, or for any other reason, we only log the exception and stop 
> crawling the HFiles, potentially reporting the wrong result.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17808) FastPath for RWQueueRpcExecutor

2017-03-21 Thread Allan Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935691#comment-15935691
 ] 

Allan Yang commented on HBASE-17808:


{quote}
Why does FastPathRWQueueRpcExecutor not extend RWQueueRpcExecutor?
{quote}
Good question. FastPathRWQueueRpcExecutor is backed by 
FastPathBalancedQueueRpcExecutor, yet it still does not perform as well as 
RWQueueRpcExecutor in my tests. I am still trying to figure out what's wrong.
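
For reference, a minimal sketch of the fast-path handoff idea from HBASE-16023 
(a simplification, not HBase's actual classes): idle handlers park on a stack, 
and dispatch hands a call directly to a parked handler, touching the shared 
queue only when no handler is idle.

{code}
import java.util.Deque;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentLinkedDeque;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.Semaphore;

public class FastPathSketch {
  private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
  private final Deque<Handler> parked = new ConcurrentLinkedDeque<>();

  public FastPathSketch(int numHandlers) {
    for (int i = 0; i < numHandlers; i++) {
      new Handler().start();
    }
  }

  final class Handler extends Thread {
    private final Semaphore ready = new Semaphore(0);
    private volatile Runnable task;

    void load(Runnable t) { task = t; ready.release(); }  // direct handoff

    @Override public void run() {
      while (true) {
        Runnable t = queue.poll();
        if (t == null) {             // nothing queued: park on the fast-path stack
          parked.push(this);
          ready.acquireUninterruptibly();
          t = task;
        }
        t.run();
      }
    }
  }

  public void dispatch(Runnable call) {
    Handler h = parked.poll();       // fast path: an idle handler is waiting
    if (h != null) {
      h.load(call);
    } else {
      queue.offer(call);             // slow path: fall back to the queue
    }
    // Note: a production version must close the race between a handler
    // parking and the dispatcher checking the stack; this sketch elides it.
  }
}
{code}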

> FastPath for RWQueueRpcExecutor
> ---
>
> Key: HBASE-17808
> URL: https://issues.apache.org/jira/browse/HBASE-17808
> Project: HBase
>  Issue Type: Improvement
>  Components: rpc
>Affects Versions: 2.0.0
>Reporter: Allan Yang
>Assignee: Allan Yang
> Attachments: HBASE-17808.patch, HBASE-17808.v2.patch
>
>
> FastPath for the FIFO rpcscheduler was introduced in HBASE-16023. But it is 
> not implemented for RW queues. In this issue, I use 
> FastPathBalancedQueueRpcExecutor in RW queues, so anyone who wants to isolate 
> their read/write requests can also benefit from the fastpath.
> I haven't tested the performance yet. But since I haven't changed any of the 
> core implementation of FastPathBalancedQueueRpcExecutor, it should have the 
> same performance as in HBASE-16023.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17809) cleanup unused class

2017-03-21 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935684#comment-15935684
 ] 

Chia-Ping Tsai commented on HBASE-17809:


All tests pass locally. These failed tests are unrelated to this jira.
Will commit it tomorrow if there are no objections.

> cleanup unused class
> 
>
> Key: HBASE-17809
> URL: https://issues.apache.org/jira/browse/HBASE-17809
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-17809.v0.patch, HBASE-17809.v0.patch, 
> HBASE-17809.v0.patch
>
>
> Inspired by HBASE-17805. We have left behind a lot of orphan classes over a 
> bunch of commits. We shall remove them.
> ||class||last meeting||why||line count||
> |LruHashMap|HBASE-1822|get rid of|1100|
> |ScannerTimeoutException|HBASE-16266|get rid of|45|
> |SortedCopyOnWriteSet|HBASE-12748|get rid of|178|
> |TestSortedCopyOnWriteSet|HBASE-12748|get rid of|107|
> |DelegatingRetryingCallable|HBASE-9049|create but never used|65|
> |LockTimeoutException|HBASE-16786|get rid of|44|
> |OperationConflictException|HBASE-9899|get rid of|50|
> |InvalidQuotaSettingsException|HBASE-11598|create but never used|33|
> |ShareableMemory|HBASE-15735|get rid of|40|
> |BoundedArrayQueue|HBASE-14860|get rid of|82|
> |TestBoundedArrayQueue|HBASE-14860|get rid of|61|
> |ChecksumFactory|HBASE-11927|get rid of|100|
> |TokenDepthComparator|HBASE-4676|create but never used|65|
> |RegionMergeTransaction|HBASE-17470|get rid of|249|
> |MetaUtils|HBASE-1822|get rid of|156|



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17808) FastPath for RWQueueRpcExecutor

2017-03-21 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935679#comment-15935679
 ] 

Guanghao Zhang commented on HBASE-17808:


Why does FastPathRWQueueRpcExecutor not extend RWQueueRpcExecutor?

> FastPath for RWQueueRpcExecutor
> ---
>
> Key: HBASE-17808
> URL: https://issues.apache.org/jira/browse/HBASE-17808
> Project: HBase
>  Issue Type: Improvement
>  Components: rpc
>Affects Versions: 2.0.0
>Reporter: Allan Yang
>Assignee: Allan Yang
> Attachments: HBASE-17808.patch, HBASE-17808.v2.patch
>
>
> FastPath for the FIFO rpcscheduler was introduced in HBASE-16023. But it is 
> not implemented for RW queues. In this issue, I use 
> FastPathBalancedQueueRpcExecutor in RW queues, so anyone who wants to isolate 
> their read/write requests can also benefit from the fastpath.
> I haven't tested the performance yet. But since I haven't changed any of the 
> core implementation of FastPathBalancedQueueRpcExecutor, it should have the 
> same performance as in HBASE-16023.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17816) HRegion#mutateRowWithLocks should update writeRequestCount metric

2017-03-21 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935659#comment-15935659
 ] 

Chia-Ping Tsai commented on HBASE-17816:


are you planning to do any tests?

> HRegion#mutateRowWithLocks should update writeRequestCount metric
> -
>
> Key: HBASE-17816
> URL: https://issues.apache.org/jira/browse/HBASE-17816
> Project: HBase
>  Issue Type: Bug
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
> Attachments: HBASE-17816.master.001.patch
>
>
> Currently, all the calls that use HRegion#mutateRowWithLocks miss the 
> writeRequestCount metric. The mutateRowWithLocks base method should update 
> the metric.
> Examples are checkAndMutate calls through RSRpcServices#multi, the 
> Region#mutateRow API, and the MultiRowMutationProcessor coprocessor endpoint.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17807) correct the value of zookeeper.session.timeout in hbase doc

2017-03-21 Thread Yechao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935656#comment-15935656
 ] 

Yechao Chen commented on HBASE-17807:
-

[~tedyu], thanks for the review.

> correct the value of zookeeper.session.timeout in hbase doc
> ---
>
> Key: HBASE-17807
> URL: https://issues.apache.org/jira/browse/HBASE-17807
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Yechao Chen
>Assignee: Yechao Chen
>Priority: Trivial
> Fix For: 2.0.0
>
> Attachments: HBASE-17807.patch
>
>
> I met a regionserver GC problem, and the regionserver log pointed me to the 
> doc:
> http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
> "If you wish to increase the session timeout, add the following to your 
> hbase-site.xml to increase the timeout from the default of 60 seconds to 120 
> seconds."
> <property>
>   <name>zookeeper.session.timeout</name>
>   <value>1200000</value>
> </property>
> The value should be 120000 (120s) instead of 1200000 (1200s).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17807) correct the value of zookeeper.session.timeout in hbase doc

2017-03-21 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-17807:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.0.0
   Status: Resolved  (was: Patch Available)

> correct the value of zookeeper.session.timeout in hbase doc
> ---
>
> Key: HBASE-17807
> URL: https://issues.apache.org/jira/browse/HBASE-17807
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Yechao Chen
>Assignee: Yechao Chen
>Priority: Trivial
> Fix For: 2.0.0
>
> Attachments: HBASE-17807.patch
>
>
> I met a regionserver GC problem, and the regionserver log pointed me to the 
> doc:
> http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
> "If you wish to increase the session timeout, add the following to your 
> hbase-site.xml to increase the timeout from the default of 60 seconds to 120 
> seconds."
> <property>
>   <name>zookeeper.session.timeout</name>
>   <value>1200000</value>
> </property>
> The value should be 120000 (120s) instead of 1200000 (1200s).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17807) correct the value of zookeeper.session.timeout in hbase doc

2017-03-21 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935618#comment-15935618
 ] 

Ted Yu commented on HBASE-17807:


lgtm

> correct the value of zookeeper.session.timeout in hbase doc
> ---
>
> Key: HBASE-17807
> URL: https://issues.apache.org/jira/browse/HBASE-17807
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Yechao Chen
>Assignee: Yechao Chen
>Priority: Trivial
> Attachments: HBASE-17807.patch
>
>
> I met a regionserver GC problem, and the regionserver log pointed me to the 
> doc:
> http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
> "If you wish to increase the session timeout, add the following to your 
> hbase-site.xml to increase the timeout from the default of 60 seconds to 120 
> seconds."
> <property>
>   <name>zookeeper.session.timeout</name>
>   <value>1200000</value>
> </property>
> The value should be 120000 (120s) instead of 1200000 (1200s).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17799) HBCK region boundaries check can return false negatives when IOExceptions are thrown

2017-03-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935606#comment-15935606
 ] 

Hadoop QA commented on HBASE-17799:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
50s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
51s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 1s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
31m 31s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 12s 
{color} | {color:red} hbase-server generated 1 new + 0 unchanged - 0 fixed = 1 
total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 114m 4s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 160m 5s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hbase-server |
|  |  Dead store to comparator in 
org.apache.hadoop.hbase.util.HBaseFsck.checkRegionBoundaries()  At 
HBaseFsck.java:org.apache.hadoop.hbase.util.HBaseFsck.checkRegionBoundaries()  
At HBaseFsck.java:[line 812] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859796/HBASE-17799.master.001.patch
 |
| JIRA Issue | HBASE-17799 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 7493bb17003b 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 11dc5bf |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6186/artifact/patchprocess/new-findbugs-hbase-server.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6186/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6186/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6186/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.

[jira] [Commented] (HBASE-17807) correct the value of zookeeper.session.timeout in hbase doc

2017-03-21 Thread Yechao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935581#comment-15935581
 ] 

Yechao Chen commented on HBASE-17807:
-

[~yuzhih...@gmail.com] would you mind helping me review this small patch?

> correct the value of zookeeper.session.timeout in hbase doc
> ---
>
> Key: HBASE-17807
> URL: https://issues.apache.org/jira/browse/HBASE-17807
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Reporter: Yechao Chen
>Assignee: Yechao Chen
>Priority: Trivial
> Attachments: HBASE-17807.patch
>
>
> I met a regionserver gc problem, and the regionserver log show me to read the 
> doc
> http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
> If you wish to increase the session timeout, add the following to your 
> hbase-site.xml to increase the timeout from the default of 60 seconds to 120 
> seconds.
> 
>   zookeeper.session.timeout
>   120
> 
> the value should be 12(120s) instead of 120(1200s)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17814) Move hbasecon site to hbase.apache.org

2017-03-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935560#comment-15935560
 ] 

stack commented on HBASE-17814:
---

Working on the redirect now. The site is up at hbase.org/www.hbasecon.com/

> Move hbasecon site to hbase.apache.org
> --
>
> Key: HBASE-17814
> URL: https://issues.apache.org/jira/browse/HBASE-17814
> Project: HBase
>  Issue Type: Bug
>Reporter: stack
>
> Moving our hbasecon pages off a site that Cloudera sponsored.
> We want to be able to point hbasecon at our new hosts for this year (and keep 
> around links to the old content; while it is all up on youtube and 
> slideshare, the hbasecon archive pages have the pointers).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-17817) Make Regionservers log which tables it removed coprocessors from when aborting

2017-03-21 Thread Steen Manniche (JIRA)
Steen Manniche created HBASE-17817:
--

 Summary: Make Regionservers log which tables it removed 
coprocessors from when aborting
 Key: HBASE-17817
 URL: https://issues.apache.org/jira/browse/HBASE-17817
 Project: HBase
  Issue Type: Improvement
  Components: Coprocessors, regionserver
Affects Versions: 1.1.2
Reporter: Steen Manniche


When a coprocessor throws a runtime exception (e.g. an NPE), the regionserver 
handles it according to {{hbase.coprocessor.abortonerror}}.

The output in the logs gives no indication as to which table the coprocessor 
was removed from (or which version or jarfile is the culprit). This causes 
longer debugging and recovery times.
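
A sketch of the kind of log line being requested (illustrative only; all names 
here are hypothetical, not existing HBase code):

{code}
import org.slf4j.Logger;

class CoprocessorAbortLogging {
  // Hypothetical abort-path logging: include the table, coprocessor class and
  // jar so the culprit can be identified without a long debugging session.
  static void logCoprocessorAbort(Logger log, String tableName, String cpClass,
      String jarPath, Throwable t) {
    log.error("Aborting region server: coprocessor " + cpClass + " (jar: "
        + jarPath + ") on table " + tableName + " threw an exception", t);
  }
}
{code}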



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17816) HRegion#mutateRowWithLocks should update writeRequestCount metric

2017-03-21 Thread Ashu Pachauri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashu Pachauri updated HBASE-17816:
--
Attachment: HBASE-17816.master.001.patch

> HRegion#mutateRowWithLocks should update writeRequestCount metric
> -
>
> Key: HBASE-17816
> URL: https://issues.apache.org/jira/browse/HBASE-17816
> Project: HBase
>  Issue Type: Bug
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
> Attachments: HBASE-17816.master.001.patch
>
>
> Currently, all the calls that use HRegion#mutateRowWithLocks miss the 
> writeRequestCount metric. The mutateRowWithLocks base method should update 
> the metric.
> Examples are checkAndMutate calls through RSRpcServices#multi, the 
> Region#mutateRow API, and the MultiRowMutationProcessor coprocessor endpoint.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HBASE-17816) HRegion#mutateRowWithLocks should update writeRequestCount metric

2017-03-21 Thread Ashu Pachauri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashu Pachauri reassigned HBASE-17816:
-

Assignee: Ashu Pachauri

> HRegion#mutateRowWithLocks should update writeRequestCount metric
> -
>
> Key: HBASE-17816
> URL: https://issues.apache.org/jira/browse/HBASE-17816
> Project: HBase
>  Issue Type: Bug
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
>
> Currently, all the calls that use HRegion#mutateRowWithLocks miss the 
> writeRequestCount metric. The mutateRowWithLocks base method should update 
> the metric.
> Examples are checkAndMutate calls through RSRpcServices#multi, the 
> Region#mutateRow API, and the MultiRowMutationProcessor coprocessor endpoint.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17809) cleanup unused class

2017-03-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935499#comment-15935499
 ] 

Hadoop QA commented on HBASE-17809:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 42s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
30s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 25s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
42s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
10s {color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: . {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 21s 
{color} | {color:red} hbase-prefix-tree in master has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 24s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
7s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
31m 29s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 53s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 23s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s 
{color} | {color:green} hbase-prefix-tree in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 110m 43s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 113m 12s 
{color} | {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 1m 
44s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 304m 28s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | org.apache.hadoop.hbase.client.TestFromClientSide |
|   | org.apache.hadoop.hbase.client.TestHCM |
|   | 

[jira] [Created] (HBASE-17816) HRegion#mutateRowWithLocks should update writeRequestCount metric

2017-03-21 Thread Ashu Pachauri (JIRA)
Ashu Pachauri created HBASE-17816:
-

 Summary: HRegion#mutateRowWithLocks should update 
writeRequestCount metric
 Key: HBASE-17816
 URL: https://issues.apache.org/jira/browse/HBASE-17816
 Project: HBase
  Issue Type: Bug
Reporter: Ashu Pachauri


Currently, all the calls that use HRegion#mutateRowWithLocks miss the 
writeRequestCount metric. The mutateRowWithLocks base method should update the 
metric.

Examples are checkAndMutate calls through RSRpcServices#multi, the 
Region#mutateRow API, and the MultiRowMutationProcessor coprocessor endpoint.
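
A toy model of the proposed fix (a sketch only; names like 
{{writeRequestsCount}} are taken from context and are assumptions, not the 
actual HRegion code):

{code}
import java.util.List;
import java.util.concurrent.atomic.LongAdder;

class RegionModel {
  final LongAdder writeRequestsCount = new LongAdder();

  // Bumping the counter in the base method means every caller
  // (checkAndMutate, mutateRow, MultiRowMutationProcessor) is counted.
  void mutateRowWithLocks(List<Runnable> mutations) {
    writeRequestsCount.increment();
    mutations.forEach(Runnable::run);  // apply the mutations (locking elided)
  }
}
{code}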



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17799) HBCK region boundaries check can return false negatives when IOExceptions are thrown

2017-03-21 Thread Esteban Gutierrez (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935467#comment-15935467
 ] 

Esteban Gutierrez commented on HBASE-17799:
---

The new output looks something like this when running with the {{-boundaries}} flag:

{code}
2017-03-21 14:00:29,051 INFO  [main] util.HBaseFsck: Starting region boundaries 
check. It might take a while...
2017-03-21 14:00:29,059 INFO  [main] util.HBaseFsck: Scanning 6 (online) 
regions for integrity of store files and META boundaries.
2017-03-21 14:00:29,064 INFO  [main] hfile.CacheConfig: CacheConfig:disabled
2017-03-21 14:00:29,111 INFO  [main] hfile.CacheConfig: CacheConfig:disabled
ERROR: Region boundaries mismatch in 
TestTable,004000,1490031498711.4bc692a1f072260b0d9bf298c2af555f.
 and store 
file:/var/folders/br/gq9645xd17s6v7xgjsmcd2drgp/T/hbase-esteban/hbase/data/default/TestTable/4bc692a1f072260b0d9bf298c2af555f/info/57894cc7de074b16a0dd0bada4d508ea
2017-03-21 14:00:29,112 WARN  [main] util.HBaseFsck: 
file:/var/folders/br/gq9645xd17s6v7xgjsmcd2drgp/T/hbase-esteban/hbase/data/default/TestTable/4bc692a1f072260b0d9bf298c2af555f/info/57894cc7de074b16a0dd0bada4d508ea
 is not within boundaries: store: [004000 -> 
011999)region: [004000 -> 
008000)
2017-03-21 14:00:29,112 INFO  [main] hfile.CacheConfig: CacheConfig:disabled
2017-03-21 14:00:29,113 WARN  [main] util.HBaseFsck: Skipping 
file:/var/folders/br/gq9645xd17s6v7xgjsmcd2drgp/T/hbase-esteban/hbase/data/default/TestTable/4bc692a1f072260b0d9bf298c2af555f/info/empty
 from region 
TestTable,004000,1490031498711.4bc692a1f072260b0d9bf298c2af555f.
 got an exception.
org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem reading HFile 
Trailer from file 
file:/var/folders/br/gq9645xd17s6v7xgjsmcd2drgp/T/hbase-esteban/hbase/data/default/TestTable/4bc692a1f072260b0d9bf298c2af555f/info/empty
at 
org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:474)
at org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:517)
at 
org.apache.hadoop.hbase.util.HBaseFsck.checkRegionBoundaries(HBaseFsck.java:706)
at org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:625)
at org.apache.hadoop.hbase.util.HBaseFsck.exec(HBaseFsck.java:4433)
at 
org.apache.hadoop.hbase.util.HBaseFsck$HBaseFsckTool.run(HBaseFsck.java:4236)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.hbase.util.HBaseFsck.main(HBaseFsck.java:4224)
Caused by: java.lang.IllegalArgumentException
at java.nio.Buffer.position(Buffer.java:244)
at 
org.apache.hadoop.hbase.io.hfile.FixedFileTrailer.readFromStream(FixedFileTrailer.java:395)
at 
org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:459)
... 8 more
2017-03-21 14:00:29,116 INFO  [main] hfile.CacheConfig: CacheConfig:disabled
2017-03-21 14:00:29,117 INFO  [main] hfile.CacheConfig: CacheConfig:disabled
2017-03-21 14:00:29,118 INFO  [main] hfile.CacheConfig: CacheConfig:disabled
2017-03-21 14:00:29,120 INFO  [main] hfile.CacheConfig: CacheConfig:disabled
2017-03-21 14:00:29,121 INFO  [main] util.HBaseFsck: Region boundaries test 
scanned: 6 files in 6 regions.
{code}

If there is an IOException on a file, we now log the exception, skip the file, 
and move on to the next one. If we find an HFileLink, we now skip it as well.
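
A minimal sketch of that per-file handling (illustrative; {{checkStoreFile}} is 
a hypothetical helper, not the actual HBaseFsck code):

{code}
import java.io.IOException;
import java.nio.file.Path;
import java.util.List;

class BoundaryCrawlSketch {
  // Log and skip a problem file instead of letting one bad HFile abort the
  // whole crawl and silently truncate the boundary check.
  void checkRegionBoundaries(List<Path> storeFiles) {
    for (Path storeFile : storeFiles) {
      try {
        checkStoreFile(storeFile);       // hypothetical per-file check
      } catch (IOException ioe) {
        System.err.println("Skipping " + storeFile + ", got an exception: " + ioe);
      }
    }
  }

  void checkStoreFile(Path storeFile) throws IOException { /* elided */ }
}
{code}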


> HBCK region boundaries check can return false negatives when IOExceptions are 
> thrown
> 
>
> Key: HBASE-17799
> URL: https://issues.apache.org/jira/browse/HBASE-17799
> Project: HBase
>  Issue Type: Bug
>  Components: hbck
>Affects Versions: 2.0.0, 1.4.0, 1.3.1, 1.2.5
>Reporter: Esteban Gutierrez
>Assignee: Esteban Gutierrez
> Attachments: HBASE-17799.master.001.patch
>
>
> When enabled, HBaseFsck#checkRegionBoundaries will crawl all HFiles across 
> all namespaces and tables when {{-boundaries}} is specified. However, if an 
> IOException is thrown while accessing a corrupt HFile, an unhandled 
> HFileLink, or for any other reason, we only log the exception and stop 
> crawling the HFiles, potentially reporting the wrong result.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17799) HBCK region boundaries check can return false negatives when IOExceptions are thrown

2017-03-21 Thread Esteban Gutierrez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esteban Gutierrez updated HBASE-17799:
--
Status: Patch Available  (was: Open)

Now handles IOExceptions gracefully. Also made some minor improvements, like 
reporting the files that are out of range and using HRegionInfo.containsRange() 
instead of custom code.

> HBCK region boundaries check can return false negatives when IOExceptions are 
> thrown
> 
>
> Key: HBASE-17799
> URL: https://issues.apache.org/jira/browse/HBASE-17799
> Project: HBase
>  Issue Type: Bug
>  Components: hbck
>Affects Versions: 2.0.0, 1.4.0, 1.3.1, 1.2.5
>Reporter: Esteban Gutierrez
>Assignee: Esteban Gutierrez
> Attachments: HBASE-17799.master.001.patch
>
>
> When enabled, HBaseFsck#checkRegionBoundaries will crawl all HFiles across 
> all namespaces and tables when {{-boundaries}} is specified. However, if an 
> IOException is thrown while accessing a corrupt HFile, an unhandled 
> HFileLink, or for any other reason, we only log the exception and stop 
> crawling the HFiles, potentially reporting the wrong result.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17799) HBCK region boundaries check can return false negatives when IOExceptions are thrown

2017-03-21 Thread Esteban Gutierrez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esteban Gutierrez updated HBASE-17799:
--
Attachment: HBASE-17799.master.001.patch

> HBCK region boundaries check can return false negatives when IOExceptions are 
> thrown
> 
>
> Key: HBASE-17799
> URL: https://issues.apache.org/jira/browse/HBASE-17799
> Project: HBase
>  Issue Type: Bug
>  Components: hbck
>Affects Versions: 2.0.0, 1.4.0, 1.3.1, 1.2.5
>Reporter: Esteban Gutierrez
>Assignee: Esteban Gutierrez
> Attachments: HBASE-17799.master.001.patch
>
>
> When enabled, HBaseFsck#checkRegionBoundaries will crawl all HFiles across 
> all namespaces and tables when {{-boundaries}} is specified. However, if an 
> IOException is thrown while accessing a corrupt HFile, an unhandled 
> HFileLink, or for any other reason, we only log the exception and stop 
> crawling the HFiles, potentially reporting the wrong result.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-14848) some hbase-* module don't have test/resources/log4j and test logs are empty

2017-03-21 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935375#comment-15935375
 ] 

Sean Busbey commented on HBASE-14848:
-

maybe the HBASE-14085 change removed a log4j.properties file contained in some 
test jar? I don't remember it attempting to do that.

> some hbase-* module don't have test/resources/log4j and test logs are empty
> ---
>
> Key: HBASE-14848
> URL: https://issues.apache.org/jira/browse/HBASE-14848
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.0, 1.2.0, 1.1.2, 1.3.0
>Reporter: Matteo Bertozzi
> Attachments: hbase-procedure-resources.patch
>
>
> some of the hbase sub modules (e.g. hbase-procedure, hbase-prefix-tree, ...) 
> don't have the test/resources/log4j.properties file, which results in unit 
> tests not printing any information.
> Adding the log4j.properties seems to work, but in the past the debug output 
> was visible even without the file.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-14085) Correct LICENSE and NOTICE files in artifacts

2017-03-21 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935373#comment-15935373
 ] 

Sean Busbey commented on HBASE-14085:
-

Appended-resources is used to build up documents. I haven't done eclipse stuff, 
but my understanding is that you should be able to use the maven-eclipse plugin 
to get eclipse to recognize needed changes to other plugins. (This probably 
needs a new jira, though, since this one is closed.)

> Correct LICENSE and NOTICE files in artifacts
> -
>
> Key: HBASE-14085
> URL: https://issues.apache.org/jira/browse/HBASE-14085
> Project: HBase
>  Issue Type: Task
>  Components: build
>Affects Versions: 2.0.0, 0.94.28, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2
>
> Attachments: HBASE-14085-0.98-addendum.patch, HBASE-14085.1.patch, 
> HBASE-14085.2.patch, HBASE-14085.3.patch
>
>
> +Problems:
> * checked LICENSE/NOTICE on binary
> ** binary artifact LICENSE file has not been updated to include the 
> additional license terms for contained third party dependencies
> ** binary artifact NOTICE file does not include a copyright line
> ** binary artifact NOTICE file does not appear to propagate appropriate info 
> from the NOTICE files from bundled dependencies
> * checked NOTICE on source
> ** source artifact NOTICE file does not include a copyright line
> ** source NOTICE file includes notices for third party dependencies not 
> included in the artifact
> * checked NOTICE files shipped in maven jars
> ** copyright line only says 2015 when it's very likely the contents are under 
> copyright prior to this year
> * nit: NOTICE file on jars in maven say "HBase - ${module}" rather than 
> "Apache HBase - ${module}" as required 
> refs:
> http://www.apache.org/dev/licensing-howto.html#bundled-vs-non-bundled
> http://www.apache.org/dev/licensing-howto.html#binary
> http://www.apache.org/dev/licensing-howto.html#simple



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-14161) Add hbase-spark integration tests to IT jenkins job

2017-03-21 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935360#comment-15935360
 ] 

Sean Busbey commented on HBASE-14161:
-

sounds good to me.

> Add hbase-spark integration tests to IT jenkins job
> ---
>
> Key: HBASE-14161
> URL: https://issues.apache.org/jira/browse/HBASE-14161
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: Sean Busbey
> Fix For: 2.0.0
>
>
> expand the set of ITs we run to include the new hbase-spark tests.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17739) BucketCache is inefficient/wasteful/dumb in its bucket allocations

2017-03-21 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935224#comment-15935224
 ] 

Vladimir Rodionov commented on HBASE-17739:
---

{quote}
What was ur math V?
{quote}

BucketEntry is actually 80 bytes and BucketCacheKey is 48. Besides, you need to 
add the ConcurrentHashMap Map.Entry overhead for every block in the backing 
map, which is 48 bytes more =>

For the backingMap alone we have 168 bytes of overhead.

blocksByHFile, which is a ConcurrentSkipListSet, adds another 48 bytes (for its 
Map.Entry).

Total: 216 bytes already. That is not the 500 I posted above (I miscalculated 
some overhead in IdReadWriteLock), but it is nevertheless quite substantial.

> BucketCache is inefficient/wasteful/dumb in its bucket allocations
> --
>
> Key: HBASE-17739
> URL: https://issues.apache.org/jira/browse/HBASE-17739
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>
> By default we allocate 14 buckets with sizes from 5K to 513K. If lots of heap 
> is given over to the bucketcache and, say, no allocations are made for a 
> particular bucket size, this means we have a bunch of the bucketcache that 
> just goes idle/unused.
> For example, say the heap is 100G. We'll divide it up among the sizes. If we 
> only ever do 5k records, then most of the cache will go unused while the 
> allocation for 5k objects sees churn.
> Here is an old note of [~anoop.hbase]'s' from a conversation on bucket cache 
> we had offlist that describes the issue:
> "By default we have those 14 buckets with size range of 5K to 513K.
>   All sizes will have one bucket (with size 513*4) each except the
> last size.. ie. 513K sized many buckets will be there.  If we keep on
> writing only same sized blocks, we may loose all in btw sized buckets.
> Say we write only 4K sized blocks. We will 1st fill the bucket in 5K
> size. There is only one such bucket. Once this is filled, we will try
> to grab a complete free bucket from other sizes..  But we can not take
> it from 9K... 385K sized ones as there is only ONE bucket for these
> sizes.  We will take only from 513 size.. There are many in that...
> So we will eventually take all the buckets from 513 except the last
> one.. Ya it has to keep at least one in evey size.. So we will
> loose these much size.. They are of no use."
> We should set the size type on the fly as the records come in.
> Or better, we should choose record size on the fly. Here is another comment 
> from [~anoop.hbase]:
> "The second is the biggest contributor.  Suppose instead of 4K
> sized blocks, the user has 2 K sized blocks..  When we write a block to 
> bucket slot, we will reserve size equal to the allocated size for that block.
> So when we write 2K sized blocks (may be actual size a bit more than
> 2K ) we will take 5K with each of the block.  So u can see that we are
> loosing ~3K with every block. Means we are loosing more than half."
> He goes on: "If am 100% sure that all my table having 2K HFile block size, I 
> need to give this config a value 3 * 1024 (Exact 2 K if I give there may be
> again problem! That is another story we need to see how we can give
> more guarantee for the block size restriction HBASE-15248)..  So here also 
> ~1K loose for every 2K.. So some thing like a 30% loose !!! :-(“"
> So, we should figure the record sizes ourselves on the fly.
> Anything less has us wasting loads of cache space, never mind the 
> inefficiencies we lose to how we serialize base types to the cache.
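> As a worked example of the second point above: 2K blocks parked in 5K slots 
> waste 3K per slot, so a 100G bucketcache would hold only ~40G of actual block 
> data.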



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17813) backport HBASE-16983 to branch-1.3

2017-03-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935221#comment-15935221
 ] 

Hudson commented on HBASE-17813:


SUCCESS: Integrated in Jenkins build HBase-1.3-JDK7 #132 (See 
[https://builds.apache.org/job/HBase-1.3-JDK7/132/])
HBASE-17813 backport HBASE-16983 to branch-1.3 (liyu: rev 
ab335bf9d3d82100a875c796eea8e9532b9d2d7b)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFileInfo.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestMultiTableSnapshotInputFormat.java


> backport HBASE-16983 to branch-1.3
> --
>
> Key: HBASE-17813
> URL: https://issues.apache.org/jira/browse/HBASE-17813
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Yu Li
>Assignee: Yu Li
>  Labels: unit-test
> Fix For: 1.3.1
>
> Attachments: HBASE-17813.branch-1.3.patch, 
> HBASE-17813.branch-1.3.patch
>
>
> From [recent UT 
> report|https://builds.apache.org/job/PreCommit-HBASE-Build/6170/testReport/] 
> of branch-1.3, we could see the same issue "Unable to create region 
> directory..." as described by HBASE-16983, so we should backport the JIRA to 
> fix this intermittent failure and avoid it blocking new commits.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16983) TestMultiTableSnapshotInputFormat failing with Unable to create region directory: /tmp/...

2017-03-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935222#comment-15935222
 ] 

Hudson commented on HBASE-16983:


SUCCESS: Integrated in Jenkins build HBase-1.3-JDK7 #132 (See 
[https://builds.apache.org/job/HBase-1.3-JDK7/132/])
HBASE-17813 backport HBASE-16983 to branch-1.3 (liyu: rev 
ab335bf9d3d82100a875c796eea8e9532b9d2d7b)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFileInfo.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestMultiTableSnapshotInputFormat.java


> TestMultiTableSnapshotInputFormat failing with  Unable to create region 
> directory: /tmp/...
> ---
>
> Key: HBASE-16983
> URL: https://issues.apache.org/jira/browse/HBASE-16983
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: stack
>Assignee: stack
>Priority: Minor
> Fix For: 2.0.0, 1.4.0
>
> Attachments: 16983.txt, HBASE-16983-ADDENDUM.patch, 
> HBASE-16983-ADDENDUM.patch, HBASE-16983-ADDENDUM.patch, 
> HBASE-16983-branch-1-ADDENDUM.patch, HBASE-16983-branch-1-ADDENDUM.patch
>
>
> The test is using /tmp. We failed to create a dir in /tmp in a few tests from 
> this suite just now:
> https://builds.apache.org/job/PreCommit-HBASE-Build/4253/testReport/org.apache.hadoop.hbase.mapred/TestMultiTableSnapshotInputFormat/testScanOBBToOPP/
> {code}
> Caused by: java.io.IOException: Unable to create region directory: 
> /tmp/scantest2_snapshot__953e2b2d-22aa-4c6a-a46a-272619f5436e/data/default/scantest2/5629158a49e010e21ac0bd16453b2d8c
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.createRegionOnFileSystem(HRegionFileSystem.java:896)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:6520)
>   at 
> org.apache.hadoop.hbase.util.ModifyRegionUtils.createRegion(ModifyRegionUtils.java:205)
>   at 
> org.apache.hadoop.hbase.util.ModifyRegionUtils$1.call(ModifyRegionUtils.java:173)
>   at 
> org.apache.hadoop.hbase.util.ModifyRegionUtils$1.call(ModifyRegionUtils.java:170)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> ...
> {code}
> No more detail than this. Let me change it so it creates its files in the 
> test dir that it for sure owns/can write to.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17798) RpcServer.Listener.Reader can abort due to CancelledKeyException

2017-03-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935215#comment-15935215
 ] 

Hudson commented on HBASE-17798:


SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #2714 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/2714/])
HBASE-17798 RpcServer.Listener.Reader can abort due to (tedyu: rev 
1cfd22bf43c9b64afae35d9bf16f764d0da80cab)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/SimpleRpcServer.java


> RpcServer.Listener.Reader can abort due to CancelledKeyException
> 
>
> Key: HBASE-17798
> URL: https://issues.apache.org/jira/browse/HBASE-17798
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.3.0, 1.2.4, 0.98.24
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
> Fix For: 1.4.0, 2.0
>
> Attachments: 17798-master-v2.patch, connections.png, 
> HBASE-17798-0.98-v1.patch, HBASE-17798-0.98-v2.patch, 
> HBASE-17798-branch-1-v1.patch, HBASE-17798-branch-1-v2.patch, 
> HBASE-17798-master-v1.patch, HBASE-17798-master-v2.patch
>
>
> In our production cluster (0.98), some requests could not be accepted 
> because RpcServer.Listener.Reader threads had aborted.
> getReader() returns the next reader to deal with a request.
> The implementation of getReader() is as below:
> {code:title=RpcServer.java|borderStyle=solid}
> // The method that will return the next reader to work with
> // Simplistic implementation of round robin for now
> Reader getReader() {
>   currentReader = (currentReader + 1) % readers.length;
>   return readers[currentReader];
> }
> {code}
> If one of the readers aborts, then requests that fall on that reader will 
> never be dealt with.
> Why does RpcServer.Listener.Reader abort? We added debug logging to find out.
> After a while, we got the following exception:
> {code}
> 2017-03-10 08:05:13,247 ERROR [RpcServer.reader=3,port=60020] ipc.RpcServer: 
> RpcServer.listener,port=60020: unexpectedly error in Reader(Throwable)
> java.nio.channels.CancelledKeyException
> at sun.nio.ch.SelectionKeyImpl.ensureValid(SelectionKeyImpl.java:73)
> at sun.nio.ch.SelectionKeyImpl.readyOps(SelectionKeyImpl.java:87)
> at java.nio.channels.SelectionKey.isReadable(SelectionKey.java:289)
> at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:592)
> at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:566)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> So, when dealing with the request in the reader, we should handle 
> CancelledKeyException.
> --
> Versions 1.x and 2.0 will log and return when dealing with the 
> InterruptedException in Reader#doRunLoop after HBASE-10521. It will lead to 
> the same problem.
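
For illustration, here is a minimal sketch of the kind of hardening described 
above. It is a sketch only, under assumptions: the loop shape and the names 
readSelector, doRead, running and LOG are placeholders, not the exact HBase 
patch.

{code:title=Reader loop sketch|borderStyle=solid}
// Sketch: keep the reader thread alive when a selection key is cancelled
// by another thread between select() and the readiness check.
private void doRunLoop() {
  while (running) {
    try {
      readSelector.select();
      Iterator<SelectionKey> iter = readSelector.selectedKeys().iterator();
      while (iter.hasNext()) {
        SelectionKey key = iter.next();
        iter.remove();
        try {
          if (key.isValid() && key.isReadable()) {
            doRead(key);
          }
        } catch (CancelledKeyException cke) {
          // The channel was closed concurrently; skip this key rather
          // than letting the exception abort the whole reader.
          LOG.debug("Ignoring CancelledKeyException for " + key, cke);
        }
      }
    } catch (IOException ioe) {
      LOG.error("Error in Reader select loop", ioe);
    }
  }
}
{code}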



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17765) Reviving the merge possibility in the CompactingMemStore

2017-03-21 Thread Edward Bortnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935203#comment-15935203
 ] 

Edward Bortnikov commented on HBASE-17765:
--

Are we good to commit this patch? 

> Reviving the merge possibility in the CompactingMemStore
> 
>
> Key: HBASE-17765
> URL: https://issues.apache.org/jira/browse/HBASE-17765
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Fix For: 2.0.0
>
> Attachments: HBASE-17765-V01.patch, HBASE-17765-V02.patch
>
>
> According to the new performance results presented in HBASE-16417, we see 
> that the 90th-percentile read latency of the BASIC policy is too high, due 
> to the need to traverse too many segments in the pipeline. In this JIRA we 
> correct the bug in the merge sizing calculations and make the pipeline size 
> threshold a configurable parameter.
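
A rough sketch of what a configurable threshold can look like (the property 
key below is a guess for illustration, not necessarily the one the patch 
introduces):

{code:title=Pipeline threshold sketch|borderStyle=solid}
// Sketch only; the property key is hypothetical.
int pipelineSegmentsLimit = conf.getInt(
    "hbase.hregion.compacting.pipeline.segments.limit", 1);
// Merge the pipeline's flat segments once it grows past the threshold,
// so reads traverse a bounded number of segments.
if (pipeline.size() > pipelineSegmentsLimit) {
  startMerge();
}
{code}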



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-03-21 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-17623:
---
Status: Patch Available  (was: Open)

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0, 1.4.0
>
> Attachments: after(snappy_hfilesize=5.04GB).png, 
> after(snappy_hfilesize=755MB).png, before(snappy_hfilesize=5.04GB).png, 
> before(snappy_hfilesize=755MB).png, GC measurement.xlsx, 
> HBASE-17623.branch-1.v0.patch, HBASE-17623.branch-1.v1.patch, 
> HBASE-17623.branch-1.v2.patch, HBASE-17623.branch-1.v2.patch, 
> HBASE-17623.v0.patch, HBASE-17623.v1.patch, HBASE-17623.v1.patch, 
> HBASE-17623.v2.patch, HBASE-17623.v3.patch, memory allocation measurement.xlsx
>
>
> There are three improvements.
> # The onDiskBlockBytesWithHeader should maintain a bytes array which can be 
> reused when building the hfile.
> # The onDiskBlockBytesWithHeader is copied to a new bytes array only when we 
> need to cache the block.
> # If no block needs to be cached, the uncompressedBlockBytesWithHeader will 
> never be created.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
> this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
> userDataStream,
> baosInMemory.getBuffer(), blockType);
> blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when 
> cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled 
> out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for 
> cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) 
> {
> onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
> onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the 
> block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>   onDiskBlockBytesWithHeader.length,
>   fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is 
> unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- 
> IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
> putHeader(uncompressedBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
> onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>   onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>   onDiskChecksum, 0, fileContext.getChecksumType(), 
> fileContext.getBytesPerChecksum());
> }{code}
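
To make the reuse idea concrete, a small self-contained sketch follows. It is 
illustrative only; ReusableBlockBuffer is a made-up name, not the HBASE-17623 
code. One growable buffer is kept per writer and overwritten for each block, 
and bytes are copied out only when a block is handed to the cache.

{code:title=Buffer reuse sketch|borderStyle=solid}
import java.util.Arrays;

/** Illustrative only: a per-writer buffer that is reused across blocks. */
final class ReusableBlockBuffer {
  private byte[] buf = new byte[64 * 1024];
  private int len;

  /** Start a new block; the backing array is kept and overwritten. */
  void reset() { len = 0; }

  void write(byte[] src, int off, int n) {
    if (len + n > buf.length) {
      buf = Arrays.copyOf(buf, Math.max(buf.length * 2, len + n));
    }
    System.arraycopy(src, off, buf, len, n);
    len += n;
  }

  /** Zero-copy view for writing the block to disk. */
  byte[] array() { return buf; }
  int length() { return len; }

  /** Copy only when the block must be handed to the block cache. */
  byte[] copyForCaching() { return Arrays.copyOf(buf, len); }
}
{code}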



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-03-21 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-17623:
---
Attachment: HBASE-17623.v3.patch

v3 addresses [~anoop.hbase]'s comment.

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0, 1.4.0
>
> Attachments: after(snappy_hfilesize=5.04GB).png, 
> after(snappy_hfilesize=755MB).png, before(snappy_hfilesize=5.04GB).png, 
> before(snappy_hfilesize=755MB).png, GC measurement.xlsx, 
> HBASE-17623.branch-1.v0.patch, HBASE-17623.branch-1.v1.patch, 
> HBASE-17623.branch-1.v2.patch, HBASE-17623.branch-1.v2.patch, 
> HBASE-17623.v0.patch, HBASE-17623.v1.patch, HBASE-17623.v1.patch, 
> HBASE-17623.v2.patch, HBASE-17623.v3.patch, memory allocation measurement.xlsx
>
>
> There are three improvements.
> # The onDiskBlockBytesWithHeader should maintain a bytes array which can be 
> reused when building the hfile.
> # The onDiskBlockBytesWithHeader is copied to a new bytes array only when we 
> need to cache the block.
> # If no block needs to be cached, the uncompressedBlockBytesWithHeader will 
> never be created.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
> this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
> userDataStream,
> baosInMemory.getBuffer(), blockType);
> blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when 
> cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled 
> out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for 
> cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) 
> {
> onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
> onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the 
> block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>   onDiskBlockBytesWithHeader.length,
>   fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is 
> unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- 
> IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
> putHeader(uncompressedBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
> onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>   onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>   onDiskChecksum, 0, fileContext.getChecksumType(), 
> fileContext.getBytesPerChecksum());
> }{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-03-21 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-17623:
---
Status: Open  (was: Patch Available)

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0, 1.4.0
>
> Attachments: after(snappy_hfilesize=5.04GB).png, 
> after(snappy_hfilesize=755MB).png, before(snappy_hfilesize=5.04GB).png, 
> before(snappy_hfilesize=755MB).png, GC measurement.xlsx, 
> HBASE-17623.branch-1.v0.patch, HBASE-17623.branch-1.v1.patch, 
> HBASE-17623.branch-1.v2.patch, HBASE-17623.branch-1.v2.patch, 
> HBASE-17623.v0.patch, HBASE-17623.v1.patch, HBASE-17623.v1.patch, 
> HBASE-17623.v2.patch, memory allocation measurement.xlsx
>
>
> There are three improvements.
> # The onDiskBlockBytesWithHeader should maintain a bytes array which can be 
> reused when building the hfile.
> # The onDiskBlockBytesWithHeader is copied to a new bytes array only when we 
> need to cache the block.
> # If no block needs to be cached, the uncompressedBlockBytesWithHeader will 
> never be created.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
> this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
> userDataStream,
> baosInMemory.getBuffer(), blockType);
> blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when 
> cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled 
> out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for 
> cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) 
> {
> onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
> onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the 
> block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>   onDiskBlockBytesWithHeader.length,
>   fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is 
> unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- 
> IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
> putHeader(uncompressedBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
> onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>   onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>   onDiskChecksum, 0, fileContext.getChecksumType(), 
> fileContext.getBytesPerChecksum());
> }{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17815) Remove the unused field in PrefixTreeSeeker

2017-03-21 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935153#comment-15935153
 ] 

Chia-Ping Tsai commented on HBASE-17815:


It is a trivial patch... Will commit it tomorrow if there are no objections.

> Remove the unused field in PrefixTreeSeeker
> ---
>
> Key: HBASE-17815
> URL: https://issues.apache.org/jira/browse/HBASE-17815
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Trivial
> Fix For: 2.0.0
>
> Attachments: HBASE-17815.v0.patch
>
>
> The "block" is never used due to HBASE-12298. We should remove it to stop the 
> noise from FindBugs. (see HBASE-17664 and HBASE-17809)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-03-21 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935151#comment-15935151
 ] 

Chia-Ping Tsai commented on HBASE-17623:


bq. In HFileBlock I can see the above javadoc. But we don't use the API 
getBlockForCaching() for caching a read block, right?
The javadoc "is used when the block data has already been read" was written by 
HBASE-15366, and the "uncompressed" part was broken by HBASE-11331. I'd think 
the javadoc is out-of-date. I will correct it.

bq. Did some searching in the code and it seems we don't
Done. The block generated by getBlockForCaching() is cached only when writing 
blocks.

bq. Around this method, add a note that it should be used only while writing 
blocks and caching, and mention the copy we do
Patch is coming soon.

Thanks for the comment, [~anoop.hbase].


> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0, 1.4.0
>
> Attachments: after(snappy_hfilesize=5.04GB).png, 
> after(snappy_hfilesize=755MB).png, before(snappy_hfilesize=5.04GB).png, 
> before(snappy_hfilesize=755MB).png, GC measurement.xlsx, 
> HBASE-17623.branch-1.v0.patch, HBASE-17623.branch-1.v1.patch, 
> HBASE-17623.branch-1.v2.patch, HBASE-17623.branch-1.v2.patch, 
> HBASE-17623.v0.patch, HBASE-17623.v1.patch, HBASE-17623.v1.patch, 
> HBASE-17623.v2.patch, memory allocation measurement.xlsx
>
>
> There are three improvements.
> # The onDiskBlockBytesWithHeader should maintain a bytes array which can be 
> reused when building the hfile.
> # The onDiskBlockBytesWithHeader is copied to a new bytes array only when we 
> need to cache the block.
> # If no block needs to be cached, the uncompressedBlockBytesWithHeader will 
> never be created.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
> this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
> userDataStream,
> baosInMemory.getBuffer(), blockType);
> blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when 
> cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled 
> out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for 
> cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) 
> {
> onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
> onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the 
> block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>   onDiskBlockBytesWithHeader.length,
>   fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is 
> unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- 
> IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
> putHeader(uncompressedBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
> onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>   onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>   onDiskChecksum, 0, fileContext.getChecksumType(), 
> fileContext.getBytesPerChecksum());
> }{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17707) New More Accurate Table Skew cost function/generator

2017-03-21 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935129#comment-15935129
 ] 

Enis Soztutar commented on HBASE-17707:
---

bq. and this behavior is even unit tested
Sorry, I do not see it in the v11 or v9 patch? Maybe I'm missing something? 
Most cost functions use the {{scale()}} function for doing a linear 
normalization to the 0-1 range. I did not see that used in your patch, which 
is why I assumed it is not normalized. However, on a second look at the patch, 
I think you are doing the normalization here:
{code}
+scaledSkewPerTable[table] = pathologicalNumMoves == 0 ? 0 : (double) 
numMovesPerTable[table] / pathologicalNumMoves;
{code}
Is it a hard guarantee that {{pathologicalNumMoves}} >= {{numMovesPerTable}} 
always holds, so that the scaled value stays within 0-1? 

Thanks for debugging the tests. I think your suggestion for a min replica cost 
is fine; however, I am still curious to know whether we are affecting other 
behavior. Maybe the typical costs from the old table skew function versus the 
new one are wildly different, and that causes the tests to fail? Did you 
attach the patch? 

We should still get these nice improvements to the table skew since it is one 
of the frequent problems with the current SLB today. Do you mind making the 
changes suggested above? 


> New More Accurate Table Skew cost function/generator
> 
>
> Key: HBASE-17707
> URL: https://issues.apache.org/jira/browse/HBASE-17707
> Project: HBase
>  Issue Type: New Feature
>  Components: Balancer
>Affects Versions: 1.2.0
> Environment: CentOS Derivative with a derivative of the 3.18.43 
> kernel. HBase on CDH5.9.0 with some patches. HDFS CDH 5.9.0 with no patches.
>Reporter: Kahlil Oppenheimer
>Assignee: Kahlil Oppenheimer
>Priority: Minor
> Fix For: 2.0
>
> Attachments: HBASE-17707-00.patch, HBASE-17707-01.patch, 
> HBASE-17707-02.patch, HBASE-17707-03.patch, HBASE-17707-04.patch, 
> HBASE-17707-05.patch, HBASE-17707-06.patch, HBASE-17707-07.patch, 
> HBASE-17707-08.patch, HBASE-17707-09.patch, HBASE-17707-11.patch, 
> HBASE-17707-11.patch, test-balancer2-13617.out
>
>
> This patch includes new version of the TableSkewCostFunction and a new 
> TableSkewCandidateGenerator.
> The new TableSkewCostFunction computes table skew by counting the minimal 
> number of region moves required for a given table to perfectly balance the 
> table across the cluster (i.e. as if the regions from that table had been 
> round-robin-ed across the cluster). This number of moves is computed for each 
> table, then normalized to a score between 0-1 by dividing by the number of 
> moves required in the absolute worst case (i.e. the entire table is stored on 
> one server), and stored in an array. The cost function then takes a weighted 
> average of the average and maximum value across all tables. The weights in 
> this average are configurable to allow for certain users to more strongly 
> penalize situations where one table is skewed versus where every table is a 
> little bit skewed. To better spread this value more evenly across the range 
> 0-1, we take the square root of the weighted average to get the final value.
> The new TableSkewCandidateGenerator generates region moves/swaps to optimize 
> the above TableSkewCostFunction. It first simply tries to move regions until 
> each server has the right number of regions, then it swaps regions around 
> such that each region swap improves table skew across the cluster.
> We tested the cost function and generator in our production clusters with 
> 100s of TBs of data and 100s of tables across dozens of servers and found 
> both to be very performant and accurate.
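
As a reading aid, a compact sketch of the cost computation described above 
(names and signature are illustrative, not the patch's API):

{code:title=Table skew cost sketch|borderStyle=solid}
/**
 * Sketch. movesPerTable[t] is the minimal number of region moves needed to
 * perfectly balance table t; pathologicalMoves[t] is the number of moves
 * needed in the worst case (the whole table on one server).
 */
static double tableSkewCost(int[] movesPerTable, int[] pathologicalMoves,
    double maxWeight, double avgWeight) {
  double sum = 0.0, max = 0.0;
  for (int t = 0; t < movesPerTable.length; t++) {
    double scaled = pathologicalMoves[t] == 0
        ? 0.0 : (double) movesPerTable[t] / pathologicalMoves[t];
    sum += scaled;
    max = Math.max(max, scaled);
  }
  double avg = movesPerTable.length == 0 ? 0.0 : sum / movesPerTable.length;
  // Weighted average of the mean and the worst table, then a square root
  // to spread the result more evenly across the 0-1 range.
  double weighted = (maxWeight * max + avgWeight * avg)
      / (maxWeight + avgWeight);
  return Math.sqrt(weighted);
}
{code}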



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17815) Remove the unused field in PrefixTreeSeeker

2017-03-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15935020#comment-15935020
 ] 

Hadoop QA commented on HBASE-17815:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
3s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 10s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
10s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 19s 
{color} | {color:red} hbase-prefix-tree in master has 1 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 2s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
26s {color} | {color:green} hbase-prefix-tree generated 0 new + 0 unchanged - 1 
fixed = 0 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 20s 
{color} | {color:green} hbase-prefix-tree in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
6s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 33m 12s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859767/HBASE-17815.v0.patch |
| JIRA Issue | HBASE-17815 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux a5c3bd6ad0c4 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 1cfd22b |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6183/artifact/patchprocess/branch-findbugs-hbase-prefix-tree-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6183/testReport/ |
| modules | C: hbase-prefix-tree U: hbase-prefix-tree |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6183/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Remove the unused field in PrefixTreeSeeker
> ---
>
> 

[jira] [Commented] (HBASE-17757) Unify blocksize after encoding to decrease memory fragment

2017-03-21 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15934988#comment-15934988
 ] 

Anoop Sam John commented on HBASE-17757:


In this patch you make the default ratio 1, right? I am OK with this approach. 
+1 on patch. Yes, this provides a way to better control the block size.
It has to be well explained. Mind adding a release note? It should also be 
explained in the book.


> Unify blocksize after encoding to decrease memory fragment 
> ---
>
> Key: HBASE-17757
> URL: https://issues.apache.org/jira/browse/HBASE-17757
> Project: HBase
>  Issue Type: New Feature
>Reporter: Allan Yang
>Assignee: Allan Yang
> Attachments: HBASE-17757.patch, HBASE-17757v2.patch
>
>
> Usually, we store encoded (uncompressed) blocks in the BlockCache/BucketCache. 
> Though we have set the blocksize, after encoding the blocksize varies. Varied 
> blocksizes cause memory fragmentation, which finally results in more full 
> GCs. In order to relieve the memory fragmentation, this issue adjusts the 
> encoded block to a unified size.
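
As a rough sketch of the idea (assumption-laden; the rounding scheme is 
illustrative, not the patch's exact logic), an encoded block can be padded up 
to the next multiple of the configured block size so cached blocks fall into 
a small number of size classes:

{code:title=Block size unification sketch|borderStyle=solid}
// Sketch only. Rounding encoded lengths up to a common multiple means the
// cache sees a small set of block sizes, which reduces fragmentation.
static int unifiedSize(int encodedLen, int targetBlockSize) {
  int buckets = (encodedLen + targetBlockSize - 1) / targetBlockSize;
  return buckets * targetBlockSize;
}
{code}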



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17809) cleanup unused class

2017-03-21 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-17809:
---
Status: Patch Available  (was: Open)

> cleanup unused class
> 
>
> Key: HBASE-17809
> URL: https://issues.apache.org/jira/browse/HBASE-17809
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-17809.v0.patch, HBASE-17809.v0.patch, 
> HBASE-17809.v0.patch
>
>
> Inspired by HBASE-17805. We have left a lot of orphan classes due to a bunch 
> of commits. We shall remove them.
> ||class||last meeting||why||line count||
> |LruHashMap|HBASE-1822|get rid of|1100|
> |ScannerTimeoutException|HBASE-16266|get rid of|45|
> |SortedCopyOnWriteSet|HBASE-12748|get rid of|178|
> |TestSortedCopyOnWriteSet|HBASE-12748|get rid of|107|
> |DelegatingRetryingCallable|HBASE-9049|create but never used|65|
> |LockTimeoutException|HBASE-16786|get rid of|44|
> |OperationConflictException|HBASE-9899|get rid of|50|
> |InvalidQuotaSettingsException|HBASE-11598|create but never used|33|
> |ShareableMemory|HBASE-15735|get rid of|40|
> |BoundedArrayQueue|HBASE-14860|get rid of|82|
> |TestBoundedArrayQueue|HBASE-14860|get rid of|61|
> |ChecksumFactory|HBASE-11927|get rid of|100|
> |TokenDepthComparator|HBASE-4676|create but never used|65|
> |RegionMergeTransaction|HBASE-17470|get rid of|249|
> |MetaUtils|HBASE-1822|get rid of|156|



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17809) cleanup unused class

2017-03-21 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-17809:
---
Status: Open  (was: Patch Available)

Retry.
The Findbugs warning is tracked by HBASE-17815.

> cleanup unused class
> 
>
> Key: HBASE-17809
> URL: https://issues.apache.org/jira/browse/HBASE-17809
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-17809.v0.patch, HBASE-17809.v0.patch, 
> HBASE-17809.v0.patch
>
>
> Inspired by HBASE-17805. We have left a lot of orphan classes due to a bunch 
> of commits. We shall remove them.
> ||class||last meeting||why||line count||
> |LruHashMap|HBASE-1822|get rid of|1100|
> |ScannerTimeoutException|HBASE-16266|get rid of|45|
> |SortedCopyOnWriteSet|HBASE-12748|get rid of|178|
> |TestSortedCopyOnWriteSet|HBASE-12748|get rid of|107|
> |DelegatingRetryingCallable|HBASE-9049|create but never used|65|
> |LockTimeoutException|HBASE-16786|get rid of|44|
> |OperationConflictException|HBASE-9899|get rid of|50|
> |InvalidQuotaSettingsException|HBASE-11598|create but never used|33|
> |ShareableMemory|HBASE-15735|get rid of|40|
> |BoundedArrayQueue|HBASE-14860|get rid of|82|
> |TestBoundedArrayQueue|HBASE-14860|get rid of|61|
> |ChecksumFactory|HBASE-11927|get rid of|100|
> |TokenDepthComparator|HBASE-4676|create but never used|65|
> |RegionMergeTransaction|HBASE-17470|get rid of|249|
> |MetaUtils|HBASE-1822|get rid of|156|



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17809) cleanup unused class

2017-03-21 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-17809:
---
Attachment: HBASE-17809.v0.patch

> cleanup unused class
> 
>
> Key: HBASE-17809
> URL: https://issues.apache.org/jira/browse/HBASE-17809
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-17809.v0.patch, HBASE-17809.v0.patch, 
> HBASE-17809.v0.patch
>
>
> Inspired by HBASE-17805. We have left a lot of orphan classes due to a bunch 
> of commits. We shall remove them.
> ||class||last meeting||why||line count||
> |LruHashMap|HBASE-1822|get rid of|1100|
> |ScannerTimeoutException|HBASE-16266|get rid of|45|
> |SortedCopyOnWriteSet|HBASE-12748|get rid of|178|
> |TestSortedCopyOnWriteSet|HBASE-12748|get rid of|107|
> |DelegatingRetryingCallable|HBASE-9049|create but never used|65|
> |LockTimeoutException|HBASE-16786|get rid of|44|
> |OperationConflictException|HBASE-9899|get rid of|50|
> |InvalidQuotaSettingsException|HBASE-11598|create but never used|33|
> |ShareableMemory|HBASE-15735|get rid of|40|
> |BoundedArrayQueue|HBASE-14860|get rid of|82|
> |TestBoundedArrayQueue|HBASE-14860|get rid of|61|
> |ChecksumFactory|HBASE-11927|get rid of|100|
> |TokenDepthComparator|HBASE-4676|create but never used|65|
> |RegionMergeTransaction|HBASE-17470|get rid of|249|
> |MetaUtils|HBASE-1822|get rid of|156|



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17815) Remove the unused field in PrefixTreeSeeker

2017-03-21 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-17815:
---
Status: Patch Available  (was: Open)

> Remove the unused field in PrefixTreeSeeker
> ---
>
> Key: HBASE-17815
> URL: https://issues.apache.org/jira/browse/HBASE-17815
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Trivial
> Fix For: 2.0.0
>
> Attachments: HBASE-17815.v0.patch
>
>
> The "block" is never used due to HBASE-12298. We should remove it to stop the 
> noise from FindBugs. (see HBASE-17664 and HBASE-17809)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17815) Remove the unused field in PrefixTreeSeeker

2017-03-21 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-17815:
---
Attachment: HBASE-17815.v0.patch

> Remove the unused field in PrefixTreeSeeker
> ---
>
> Key: HBASE-17815
> URL: https://issues.apache.org/jira/browse/HBASE-17815
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Trivial
> Fix For: 2.0.0
>
> Attachments: HBASE-17815.v0.patch
>
>
> The "block" is never used due to HBASE-12298. We should remove it to stop the 
> noise from FindBugs. (see HBASE-17664 and HBASE-17809)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17815) Remove the unused field in PrefixTreeSeeker

2017-03-21 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-17815:
---
Affects Version/s: 2.0.0

> Remove the unused field in PrefixTreeSeeker
> ---
>
> Key: HBASE-17815
> URL: https://issues.apache.org/jira/browse/HBASE-17815
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Trivial
> Fix For: 2.0.0
>
> Attachments: HBASE-17815.v0.patch
>
>
> The "block" is never used due to HBASE-12298. We should remove it to stop the 
> noise from FindBugs. (see HBASE-17664 and HBASE-17809)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17815) Remove the unused field in PrefixTreeSeeker

2017-03-21 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-17815:
---
Fix Version/s: 2.0.0

> Remove the unused field in PrefixTreeSeeker
> ---
>
> Key: HBASE-17815
> URL: https://issues.apache.org/jira/browse/HBASE-17815
> Project: HBase
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Trivial
> Fix For: 2.0.0
>
>
> The "block" is never used due to HBASE-12298. We should remove it to stop the 
> noise from FindBugs. (see HBASE-17664 and HBASE-17809)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-17815) Remove the unused field in PrefixTreeSeeker

2017-03-21 Thread Chia-Ping Tsai (JIRA)
Chia-Ping Tsai created HBASE-17815:
--

 Summary: Remove the unused field in PrefixTreeSeeker
 Key: HBASE-17815
 URL: https://issues.apache.org/jira/browse/HBASE-17815
 Project: HBase
  Issue Type: Improvement
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai
Priority: Trivial


The "block" is never used due to HBASE-12298. We should remove it to stop the 
noise from FindBugs. (see HBASE-17664 and HBASE-17809)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17623) Reuse the bytes array when building the hfile block

2017-03-21 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15934954#comment-15934954
 ] 

Anoop Sam John commented on HBASE-17623:


Patch looks good.
{code}
   * Creates a new {@link HFile} block from the given fields. This constructor
   * is used when the block data has already been read and uncompressed,
   * and is sitting in a byte buffer and we want to stuff the block into cache.
   * See {@link Writer#getBlockForCaching(CacheConfig)}.
   *
{code}
In HFileBlock I can see the above javadoc. But we don't use the API 
getBlockForCaching() for caching a read block, right? Did some searching in 
the code and it seems we don't. Please double check, and please correct the 
javadoc. Around this method, add a note that it should be used only while 
writing blocks and caching, and mention the copy we do. Anyway, the name says 
it; just to be sure for future reference.

> Reuse the bytes array when building the hfile block
> ---
>
> Key: HBASE-17623
> URL: https://issues.apache.org/jira/browse/HBASE-17623
> Project: HBase
>  Issue Type: Improvement
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0, 1.4.0
>
> Attachments: after(snappy_hfilesize=5.04GB).png, 
> after(snappy_hfilesize=755MB).png, before(snappy_hfilesize=5.04GB).png, 
> before(snappy_hfilesize=755MB).png, GC measurement.xlsx, 
> HBASE-17623.branch-1.v0.patch, HBASE-17623.branch-1.v1.patch, 
> HBASE-17623.branch-1.v2.patch, HBASE-17623.branch-1.v2.patch, 
> HBASE-17623.v0.patch, HBASE-17623.v1.patch, HBASE-17623.v1.patch, 
> HBASE-17623.v2.patch, memory allocation measurement.xlsx
>
>
> There are three improvements.
> # The onDiskBlockBytesWithHeader should maintain a bytes array which can be 
> reused when building the hfile.
> # The onDiskBlockBytesWithHeader is copied to a new bytes array only when we 
> need to cache the block.
> # If no block needs to be cached, the uncompressedBlockBytesWithHeader will 
> never be created.
> {code:title=HFileBlock.java|borderStyle=solid}
> private void finishBlock() throws IOException {
>   if (blockType == BlockType.DATA) {
> this.dataBlockEncoder.endBlockEncoding(dataBlockEncodingCtx, 
> userDataStream,
> baosInMemory.getBuffer(), blockType);
> blockType = dataBlockEncodingCtx.getBlockType();
>   }
>   userDataStream.flush();
>   // This does an array copy, so it is safe to cache this byte array when 
> cache-on-write.
>   // Header is still the empty, 'dummy' header that is yet to be filled 
> out.
>   uncompressedBlockBytesWithHeader = baosInMemory.toByteArray();
>   prevOffset = prevOffsetByType[blockType.getId()];
>   // We need to set state before we can package the block up for 
> cache-on-write. In a way, the
>   // block is ready, but not yet encoded or compressed.
>   state = State.BLOCK_READY;
>   if (blockType == BlockType.DATA || blockType == BlockType.ENCODED_DATA) 
> {
> onDiskBlockBytesWithHeader = dataBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   } else {
> onDiskBlockBytesWithHeader = defaultBlockEncodingCtx.
> compressAndEncrypt(uncompressedBlockBytesWithHeader);
>   }
>   // Calculate how many bytes we need for checksum on the tail of the 
> block.
>   int numBytes = (int) ChecksumUtil.numBytes(
>   onDiskBlockBytesWithHeader.length,
>   fileContext.getBytesPerChecksum());
>   // Put the header for the on disk bytes; header currently is 
> unfilled-out
>   putHeader(onDiskBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   // Set the header for the uncompressed bytes (for cache-on-write) -- 
> IFF different from
>   // onDiskBlockBytesWithHeader array.
>   if (onDiskBlockBytesWithHeader != uncompressedBlockBytesWithHeader) {
> putHeader(uncompressedBlockBytesWithHeader, 0,
>   onDiskBlockBytesWithHeader.length + numBytes,
>   uncompressedBlockBytesWithHeader.length, 
> onDiskBlockBytesWithHeader.length);
>   }
>   if (onDiskChecksum.length != numBytes) {
> onDiskChecksum = new byte[numBytes];
>   }
>   ChecksumUtil.generateChecksums(
>   onDiskBlockBytesWithHeader, 0, onDiskBlockBytesWithHeader.length,
>   onDiskChecksum, 0, fileContext.getChecksumType(), 
> fileContext.getBytesPerChecksum());
> }{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17798) RpcServer.Listener.Reader can abort due to CancelledKeyException

2017-03-21 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-17798:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.0
   1.4.0
   Status: Resolved  (was: Patch Available)

Thanks for the patch, Guangxu.

branch-1.3 is in quiet period. Resolving for now.

> RpcServer.Listener.Reader can abort due to CancelledKeyException
> 
>
> Key: HBASE-17798
> URL: https://issues.apache.org/jira/browse/HBASE-17798
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.3.0, 1.2.4, 0.98.24
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
> Fix For: 1.4.0, 2.0
>
> Attachments: 17798-master-v2.patch, connections.png, 
> HBASE-17798-0.98-v1.patch, HBASE-17798-0.98-v2.patch, 
> HBASE-17798-branch-1-v1.patch, HBASE-17798-branch-1-v2.patch, 
> HBASE-17798-master-v1.patch, HBASE-17798-master-v2.patch
>
>
> In our production cluster (0.98), some of the requests could not be accepted 
> because RpcServer.Listener.Reader threads had aborted.
> getReader() returns the next reader to deal with a request.
> The implementation of getReader() is as below:
> {code:title=RpcServer.java|borderStyle=solid}
> // The method that will return the next reader to work with
> // Simplistic implementation of round robin for now
> Reader getReader() {
>   currentReader = (currentReader + 1) % readers.length;
>   return readers[currentReader];
> }
> {code}
> If one of the readers aborts, the requests that fall on that reader will 
> never be dealt with.
> Why does RpcServer.Listener.Reader abort? We added a debug log to find out.
> After a while, we got the following exception:
> {code}
> 2017-03-10 08:05:13,247 ERROR [RpcServer.reader=3,port=60020] ipc.RpcServer: 
> RpcServer.listener,port=60020: unexpectedly error in Reader(Throwable)
> java.nio.channels.CancelledKeyException
> at sun.nio.ch.SelectionKeyImpl.ensureValid(SelectionKeyImpl.java:73)
> at sun.nio.ch.SelectionKeyImpl.readyOps(SelectionKeyImpl.java:87)
> at java.nio.channels.SelectionKey.isReadable(SelectionKey.java:289)
> at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:592)
> at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:566)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> So, when dealing with the request in the reader, we should handle 
> CancelledKeyException.
> --
> Versions 1.x and 2.0 will log and return when dealing with the 
> InterruptedException in Reader#doRunLoop after HBASE-10521. It will lead to 
> the same problem.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17798) RpcServer.Listener.Reader can abort due to CancelledKeyException

2017-03-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15934929#comment-15934929
 ] 

Hudson commented on HBASE-17798:


FAILURE: Integrated in Jenkins build HBase-1.4 #678 (See 
[https://builds.apache.org/job/HBase-1.4/678/])
HBASE-17798 RpcServer.Listener.Reader can abort due to (tedyu: rev 
9726c71681c0b8b22e83b056102803646b8d50c2)
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java


> RpcServer.Listener.Reader can abort due to CancelledKeyException
> 
>
> Key: HBASE-17798
> URL: https://issues.apache.org/jira/browse/HBASE-17798
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.3.0, 1.2.4, 0.98.24
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
> Attachments: 17798-master-v2.patch, connections.png, 
> HBASE-17798-0.98-v1.patch, HBASE-17798-0.98-v2.patch, 
> HBASE-17798-branch-1-v1.patch, HBASE-17798-branch-1-v2.patch, 
> HBASE-17798-master-v1.patch, HBASE-17798-master-v2.patch
>
>
> In our production cluster (0.98), some of the requests could not be accepted 
> because RpcServer.Listener.Reader threads had aborted.
> getReader() returns the next reader to deal with a request.
> The implementation of getReader() is as below:
> {code:title=RpcServer.java|borderStyle=solid}
> // The method that will return the next reader to work with
> // Simplistic implementation of round robin for now
> Reader getReader() {
>   currentReader = (currentReader + 1) % readers.length;
>   return readers[currentReader];
> }
> {code}
> If one of the readers aborts, the requests that fall on that reader will 
> never be dealt with.
> Why does RpcServer.Listener.Reader abort? We added a debug log to find out.
> After a while, we got the following exception:
> {code}
> 2017-03-10 08:05:13,247 ERROR [RpcServer.reader=3,port=60020] ipc.RpcServer: 
> RpcServer.listener,port=60020: unexpectedly error in Reader(Throwable)
> java.nio.channels.CancelledKeyException
> at sun.nio.ch.SelectionKeyImpl.ensureValid(SelectionKeyImpl.java:73)
> at sun.nio.ch.SelectionKeyImpl.readyOps(SelectionKeyImpl.java:87)
> at java.nio.channels.SelectionKey.isReadable(SelectionKey.java:289)
> at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.doRunLoop(RpcServer.java:592)
> at 
> org.apache.hadoop.hbase.ipc.RpcServer$Listener$Reader.run(RpcServer.java:566)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}
> So, when dealing with the request in the reader, we should handle 
> CancelledKeyException.
> --
> Versions 1.x and 2.0 will log and return when dealing with the 
> InterruptedException in Reader#doRunLoop after HBASE-10521. It will lead to 
> the same problem.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17020) keylen in midkey() dont computed correctly

2017-03-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15934876#comment-15934876
 ] 

Hudson commented on HBASE-17020:


FAILURE: Integrated in Jenkins build HBase-1.3-JDK8 #141 (See 
[https://builds.apache.org/job/HBase-1.3-JDK8/141/])
HBASE-17070 backport HBASE-17020 (keylen in midkey() dont computed (liyu: rev 
a60792425a50de48d6af88ff2737b5e32413de8a)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileWriterV2.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockIndex.java


> keylen in midkey() dont computed correctly
> --
>
> Key: HBASE-17020
> URL: https://issues.apache.org/jira/browse/HBASE-17020
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 0.98.23, 1.2.4
>Reporter: Yu Sun
>Assignee: Yu Sun
> Fix For: 2.0.0, 1.4.0, 1.2.5, 0.98.24, 1.1.8
>
> Attachments: HBASE-17020-branch-0.98.patch, 
> HBASE-17020.branch-0.98.patch, HBASE-17020.branch-0.98.patch, 
> HBASE-17020.branch-1.1.patch, HBASE-17020-v1.patch, HBASE-17020-v2.patch, 
> HBASE-17020-v2.patch, HBASE-17020-v3-branch1.1.patch
>
>
> in CellBasedKeyBlockIndexReader.midkey():
> {code}
>   ByteBuff b = midLeafBlock.getBufferWithoutHeader();
>   int numDataBlocks = b.getIntAfterPosition(0);
>   int keyRelOffset = b.getIntAfterPosition(Bytes.SIZEOF_INT * 
> (midKeyEntry + 1));
>   int keyLen = b.getIntAfterPosition(Bytes.SIZEOF_INT * (midKeyEntry 
> + 2)) - keyRelOffset;
> {code}
> the value the local variable keyLen gets here is actually the total length 
> of: SECONDARY_INDEX_ENTRY_OVERHEAD + firstKey.length;
> the code is:
> {code}
> void add(byte[] firstKey, long blockOffset, int onDiskDataSize,
> long curTotalNumSubEntries) {
>   // Record the offset for the secondary index
>   secondaryIndexOffsetMarks.add(curTotalNonRootEntrySize);
>   curTotalNonRootEntrySize += SECONDARY_INDEX_ENTRY_OVERHEAD
>   + firstKey.length;
> {code}
> when the midkey is the last entry of a leaf-level index block, this may throw:
> {quote}
> 2016-10-01 12:27:55,186 ERROR [MemStoreFlusher.0] 
> regionserver.MemStoreFlusher: Cache flusher failed for entry [flush region 
> pora_6_item_feature,0061:,1473838922457.12617bc4ebbfd171018bf96ac9bdd2a7.]
> java.lang.ArrayIndexOutOfBoundsException
> at 
> org.apache.hadoop.hbase.util.ByteBufferUtils.copyFromBufferToArray(ByteBufferUtils.java:936)
> at 
> org.apache.hadoop.hbase.nio.SingleByteBuff.toBytes(SingleByteBuff.java:303)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$CellBasedKeyBlockIndexReader.midkey(HFileBlockIndex.java:419)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.midkey(HFileReaderImpl.java:1519)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile$Reader.midkey(StoreFile.java:1520)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile.getFileSplitPoint(StoreFile.java:706)
> at 
> org.apache.hadoop.hbase.regionserver.DefaultStoreFileManager.getSplitPoint(DefaultStoreFileManager.java:126)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.getSplitPoint(HStore.java:1983)
> at 
> org.apache.hadoop.hbase.regionserver.ConstantFamilySizeRegionSplitPolicy.getSplitPoint(ConstantFamilySizeRegionSplitPolicy.java:77)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.checkSplit(HRegion.java:7756)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:513)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:471)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$800(MemStoreFlusher.java:75)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:259)
> at java.lang.Thread.run(Thread.java:756)
> {quote}
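
For reference, a sketch of the kind of correction the description implies, 
reusing the variables from the first snippet: the secondary-index overhead 
has to be subtracted so keyLen covers only the key bytes.

{code:title=midkey() keyLen sketch|borderStyle=solid}
// Sketch: each secondary-index entry stores SECONDARY_INDEX_ENTRY_OVERHEAD
// bytes (block offset + on-disk size) before the key, so the overhead must
// be subtracted from the offset delta to recover the key length alone.
int keyLen = b.getIntAfterPosition(Bytes.SIZEOF_INT * (midKeyEntry + 2))
    - keyRelOffset - SECONDARY_INDEX_ENTRY_OVERHEAD;
{code}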



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16983) TestMultiTableSnapshotInputFormat failing with Unable to create region directory: /tmp/...

2017-03-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15934878#comment-15934878
 ] 

Hudson commented on HBASE-16983:


FAILURE: Integrated in Jenkins build HBase-1.3-JDK8 #141 (See 
[https://builds.apache.org/job/HBase-1.3-JDK8/141/])
HBASE-17813 backport HBASE-16983 to branch-1.3 (liyu: rev 
ab335bf9d3d82100a875c796eea8e9532b9d2d7b)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestMultiTableSnapshotInputFormat.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFileInfo.java


> TestMultiTableSnapshotInputFormat failing with  Unable to create region 
> directory: /tmp/...
> ---
>
> Key: HBASE-16983
> URL: https://issues.apache.org/jira/browse/HBASE-16983
> Project: HBase
>  Issue Type: Bug
>  Components: test
>Reporter: stack
>Assignee: stack
>Priority: Minor
> Fix For: 2.0.0, 1.4.0
>
> Attachments: 16983.txt, HBASE-16983-ADDENDUM.patch, 
> HBASE-16983-ADDENDUM.patch, HBASE-16983-ADDENDUM.patch, 
> HBASE-16983-branch-1-ADDENDUM.patch, HBASE-16983-branch-1-ADDENDUM.patch
>
>
> The test is using /tmp. We failed to create a dir in /tmp in a few tests 
> from this suite just now:
> https://builds.apache.org/job/PreCommit-HBASE-Build/4253/testReport/org.apache.hadoop.hbase.mapred/TestMultiTableSnapshotInputFormat/testScanOBBToOPP/
> {code}
> Caused by: java.io.IOException: Unable to create region directory: 
> /tmp/scantest2_snapshot__953e2b2d-22aa-4c6a-a46a-272619f5436e/data/default/scantest2/5629158a49e010e21ac0bd16453b2d8c
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.createRegionOnFileSystem(HRegionFileSystem.java:896)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:6520)
>   at 
> org.apache.hadoop.hbase.util.ModifyRegionUtils.createRegion(ModifyRegionUtils.java:205)
>   at 
> org.apache.hadoop.hbase.util.ModifyRegionUtils$1.call(ModifyRegionUtils.java:173)
>   at 
> org.apache.hadoop.hbase.util.ModifyRegionUtils$1.call(ModifyRegionUtils.java:170)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> ...
> {code}
> No more detail than this. Let me change it so it creates stuff in the test 
> dir that it for sure owns/can write to.
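
A minimal sketch of that direction (assuming HBaseTestingUtility's per-test 
data dir; the class and subdir names are illustrative):

{code:title=Test dir sketch|borderStyle=solid}
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseTestingUtility;

public class SnapshotTestDirExample {
  // Illustrative: place snapshot-restore output under the per-test data
  // dir, which the test owns and can write to, instead of shared /tmp.
  static Path restoreDir(HBaseTestingUtility util) {
    return util.getDataTestDir("snapshot_restore");
  }
}
{code}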



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17809) cleanup unused class

2017-03-21 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15934874#comment-15934874
 ] 

Anoop Sam John commented on HBASE-17809:


Good on you for the cleanup. +1

> cleanup unused class
> 
>
> Key: HBASE-17809
> URL: https://issues.apache.org/jira/browse/HBASE-17809
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-17809.v0.patch, HBASE-17809.v0.patch
>
>
> Inspired by HBASE-17805. We have left a lot of orphan classes due to a bunch 
> of commits. We shall remove them.
> ||class||last meeting||why||line count||
> |LruHashMap|HBASE-1822|get rid of|1100|
> |ScannerTimeoutException|HBASE-16266|get rid of|45|
> |SortedCopyOnWriteSet|HBASE-12748|get rid of|178|
> |TestSortedCopyOnWriteSet|HBASE-12748|get rid of|107|
> |DelegatingRetryingCallable|HBASE-9049|create but never used|65|
> |LockTimeoutException|HBASE-16786|get rid of|44|
> |OperationConflictException|HBASE-9899|get rid of|50|
> |InvalidQuotaSettingsException|HBASE-11598|create but never used|33|
> |ShareableMemory|HBASE-15735|get rid of|40|
> |BoundedArrayQueue|HBASE-14860|get rid of|82|
> |TestBoundedArrayQueue|HBASE-14860|get rid of|61|
> |ChecksumFactory|HBASE-11927|get rid of|100|
> |TokenDepthComparator|HBASE-4676|create but never used|65|
> |RegionMergeTransaction|HBASE-17470|get rid of|249|
> |MetaUtils|HBASE-1822|get rid of|156|



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17813) backport HBASE-16983 to branch-1.3

2017-03-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15934877#comment-15934877
 ] 

Hudson commented on HBASE-17813:


FAILURE: Integrated in Jenkins build HBase-1.3-JDK8 #141 (See 
[https://builds.apache.org/job/HBase-1.3-JDK8/141/])
HBASE-17813 backport HBASE-16983 to branch-1.3 (liyu: rev 
ab335bf9d3d82100a875c796eea8e9532b9d2d7b)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFileInfo.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestMultiTableSnapshotInputFormat.java


> backport HBASE-16983 to branch-1.3
> --
>
> Key: HBASE-17813
> URL: https://issues.apache.org/jira/browse/HBASE-17813
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Yu Li
>Assignee: Yu Li
>  Labels: unit-test
> Fix For: 1.3.1
>
> Attachments: HBASE-17813.branch-1.3.patch, 
> HBASE-17813.branch-1.3.patch
>
>
> From [recent UT 
> report|https://builds.apache.org/job/PreCommit-HBASE-Build/6170/testReport/] 
> of branch-1.3, we could see the same issue "Unable to create region 
> directory..." as described by HBASE-16983, so we should backport the JIRA to 
> fix this intermittent failure and avoid it blocking new commits.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17070) backport HBASE-17020 to 1.3.1

2017-03-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15934875#comment-15934875
 ] 

Hudson commented on HBASE-17070:


FAILURE: Integrated in Jenkins build HBase-1.3-JDK8 #141 (See 
[https://builds.apache.org/job/HBase-1.3-JDK8/141/])
HBASE-17070 backport HBASE-17020 (keylen in midkey() dont computed (liyu: rev 
a60792425a50de48d6af88ff2737b5e32413de8a)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileWriterV2.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockIndex.java


> backport HBASE-17020 to 1.3.1
> -
>
> Key: HBASE-17070
> URL: https://issues.apache.org/jira/browse/HBASE-17070
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 1.3.1
>
> Attachments: HBASE-17070.branch-1.3.patch, 
> HBASE-17070.branch-1.3.patch, HBASE-17070.branch-1.3.patch
>
>
> As titled, backport HBASE-17020 after 1.3.0 got released.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17020) keylen in midkey() dont computed correctly

2017-03-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15934859#comment-15934859
 ] 

Hudson commented on HBASE-17020:


SUCCESS: Integrated in Jenkins build HBase-1.3-JDK7 #131 (See 
[https://builds.apache.org/job/HBase-1.3-JDK7/131/])
HBASE-17070 backport HBASE-17020 (keylen in midkey() dont computed (liyu: rev 
a60792425a50de48d6af88ff2737b5e32413de8a)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileWriterV2.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockIndex.java


> keylen in midkey() dont computed correctly
> --
>
> Key: HBASE-17020
> URL: https://issues.apache.org/jira/browse/HBASE-17020
> Project: HBase
>  Issue Type: Bug
>  Components: HFile
>Affects Versions: 2.0.0, 1.3.0, 1.4.0, 1.1.7, 0.98.23, 1.2.4
>Reporter: Yu Sun
>Assignee: Yu Sun
> Fix For: 2.0.0, 1.4.0, 1.2.5, 0.98.24, 1.1.8
>
> Attachments: HBASE-17020-branch-0.98.patch, 
> HBASE-17020.branch-0.98.patch, HBASE-17020.branch-0.98.patch, 
> HBASE-17020.branch-1.1.patch, HBASE-17020-v1.patch, HBASE-17020-v2.patch, 
> HBASE-17020-v2.patch, HBASE-17020-v3-branch1.1.patch
>
>
> in CellBasedKeyBlockIndexReader.midkey():
> {code}
>   ByteBuff b = midLeafBlock.getBufferWithoutHeader();
>   int numDataBlocks = b.getIntAfterPosition(0);
>   int keyRelOffset = b.getIntAfterPosition(Bytes.SIZEOF_INT * 
> (midKeyEntry + 1));
>   int keyLen = b.getIntAfterPosition(Bytes.SIZEOF_INT * (midKeyEntry 
> + 2)) - keyRelOffset;
> {code}
> the local variable keyLen obtained here actually ends up as the total 
> length SECONDARY_INDEX_ENTRY_OVERHEAD + firstKey.length rather than the 
> key length alone; the code that accumulates the offsets is:
> {code}
> void add(byte[] firstKey, long blockOffset, int onDiskDataSize,
> long curTotalNumSubEntries) {
>   // Record the offset for the secondary index
>   secondaryIndexOffsetMarks.add(curTotalNonRootEntrySize);
>   curTotalNonRootEntrySize += SECONDARY_INDEX_ENTRY_OVERHEAD
>   + firstKey.length;
> {code}
> when the midkey is the last entry of a leaf-level index block, this may throw:
> {quote}
> 2016-10-01 12:27:55,186 ERROR [MemStoreFlusher.0] 
> regionserver.MemStoreFlusher: Cache flusher failed for entry [flush region 
> pora_6_item_feature,0061:,1473838922457.12617bc4ebbfd171018bf96ac9bdd2a7.]
> java.lang.ArrayIndexOutOfBoundsException
> at 
> org.apache.hadoop.hbase.util.ByteBufferUtils.copyFromBufferToArray(ByteBufferUtils.java:936)
> at 
> org.apache.hadoop.hbase.nio.SingleByteBuff.toBytes(SingleByteBuff.java:303)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$CellBasedKeyBlockIndexReader.midkey(HFileBlockIndex.java:419)
> at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.midkey(HFileReaderImpl.java:1519)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile$Reader.midkey(StoreFile.java:1520)
> at 
> org.apache.hadoop.hbase.regionserver.StoreFile.getFileSplitPoint(StoreFile.java:706)
> at 
> org.apache.hadoop.hbase.regionserver.DefaultStoreFileManager.getSplitPoint(DefaultStoreFileManager.java:126)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.getSplitPoint(HStore.java:1983)
> at 
> org.apache.hadoop.hbase.regionserver.ConstantFamilySizeRegionSplitPolicy.getSplitPoint(ConstantFamilySizeRegionSplitPolicy.java:77)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.checkSplit(HRegion.java:7756)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:513)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:471)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher.access$800(MemStoreFlusher.java:75)
> at 
> org.apache.hadoop.hbase.regionserver.MemStoreFlusher$FlushHandler.run(MemStoreFlusher.java:259)
> at java.lang.Thread.run(Thread.java:756)
> {quote}
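As a hedged sketch of the corrected computation, not the committed patch: since add() records each entry as SECONDARY_INDEX_ENTRY_OVERHEAD + firstKey.length, the delta between two consecutive relative offsets includes that overhead, and it has to be subtracted back out. Plain java.nio.ByteBuffer stands in for HBase's ByteBuff here, and the overhead value (one long plus one int) is an assumption for illustration.

{code}
import java.nio.ByteBuffer;

public class MidKeyLenSketch {
  // Assumption: per-entry overhead is one long (block offset) plus one int
  // (on-disk size), mirroring what add() accumulates per firstKey.
  static final int SECONDARY_INDEX_ENTRY_OVERHEAD = Long.BYTES + Integer.BYTES;

  static int midKeyLen(ByteBuffer b, int midKeyEntry) {
    int keyRelOffset = b.getInt(Integer.BYTES * (midKeyEntry + 1));
    int nextRelOffset = b.getInt(Integer.BYTES * (midKeyEntry + 2));
    // Buggy form: nextRelOffset - keyRelOffset. It overshoots by the
    // per-entry overhead, reading past the buffer when the midkey is the
    // last entry of the leaf-level block.
    return nextRelOffset - keyRelOffset - SECONDARY_INDEX_ENTRY_OVERHEAD;
  }
}
{code}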



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17070) backport HBASE-17020 to 1.3.1

2017-03-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15934857#comment-15934857
 ] 

Hudson commented on HBASE-17070:


SUCCESS: Integrated in Jenkins build HBase-1.3-JDK7 #131 (See 
[https://builds.apache.org/job/HBase-1.3-JDK7/131/])
HBASE-17070 backport HBASE-17020 (keylen in midkey() dont computed (liyu: rev 
a60792425a50de48d6af88ff2737b5e32413de8a)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockIndex.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileWriterV2.java


> backport HBASE-17020 to 1.3.1
> -
>
> Key: HBASE-17070
> URL: https://issues.apache.org/jira/browse/HBASE-17070
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 1.3.1
>
> Attachments: HBASE-17070.branch-1.3.patch, 
> HBASE-17070.branch-1.3.patch, HBASE-17070.branch-1.3.patch
>
>
> As titled, backport HBASE-17020 after 1.3.0 got released.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17655) Removing MemStoreScanner and SnapshotScanner

2017-03-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15934772#comment-15934772
 ] 

Hudson commented on HBASE-17655:


SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #2713 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/2713/])
HBASE-17655 Removing MemStoreScanner and SnapshotScanner (eshcar: rev 
8f4ae0a0dcb658c4fe669bc4cdc68ad8e6219daf)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/NoOpScanPolicyObserver.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/Segment.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RegionCoprocessorHost.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StripeStoreFlusher.java
* (edit) 
hbase-examples/src/main/java/org/apache/hadoop/hbase/coprocessor/example/ZooKeeperScanPolicyObserver.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreMergerSegmentsIterator.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactingToCellArrayMapMemStore.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/AbstractMemStore.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompactingMemStore.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/DefaultStoreFlusher.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactingMemStore.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/mob/DefaultMobStoreFlusher.java
* (delete) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SnapshotScanner.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/SimpleRegionObserver.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFlusher.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ImmutableSegment.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreSnapshot.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestRegionObserverScannerOpenHook.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestCoprocessorScanPolicy.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/CompositeImmutableSegment.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreSegmentsIterator.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestDefaultMemStore.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestMemStoreChunkPool.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/SegmentScanner.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/DefaultMemStore.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreCompactor.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreCompactorSegmentsIterator.java
* (delete) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MemStoreScanner.java


> Removing MemStoreScanner and SnapshotScanner
> 
>
> Key: HBASE-17655
> URL: https://issues.apache.org/jira/browse/HBASE-17655
> Project: HBase
>  Issue Type: Improvement
>  Components: Scanners
>Affects Versions: 2.0.0
>Reporter: Eshcar Hillel
>Assignee: Eshcar Hillel
> Attachments: HBASE-17655-V01.patch, HBASE-17655-V02.patch, 
> HBASE-17655-V03.patch, HBASE-17655-V04.patch, HBASE-17655-V05.patch, 
> HBASE-17655-V05.patch, HBASE-17655-V06.patch, HBASE-17655-V07.patch, 
> HBASE-17655-V08.patch
>
>
> With CompactingMemstore becoming the new default, a store comprises multiple 
> memory segments and not just 1-2. MemStoreScanner encapsulates the scanning 
> of segments in the memory part of the store. SnapshotScanner is used to scan 
> the snapshot segment upon flush to disk.
> Having the logic of scanners scattered across multiple classes 
> (StoreScanner, SegmentScanner, MemStoreScanner, SnapshotScanner) makes 
> maintenance and debugging challenging tasks, not always for a good reason.
> For example, MemStoreScanner has a KeyValueHeap (KVH). When creating the 
> store scanner, which also has a KVH, this makes a KVH inside a KVH. 
> Reasoning about the correctness of the methods supported by the scanner 
> (seek, next, hasNext, peek, etc.) is hard, and debugging them is cumbersome. 
> In addition, by removing the MemStoreScanner layer we allow store scanner to 
> 

[jira] [Commented] (HBASE-17060) backport HBASE-16570 to 1.3.1

2017-03-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15934695#comment-15934695
 ] 

Hudson commented on HBASE-17060:


SUCCESS: Integrated in Jenkins build HBase-1.3-JDK8 #140 (See 
[https://builds.apache.org/job/HBase-1.3-JDK8/140/])
HBASE-17060 backport HBASE-16570 (Compute region locality in parallel at (liyu: 
rev 693b51d81af0c446b305af69fe130faee07581a6)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/RegionLocationFinder.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/BaseLoadBalancer.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestRegionLocationFinder.java


> backport HBASE-16570 to 1.3.1
> -
>
> Key: HBASE-17060
> URL: https://issues.apache.org/jira/browse/HBASE-17060
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 1.3.0
>Reporter: Yu Li
>Assignee: binlijin
> Fix For: 1.3.1
>
> Attachments: HBASE-17060.branch-1.3.v1.patch, 
> HBASE-17060.branch-1.3.v1.patch, HBASE-17060.branch-1.3.v1.patch
>
>
> Need some backport after 1.3.0 got released



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16570) Compute region locality in parallel at startup

2017-03-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15934696#comment-15934696
 ] 

Hudson commented on HBASE-16570:


SUCCESS: Integrated in Jenkins build HBase-1.3-JDK8 #140 (See 
[https://builds.apache.org/job/HBase-1.3-JDK8/140/])
HBASE-17060 backport HBASE-16570 (Compute region locality in parallel at (liyu: 
rev 693b51d81af0c446b305af69fe130faee07581a6)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/BaseLoadBalancer.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/RegionLocationFinder.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestRegionLocationFinder.java


> Compute region locality in parallel at startup
> --
>
> Key: HBASE-16570
> URL: https://issues.apache.org/jira/browse/HBASE-16570
> Project: HBase
>  Issue Type: Sub-task
>Reporter: binlijin
>Assignee: binlijin
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-16570_addnum.patch, HBASE-16570_addnum_v2.patch, 
> HBASE-16570_addnum_v3.patch, HBASE-16570_addnum_v4.patch, 
> HBASE-16570_addnum_v5.patch, HBASE-16570_addnum_v6.patch, 
> HBASE-16570_addnum_v7.patch, HBASE-16570.branch-1.3-addendum.patch, 
> HBASE-16570-master_V1.patch, HBASE-16570-master_V2.patch, 
> HBASE-16570-master_V3.patch, HBASE-16570-master_V4.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17039) SimpleLoadBalancer schedules large amount of invalid region moves

2017-03-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15934170#comment-15934170
 ] 

Hudson commented on HBASE-17039:


SUCCESS: Integrated in Jenkins build HBase-1.3-IT #11 (See 
[https://builds.apache.org/job/HBase-1.3-IT/11/])
HBASE-17059 backport HBASE-17039 (SimpleLoadBalancer schedules large (liyu: rev 
446a21fedd1282c15939eb4c46d13c859beedd7a)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/SimpleLoadBalancer.java


> SimpleLoadBalancer schedules large amount of invalid region moves
> -
>
> Key: HBASE-17039
> URL: https://issues.apache.org/jira/browse/HBASE-17039
> Project: HBase
>  Issue Type: Bug
>  Components: Balancer
>Affects Versions: 2.0.0, 1.3.0, 1.1.7, 1.2.4
>Reporter: Charlie Qiangeng Xu
>Assignee: Charlie Qiangeng Xu
> Fix For: 2.0.0, 1.4.0, 1.2.5, 1.1.8
>
> Attachments: HBASE-17039.patch
>
>
> After increasing one of our clusters to 1600 nodes, we observed a large 
> number of invalid region moves (more than 30k) fired by the balance chore. 
> We simulated the problem and printed out the balance plan, only to find 
> that many servers holding two regions of a certain table (we use the 
> by-table strategy) sent both regions out to two other servers that had 
> zero regions. 
> In the SimpleLoadBalancer's balanceCluster function,
> the code block that determines the underLoadedServers might have a problem:
> {code}
>   if (load >= min && load > 0) {
> continue; // look for other servers which haven't reached min
>   }
>   int regionsToPut = min - load;
>   if (regionsToPut == 0)
>   {
> regionsToPut = 1;
>   }
> {code}
> If min is zero, a server with a load of zero (equal to min) would be marked 
> as underloaded, which causes the phenomenon mentioned above.
> Since we increased the cluster's size to 1600+, many tables that only have 
> 1000 regions now encounter this issue.
> After fixing it, the balance plan went back to normal.
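To make the failure mode concrete, here is a hedged, self-contained reduction of the quoted guard, illustrative only and not the committed patch: with min = 0, a zero-load server slips past the check and is credited capacity for one region, so the balancer schedules moves onto servers that should receive nothing.

{code}
public class UnderloadBugDemo {
  // Mirrors the quoted guard from balanceCluster(); names are illustrative.
  static int regionsToPutFor(int load, int min) {
    if (load >= min && load > 0) {
      return 0; // correctly skipped: already at or above min
    }
    int regionsToPut = min - load;
    if (regionsToPut == 0) {
      regionsToPut = 1; // with min == 0 and load == 0 this wrongly becomes 1
    }
    return regionsToPut;
  }

  public static void main(String[] args) {
    // Prints 1: a server already at the minimum (zero) is treated as underloaded.
    System.out.println(regionsToPutFor(0, 0));
    // Skipping whenever load >= min (dropping the load > 0 clause) yields 0 here.
  }
}
{code}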



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17059) backport HBASE-17039 to 1.3.1

2017-03-21 Thread Yu Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated HBASE-17059:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Pushed into branch-1.3, resolving issue.

> backport HBASE-17039 to 1.3.1
> -
>
> Key: HBASE-17059
> URL: https://issues.apache.org/jira/browse/HBASE-17059
> Project: HBase
>  Issue Type: Bug
>  Components: Balancer
>Affects Versions: 1.3.0
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 1.3.1
>
> Attachments: HBASE-17059.branch-1.3.patch, 
> HBASE-17059.branch-1.3.patch
>
>
> Currently branch-1.3 codes are freezing for 1.3.0 release, need to backport 
> HBASE-17039 to 1.3.1 afterwards.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17669) Implement async mergeRegion/splitRegion methods.

2017-03-21 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-17669:
-
Attachment: (was: HBASE-17669.v5.patch)

> Implement async mergeRegion/splitRegion methods.
> 
>
> Key: HBASE-17669
> URL: https://issues.apache.org/jira/browse/HBASE-17669
> Project: HBase
>  Issue Type: Sub-task
>  Components: Admin, asyncclient, Client
>Affects Versions: 2.0.0
>Reporter: Zheng Hu
>Assignee: Zheng Hu
> Fix For: 2.0.0
>
> Attachments: HBASE-17669.v1.patch, HBASE-17669.v2.patch, 
> HBASE-17669.v3.patch, HBASE-17669.v3.patch, HBASE-17669.v4.patch, 
> HBASE-17669.v5.patch
>
>
> RT



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17669) Implement async mergeRegion/splitRegion methods.

2017-03-21 Thread Zheng Hu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-17669:
-
Attachment: HBASE-17669.v5.patch

Trigger Hadoop QA again.

> Implement async mergeRegion/splitRegion methods.
> 
>
> Key: HBASE-17669
> URL: https://issues.apache.org/jira/browse/HBASE-17669
> Project: HBase
>  Issue Type: Sub-task
>  Components: Admin, asyncclient, Client
>Affects Versions: 2.0.0
>Reporter: Zheng Hu
>Assignee: Zheng Hu
> Fix For: 2.0.0
>
> Attachments: HBASE-17669.v1.patch, HBASE-17669.v2.patch, 
> HBASE-17669.v3.patch, HBASE-17669.v3.patch, HBASE-17669.v4.patch, 
> HBASE-17669.v5.patch, HBASE-17669.v5.patch
>
>
> RT



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17813) backport HBASE-16983 to branch-1.3

2017-03-21 Thread Yu Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated HBASE-17813:
--
Attachment: HBASE-17813.branch-1.3.patch

> backport HBASE-16983 to branch-1.3
> --
>
> Key: HBASE-17813
> URL: https://issues.apache.org/jira/browse/HBASE-17813
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Yu Li
>Assignee: Yu Li
>  Labels: unit-test
> Fix For: 1.3.1
>
> Attachments: HBASE-17813.branch-1.3.patch, 
> HBASE-17813.branch-1.3.patch
>
>
> From [recent UT 
> report|https://builds.apache.org/job/PreCommit-HBASE-Build/6170/testReport/] 
> of branch-1.3, we could see the same issue "Unable to create region 
> directory..." as described by HBASE-16983, so we should backport the JIRA to 
> fix this intermittent failure and avoid it blocking new commits.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17070) backport HBASE-17020 to 1.3.1

2017-03-21 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15934148#comment-15934148
 ] 

Yu Li commented on HBASE-17070:
---

UT looks good, will commit soon if no objections.

> backport HBASE-17020 to 1.3.1
> -
>
> Key: HBASE-17070
> URL: https://issues.apache.org/jira/browse/HBASE-17070
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.0
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 1.3.1
>
> Attachments: HBASE-17070.branch-1.3.patch, 
> HBASE-17070.branch-1.3.patch, HBASE-17070.branch-1.3.patch
>
>
> As titled, backport HBASE-17020 after 1.3.0 got released.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17739) BucketCache is inefficient/wasteful/dumb in its bucket allocations

2017-03-21 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15934137#comment-15934137
 ] 

ramkrishna.s.vasudevan commented on HBASE-17739:


bq. The implications are two-fold: decreased SSD life span due to huge write 
amplification and decreased write performance when the SSD is close to full
I can check on this more. I am currently doing some work related to SSD 
performance and will be back on it.

> BucketCache is inefficient/wasteful/dumb in its bucket allocations
> --
>
> Key: HBASE-17739
> URL: https://issues.apache.org/jira/browse/HBASE-17739
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: stack
>
> By default we allocate 14 buckets with sizes from 5K to 513K. If lots of 
> heap is given over to the bucketcache and, say, no allocations are made for 
> a particular bucket size, we have a bunch of the bucketcache that just goes 
> idle/unused.
> For example, say the heap is 100G. We'll divide it up among the sizes. If we 
> only ever do 5K records, then most of the cache will go unused while the 
> allocation for 5K objects will see churn.
> Here is an old note of [~anoop.hbase]'s' from a conversation on bucket cache 
> we had offlist that describes the issue:
> "By default we have those 14 buckets with size range of 5K to 513K.
>   All sizes will have one bucket (with size 513*4) each except the
> last size.. ie. 513K sized many buckets will be there.  If we keep on
> writing only same sized blocks, we may loose all in btw sized buckets.
> Say we write only 4K sized blocks. We will 1st fill the bucket in 5K
> size. There is only one such bucket. Once this is filled, we will try
> to grab a complete free bucket from other sizes..  But we can not take
> it from 9K... 385K sized ones as there is only ONE bucket for these
> sizes.  We will take only from 513 size.. There are many in that...
> So we will eventually take all the buckets from 513 except the last
> one.. Ya it has to keep at least one in evey size.. So we will
> loose these much size.. They are of no use."
> We should set the size type on the fly as the records come in.
> Or better, we should choose record size on the fly. Here is another comment 
> from [~anoop.hbase]:
> "The second is the biggest contributor.  Suppose instead of 4K
> sized blocks, the user has 2 K sized blocks..  When we write a block to 
> bucket slot, we will reserve size equal to the allocated size for that block.
> So when we write 2K sized blocks (may be actual size a bit more than
> 2K ) we will take 5K with each of the block.  So u can see that we are
> loosing ~3K with every block. Means we are loosing more than half."
> He goes on: "If I am 100% sure that all my tables have a 2K HFile block 
> size, I need to give this config a value of 3 * 1024 (if I give exactly 2K 
> there may again be a problem! That is another story; we need to see how we 
> can give a stronger guarantee for the block size restriction, HBASE-15248). 
> So here also we lose ~1K for every 2K, so something like a 30% loss!!! :-("
> So, we should figure out the record sizes ourselves on the fly.
> Anything less has us wasting loads of cache space, never mind the 
> inefficiencies we lose to how we serialize base types into the cache.
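A back-of-the-envelope sketch of that waste, under the assumption that a block is charged the whole slot of the smallest bucket size that fits it; the size table below only approximates the default 14-entry 5K..513K progression.

{code}
public class BucketWasteDemo {
  // Approximation of the default bucket slot sizes, in KB; illustrative only.
  static final int[] SLOT_SIZES_KB =
      {5, 9, 17, 33, 41, 49, 57, 65, 97, 129, 193, 257, 385, 513};

  // Fraction of the slot lost when a block of the given size is cached.
  static double wastedFraction(int blockKb) {
    for (int slot : SLOT_SIZES_KB) {
      if (blockKb <= slot) {
        return (slot - blockKb) / (double) slot;
      }
    }
    throw new IllegalArgumentException("block larger than largest slot");
  }

  public static void main(String[] args) {
    // 2K blocks land in 5K slots: 3K of every 5K is lost, i.e. 60%.
    System.out.printf("2K block: %.0f%% wasted%n", 100 * wastedFraction(2));
    // 4K blocks also land in 5K slots: 1K of every 5K is lost, i.e. 20%.
    System.out.printf("4K block: %.0f%% wasted%n", 100 * wastedFraction(4));
  }
}
{code}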



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)