[jira] [Commented] (HBASE-17817) Make Regionservers log which tables it removed coprocessors from when aborting

2017-03-27 Thread Steen Manniche (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15942756#comment-15942756
 ] 

Steen Manniche commented on HBASE-17817:


Got access to the log files again so I can give a bit more detail on the issue.

The HBase exception log message originates from here: 
https://github.com/apache/hbase/blob/ee1549cc9778af7124e3c7c6b187a0b124385a90/hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/CoprocessorHost.java#L604

In this method, we have access to the 
[{{CoprocessorEnvironment}}|https://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/CoprocessorEnvironment.html],
 but I'm not sure whether we can reliably extract the table name from it.

> Make Regionservers log which tables it removed coprocessors from when aborting
> --
>
> Key: HBASE-17817
> URL: https://issues.apache.org/jira/browse/HBASE-17817
> Project: HBase
>  Issue Type: Improvement
>  Components: Coprocessors, regionserver
>Affects Versions: 1.1.2
>Reporter: Steen Manniche
>  Labels: logging
>
> When a coprocessor throws a runtime exception (e.g. NPE), the regionserver 
> handles this according to {{hbase.coprocessor.abortonerror}}.
> If the coprocessor was loaded on a specific table, the output in the logs 
> gives no indication as to which table the coprocessor was removed from (or 
> which version or jarfile is the culprit). This causes longer debugging and 
> recovery times.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17633) Update unflushed sequence id in SequenceIdAccounting after flush with the minimum sequence id in memstore

2017-03-27 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17633:
--
Attachment: HBASE-17633.patch

An initial patch.

Let's see the precommit result.

> Update unflushed sequence id in SequenceIdAccounting after flush with the 
> minimum sequence id in memstore
> -
>
> Key: HBASE-17633
> URL: https://issues.apache.org/jira/browse/HBASE-17633
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17633.patch
>
>
> Now the tracking work is done by SequenceIdAccounting, and it is a little 
> tricky when dealing with flush. We should remove the mapping for the given 
> stores of a region from lowestUnflushedSequenceIds so that we have space to 
> store the new lowest unflushed sequence id after flush. But we still need to 
> keep the old sequence ids in another map, as we still need to use these values 
> when reporting to the master to prevent data loss (think of the scenario where 
> we report the new lowest unflushed sequence id to the master and we crash 
> before actually flushing the data to disk).
> And when reviewing HBASE-17407, I found that for CompactingMemStore we have 
> to record the minimum sequence id in the memstore. We could just update the 
> mappings in SequenceIdAccounting using these values after flush. This means we 
> do not need to update the lowest unflushed sequence id in SequenceIdAccounting, 
> do not need to make space for the new lowest unflushed sequence id in 
> startCacheFlush, and do not need the extra map to store the old mappings.
> This could simplify our logic a lot. But this is a fundamental change, so I 
> need some time to implement it, especially for modifying tests... And I also 
> need some time to check whether I have missed something.
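
A conceptual sketch of the accounting change described above (illustrative names only, not the real SequenceIdAccounting API): the per-store mapping is simply overwritten after the flush completes, using the minimum sequence id still in the memstore, instead of being removed at startCacheFlush and shadowed in a second map.

{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

final class SimplifiedSequenceIdAccounting {
  // store key (e.g. encoded region name + family) -> lowest unflushed sequence id
  private final ConcurrentMap<String, Long> lowestUnflushedSequenceIds =
      new ConcurrentHashMap<String, Long>();

  // Appends arrive with increasing sequence ids, so the first recorded id for a
  // store is also the lowest unflushed one.
  void onAppend(String store, long sequenceId) {
    lowestUnflushedSequenceIds.putIfAbsent(store, sequenceId);
  }

  // Called only after the flush has actually reached disk. minSeqIdInMemstore is
  // the smallest sequence id of the edits still in the memstore, or
  // Long.MAX_VALUE if the memstore is now empty.
  void onFlushCompleted(String store, long minSeqIdInMemstore) {
    if (minSeqIdInMemstore == Long.MAX_VALUE) {
      lowestUnflushedSequenceIds.remove(store);
    } else {
      lowestUnflushedSequenceIds.put(store, minSeqIdInMemstore);
    }
  }

  // Value reported to the master; until onFlushCompleted runs it still reflects
  // the pre-flush state, which is what prevents data loss if we crash mid-flush.
  Long lowestUnflushed(String store) {
    return lowestUnflushedSequenceIds.get(store);
  }
}
{code}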



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17633) Update unflushed sequence id in SequenceIdAccounting after flush with the minimum sequence id in memstore

2017-03-27 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17633:
--
Assignee: Duo Zhang
  Status: Patch Available  (was: Open)

> Update unflushed sequence id in SequenceIdAccounting after flush with the 
> minimum sequence id in memstore
> -
>
> Key: HBASE-17633
> URL: https://issues.apache.org/jira/browse/HBASE-17633
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17633.patch
>
>
> Now the tracking work is done by SequenceIdAccounting, and it is a little 
> tricky when dealing with flush. We should remove the mapping for the given 
> stores of a region from lowestUnflushedSequenceIds so that we have space to 
> store the new lowest unflushed sequence id after flush. But we still need to 
> keep the old sequence ids in another map, as we still need to use these values 
> when reporting to the master to prevent data loss (think of the scenario where 
> we report the new lowest unflushed sequence id to the master and we crash 
> before actually flushing the data to disk).
> And when reviewing HBASE-17407, I found that for CompactingMemStore we have 
> to record the minimum sequence id in the memstore. We could just update the 
> mappings in SequenceIdAccounting using these values after flush. This means we 
> do not need to update the lowest unflushed sequence id in SequenceIdAccounting, 
> do not need to make space for the new lowest unflushed sequence id in 
> startCacheFlush, and do not need the extra map to store the old mappings.
> This could simplify our logic a lot. But this is a fundamental change, so I 
> need some time to implement it, especially for modifying tests... And I also 
> need some time to check whether I have missed something.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17765) Reviving the merge possibility in the CompactingMemStore

2017-03-27 Thread Anastasia Braginsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anastasia Braginsky updated HBASE-17765:

Attachment: HBASE-17765-V05.patch

> Reviving the merge possibility in the CompactingMemStore
> 
>
> Key: HBASE-17765
> URL: https://issues.apache.org/jira/browse/HBASE-17765
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Fix For: 2.0.0
>
> Attachments: HBASE-17765-V01.patch, HBASE-17765-V02.patch, 
> HBASE-17765-V03.patch, HBASE-17765-V04.patch, HBASE-17765-V05.patch
>
>
> According to the new performance results presented in HBASE-16417, we see 
> that the 90th-percentile read latency of the BASIC policy is too high due to 
> the need to traverse too many segments in the pipeline. In this JIRA we 
> correct the bug in the merge sizing calculations and make the pipeline size 
> threshold a configurable parameter.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17817) Make Regionservers log which tables it removed coprocessors from when aborting

2017-03-27 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15942789#comment-15942789
 ] 

Anoop Sam John commented on HBASE-17817:


Not always. When the exception comes from any CP other than a RegionObserver, 
it is not really tied to a region (and so not to a table). It might be the Master 
CP environment, or the WAL or RS one. When it is a RegionCoprocessorEnvironment, 
you can get the table name.
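
A minimal sketch of that check (assuming the 1.x coprocessor API; the class and method below are illustrative, not the actual CoprocessorHost code):

{code:java}
import org.apache.hadoop.hbase.CoprocessorEnvironment;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;

final class CoprocessorAbortLogging {
  // Only a RegionCoprocessorEnvironment is tied to a region, so only then can a
  // table name be added to the abort/remove log message; master, WAL and RS
  // environments have no table to report.
  static String describe(CoprocessorEnvironment env) {
    String base = "coprocessor " + env.getInstance().getClass().getName();
    if (env instanceof RegionCoprocessorEnvironment) {
      TableName table =
          ((RegionCoprocessorEnvironment) env).getRegion().getRegionInfo().getTable();
      return base + " on table " + table.getNameAsString();
    }
    return base;
  }
}
{code}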

> Make Regionservers log which tables it removed coprocessors from when aborting
> --
>
> Key: HBASE-17817
> URL: https://issues.apache.org/jira/browse/HBASE-17817
> Project: HBase
>  Issue Type: Improvement
>  Components: Coprocessors, regionserver
>Affects Versions: 1.1.2
>Reporter: Steen Manniche
>  Labels: logging
>
> When a coprocessor throws a runtime exception (e.g. NPE), the regionserver 
> handles this according to {{hbase.coprocessor.abortonerror}}.
> If the coprocessor was loaded on a specific table, the output in the logs 
> gives no indication as to which table the coprocessor was removed from (or 
> which version or jarfile is the culprit). This causes longer debugging and 
> recovery times.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17835) Spelling mistakes in the Java source

2017-03-27 Thread Qilin Cao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qilin Cao updated HBASE-17835:
--
Attachment: (was: HBASE-17835-001.patch)

> Spelling mistakes in the Java source
> 
>
> Key: HBASE-17835
> URL: https://issues.apache.org/jira/browse/HBASE-17835
> Project: HBase
>  Issue Type: Improvement
>Reporter: Qilin Cao
>Priority: Trivial
>
> I found spelling mistakes in the HBase Java source files, viz. recieved 
> instead of received and SpanReciever instead of SpanReceiver.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17835) Spelling mistakes in the Java source

2017-03-27 Thread Qilin Cao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qilin Cao updated HBASE-17835:
--
Attachment: HBASE-17835-001.patch

> Spelling mistakes in the Java source
> 
>
> Key: HBASE-17835
> URL: https://issues.apache.org/jira/browse/HBASE-17835
> Project: HBase
>  Issue Type: Improvement
>Reporter: Qilin Cao
>Priority: Trivial
> Attachments: HBASE-17835-001.patch
>
>
> I found spelling mistakes in the HBase Java source files, viz. recieved 
> instead of received and SpanReciever instead of SpanReceiver.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-17837) Backport HBASE-15314 to branch-1.3

2017-03-27 Thread ramkrishna.s.vasudevan (JIRA)
ramkrishna.s.vasudevan created HBASE-17837:
--

 Summary: Backport HBASE-15314 to branch-1.3
 Key: HBASE-17837
 URL: https://issues.apache.org/jira/browse/HBASE-17837
 Project: HBase
  Issue Type: Improvement
  Components: BucketCache
Affects Versions: 1.3.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 1.3.1


Backport of HBASE-15314.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17837) Backport HBASE-15314 to branch-1.3

2017-03-27 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-17837:
---
Status: Patch Available  (was: Open)

> Backport HBASE-15314 to branch-1.3
> --
>
> Key: HBASE-17837
> URL: https://issues.apache.org/jira/browse/HBASE-17837
> Project: HBase
>  Issue Type: Improvement
>  Components: BucketCache
>Affects Versions: 1.3.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 1.3.1
>
> Attachments: HBASE-15314-branch-1.3.patch
>
>
> Backport of HBASE-15314.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17633) Update unflushed sequence id in SequenceIdAccounting after flush with the minimum sequence id in memstore

2017-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15942841#comment-15942841
 ] 

Hadoop QA commented on HBASE-17633:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
8s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
42s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
23s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
20s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
1s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 29s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 2s 
{color} | {color:red} hbase-server generated 1 new + 0 unchanged - 0 fixed = 1 
total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 50s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 23m 2s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 68m 23s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hbase-server |
|  |  
org.apache.hadoop.hbase.regionserver.HRegion.internalPrepareFlushCache(WAL, 
long, Collection, MonitoredTask, boolean) does not release lock on all 
exception paths  At HRegion.java:does not release lock on all exception paths  
At HRegion.java:[line 2372] |
| Failed junit tests | hadoop.hbase.io.TestHeapSize |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860605/HBASE-17633.patch |
| JIRA Issue | HBASE-17633 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 22f18a404da0 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 4a076cd |
| Default

[jira] [Updated] (HBASE-17837) Backport HBASE-15314 to branch-1.3

2017-03-27 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-17837:
---
Attachment: HBASE-15314-branch-1.3.patch

Patch for branch-1.3.
Ping [~zjushch], [~zyork], [~anoopsamjohn].

> Backport HBASE-15314 to branch-1.3
> --
>
> Key: HBASE-17837
> URL: https://issues.apache.org/jira/browse/HBASE-17837
> Project: HBase
>  Issue Type: Improvement
>  Components: BucketCache
>Affects Versions: 1.3.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 1.3.1
>
> Attachments: HBASE-15314-branch-1.3.patch
>
>
> Backport of HBASE-15314.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17837) Backport HBASE-15314 to branch-1.3

2017-03-27 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-17837:
---
Fix Version/s: 1.4.0

> Backport HBASE-15314 to branch-1.3
> --
>
> Key: HBASE-17837
> URL: https://issues.apache.org/jira/browse/HBASE-17837
> Project: HBase
>  Issue Type: Improvement
>  Components: BucketCache
>Affects Versions: 1.3.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 1.4.0, 1.3.1
>
> Attachments: HBASE-15314-branch-1.3.patch
>
>
> Backport of HBASE-15314.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17837) Backport HBASE-15314 to branch-1.3

2017-03-27 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15942860#comment-15942860
 ] 

Anoop Sam John commented on HBASE-17837:


+1

> Backport HBASE-15314 to branch-1.3
> --
>
> Key: HBASE-17837
> URL: https://issues.apache.org/jira/browse/HBASE-17837
> Project: HBase
>  Issue Type: Improvement
>  Components: BucketCache
>Affects Versions: 1.3.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 1.4.0, 1.3.1
>
> Attachments: HBASE-15314-branch-1.3.patch
>
>
> Backport of HBASE-15314.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16438) Create a cell type so that chunk id is embedded in it

2017-03-27 Thread Anastasia Braginsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15942878#comment-15942878
 ] 

Anastasia Braginsky commented on HBASE-16438:
-

OK, I understand what you are saying. Then how about using Java's Soft 
References in the ChunkCreator?

https://docs.oracle.com/javase/7/docs/api/java/lang/ref/SoftReference.html

In short, soft references are used in Java to implement caches: an object 
pointed to by soft references can be GCed if no hard reference is pointing to 
it. 

This sounds to me like a good (and again simple) solution. If a cell is not 
reachable from any CSLM/CellArrayMap/CellChunkMap, then we are never going to 
read its chunkID and ask for translation, so we do not care that its reference 
is cleared from the ChunkCreator's map. We just need to be careful about the 
null references we may get back from the ChunkCreator's map. What do you think?

bq. I mean how to pass the info whether the CSLM has to be converted to 
CellArrayMap or CellChunkMap.

I plan to have it user-configured as part of the MemStore definition (at least 
as a first step). I mean that once created, one CompactingMemStore is planned to 
work with CellArrayMap and another CompactingMemStore is created to work with 
CellChunkMap (mostly for the off-heap case). But this can of course be changed.
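
Just to make the idea concrete, a minimal sketch (illustrative names, not the actual ChunkCreator code; Chunk here is a stand-in type) of an id-to-chunk map holding soft references, where a lookup has to tolerate a reference the GC already cleared:

{code:java}
import java.lang.ref.SoftReference;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

final class SoftChunkMap {
  static final class Chunk {
    final int id;
    Chunk(int id) { this.id = id; }
  }

  private final ConcurrentMap<Integer, SoftReference<Chunk>> chunkIdMap =
      new ConcurrentHashMap<Integer, SoftReference<Chunk>>();

  void register(Chunk chunk) {
    chunkIdMap.put(chunk.id, new SoftReference<Chunk>(chunk));
  }

  // May return null either because the id is unknown or because the chunk was
  // only softly reachable and the GC cleared it; callers must handle null.
  Chunk getChunk(int chunkId) {
    SoftReference<Chunk> ref = chunkIdMap.get(chunkId);
    return ref == null ? null : ref.get();
  }
}
{code}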



> Create a cell type so that chunk id is embedded in it
> -
>
> Key: HBASE-16438
> URL: https://issues.apache.org/jira/browse/HBASE-16438
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-16438_1.patch, 
> HBASE-16438_3_ChunkCreatorwrappingChunkPool.patch, 
> HBASE-16438_4_ChunkCreatorwrappingChunkPool.patch, HBASE-16438.patch, 
> MemstoreChunkCell_memstoreChunkCreator_oldversion.patch, 
> MemstoreChunkCell_trunk.patch
>
>
> For CellChunkMap we may need a cell type such that the id of the chunk out of 
> which it was created is embedded in it, so that when doing flattening we can 
> use the chunk id as metadata. More details will follow once the initial tasks 
> are completed. 
> Why we need to embed the chunkid in the Cell is described by [~anastas] in 
> this remark over in parent issue 
> https://issues.apache.org/jira/browse/HBASE-14921?focusedCommentId=15244119&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15244119



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17837) Backport HBASE-15314 to branch-1.3

2017-03-27 Thread Zach York (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15942894#comment-15942894
 ] 

Zach York commented on HBASE-17837:
---

Does this also apply to branch-1?

> Backport HBASE-15314 to branch-1.3
> --
>
> Key: HBASE-17837
> URL: https://issues.apache.org/jira/browse/HBASE-17837
> Project: HBase
>  Issue Type: Improvement
>  Components: BucketCache
>Affects Versions: 1.3.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 1.4.0, 1.3.1
>
> Attachments: HBASE-15314-branch-1.3.patch
>
>
> Backport of HBASE-15314.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16438) Create a cell type so that chunk id is embedded in it

2017-03-27 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15942903#comment-15942903
 ] 

Anoop Sam John commented on HBASE-16438:


I don't think a soft ref can be used. Once the CSLM is converted to a 
CellChunkMap, we don't have any hard refs to the chunks from anywhere (the Cells 
are gone). The ChunkMap then contains only the chunkId plus offset and length 
info. Now assume a GC cycle picks up and removes these chunks (there are no hard 
refs to them), and then a read comes and we want to read back that Cell; what do 
we do?

OK, if you also think CellChunkMap is better suited to off-heap usage, can we 
say that CellChunkMap is to be used along with the off-heap MSLAB pool only? In 
case of an off-heap MSLAB all chunks get pooled; there won't be any on-demand 
chunk creation. So we would keep this id-vs-chunk info in the map iff 
CellChunkMap is in place, and so the off-heap MSLAB pool is in place?
Anyway, the above-mentioned use case ideally should not use the MSLAB pool at all.

> Create a cell type so that chunk id is embedded in it
> -
>
> Key: HBASE-16438
> URL: https://issues.apache.org/jira/browse/HBASE-16438
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-16438_1.patch, 
> HBASE-16438_3_ChunkCreatorwrappingChunkPool.patch, 
> HBASE-16438_4_ChunkCreatorwrappingChunkPool.patch, HBASE-16438.patch, 
> MemstoreChunkCell_memstoreChunkCreator_oldversion.patch, 
> MemstoreChunkCell_trunk.patch
>
>
> For CellChunkMap we may need a cell type such that the id of the chunk out of 
> which it was created is embedded in it, so that when doing flattening we can 
> use the chunk id as metadata. More details will follow once the initial tasks 
> are completed. 
> Why we need to embed the chunkid in the Cell is described by [~anastas] in 
> this remark over in parent issue 
> https://issues.apache.org/jira/browse/HBASE-14921?focusedCommentId=15244119&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15244119



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16438) Create a cell type so that chunk id is embedded in it

2017-03-27 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15942907#comment-15942907
 ] 

ramkrishna.s.vasudevan commented on HBASE-16438:


Sorry, I missed these latest questions/comments.
bq. When CellChunkMap is not in use, may be we dont need to keep this id vs 
chunk map at all.. How abt we enable this feature iff MSLAB pool is in place? 
Just asking
I was in fact first thinking we could do this CellChunkMap only if the pool is 
enabled, so that there is no need for all this chunk creation and maintenance 
overhead. But later, as per our discussion, it evolved that chunk creation 
happens whenever MSLAB is on.

bq. In short, soft references are used in Java to implement caches: an object 
pointed to by soft references can be GCed if no hard reference is pointing to 
it.
So the Chunk that is put in the map will have a soft reference? 

> Create a cell type so that chunk id is embedded in it
> -
>
> Key: HBASE-16438
> URL: https://issues.apache.org/jira/browse/HBASE-16438
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-16438_1.patch, 
> HBASE-16438_3_ChunkCreatorwrappingChunkPool.patch, 
> HBASE-16438_4_ChunkCreatorwrappingChunkPool.patch, HBASE-16438.patch, 
> MemstoreChunkCell_memstoreChunkCreator_oldversion.patch, 
> MemstoreChunkCell_trunk.patch
>
>
> For CellChunkMap we may need a cell type such that the id of the chunk out of 
> which it was created is embedded in it, so that when doing flattening we can 
> use the chunk id as metadata. More details will follow once the initial tasks 
> are completed. 
> Why we need to embed the chunkid in the Cell is described by [~anastas] in 
> this remark over in parent issue 
> https://issues.apache.org/jira/browse/HBASE-14921?focusedCommentId=15244119&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15244119



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16438) Create a cell type so that chunk id is embedded in it

2017-03-27 Thread Anastasia Braginsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15942943#comment-15942943
 ] 

Anastasia Braginsky commented on HBASE-16438:
-

bq. I don't think a soft ref can be used. Once the CSLM is converted to a 
CellChunkMap, we don't have any hard refs to the chunks from anywhere (the Cells 
are gone). The ChunkMap then contains only the chunkId plus offset and length 
info. Now assume a GC cycle picks up and removes these chunks (there are no hard 
refs to them), and then a read comes and we want to read back that Cell; what do 
we do?

We can "harden" the references in the map in the process of transferring to 
CellChunkMap (flattening). The chunks that can not be reached will not be a 
part of this process and their IDs won't be found so their references are not 
going to be "harden", which is OK.

bq. So the Chunk that is put in the map will have a soft reference?

Yes. The chunk that is used for a CSLM or CellArrayMap is going to have a soft 
reference; the chunk that is used for a CellChunkMap is going to have a hard 
reference.

bq. Can we say that CellChunkMap is to be used along with the off-heap MSLAB 
pool only?

This is the first question to answer. I think we should not limit CellChunkMap 
to off-heap right now. As we have seen with CompactingMemStore, only when you 
implement it all and do proper performance testing can you see what the benefit 
is. Now we see that the merge is a good thing to do, although in the past we had 
already considered removing that code. So if we implement CellChunkMap now 
without any possibility to use it on-heap, we are limiting ourselves without any 
experimental evidence. I believe we should write the code to be general enough.

> Create a cell type so that chunk id is embedded in it
> -
>
> Key: HBASE-16438
> URL: https://issues.apache.org/jira/browse/HBASE-16438
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-16438_1.patch, 
> HBASE-16438_3_ChunkCreatorwrappingChunkPool.patch, 
> HBASE-16438_4_ChunkCreatorwrappingChunkPool.patch, HBASE-16438.patch, 
> MemstoreChunkCell_memstoreChunkCreator_oldversion.patch, 
> MemstoreChunkCell_trunk.patch
>
>
> For CellChunkMap we may need a cell type such that the id of the chunk out of 
> which it was created is embedded in it, so that when doing flattening we can 
> use the chunk id as metadata. More details will follow once the initial tasks 
> are completed. 
> Why we need to embed the chunkid in the Cell is described by [~anastas] in 
> this remark over in parent issue 
> https://issues.apache.org/jira/browse/HBASE-14921?focusedCommentId=15244119&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15244119



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16438) Create a cell type so that chunk id is embedded in it

2017-03-27 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15942980#comment-15942980
 ] 

ramkrishna.s.vasudevan commented on HBASE-16438:


bq.We can "harden" the references in the map in the process of transferring to 
CellChunkMap (flattening).
I agree with Anoop's point here. But I am not sure how you mean to do this 
hardening. Consider the case where the chunkID map holds soft refs to the chunks 
and we add the cells from these chunks to the CSLM; as in the above use case of 
duplicate cells, if the cells are removed from the CSLM and the refs to these 
chunks are dropped, the chunks can be GCed. So they are soft references here.
So as per your idea, we create a CellChunkMap from these items in the CSLM and 
at that time convert those chunk references to hard references. How can that be 
done? The chunkId map will have a <chunkId, SoftReference<Chunk>> signature, so 
while converting to CellChunkMap every soft ref for such a chunk would now have 
to be converted to a direct reference. Sorry, I get your idea but I am not sure 
about the implementation you are suggesting here.

> Create a cell type so that chunk id is embedded in it
> -
>
> Key: HBASE-16438
> URL: https://issues.apache.org/jira/browse/HBASE-16438
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-16438_1.patch, 
> HBASE-16438_3_ChunkCreatorwrappingChunkPool.patch, 
> HBASE-16438_4_ChunkCreatorwrappingChunkPool.patch, HBASE-16438.patch, 
> MemstoreChunkCell_memstoreChunkCreator_oldversion.patch, 
> MemstoreChunkCell_trunk.patch
>
>
> For CellChunkMap we may need a cell type such that the id of the chunk out of 
> which it was created is embedded in it, so that when doing flattening we can 
> use the chunk id as metadata. More details will follow once the initial tasks 
> are completed. 
> Why we need to embed the chunkid in the Cell is described by [~anastas] in 
> this remark over in parent issue 
> https://issues.apache.org/jira/browse/HBASE-14921?focusedCommentId=15244119&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15244119



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17765) Reviving the merge possibility in the CompactingMemStore

2017-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15942985#comment-15942985
 ] 

Hadoop QA commented on HBASE-17765:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
7s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
45s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
43s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 6s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 99m 13s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 138m 48s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860608/HBASE-17765-V05.patch 
|
| JIRA Issue | HBASE-17765 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 000e8f1a3899 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 4a076cd |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6226/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6226/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Reviving the merge possibility in the CompactingMemStore
> 
>
> Key: HBASE-17765
> URL: https://issues.apache.org/jira/browse/HBASE-17765
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Fix For: 2.0.0
>
> Attachments: HBASE-17765-V01.patch, HBASE-17765-V02.patch

[jira] [Updated] (HBASE-17633) Update unflushed sequence id in SequenceIdAccounting after flush with the minimum sequence id in memstore

2017-03-27 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17633:
--
Attachment: HBASE-17633-v1.patch

Fix TestHeapSize.

The findbugs warning is very strange. I haven't modified the related code. Let 
me dig more.

Thanks.

> Update unflushed sequence id in SequenceIdAccounting after flush with the 
> minimum sequence id in memstore
> -
>
> Key: HBASE-17633
> URL: https://issues.apache.org/jira/browse/HBASE-17633
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17633.patch, HBASE-17633-v1.patch
>
>
> Now the tracking work is done by SequenceIdAccounting, and it is a little 
> tricky when dealing with flush. We should remove the mapping for the given 
> stores of a region from lowestUnflushedSequenceIds so that we have space to 
> store the new lowest unflushed sequence id after flush. But we still need to 
> keep the old sequence ids in another map, as we still need to use these values 
> when reporting to the master to prevent data loss (think of the scenario where 
> we report the new lowest unflushed sequence id to the master and we crash 
> before actually flushing the data to disk).
> And when reviewing HBASE-17407, I found that for CompactingMemStore we have 
> to record the minimum sequence id in the memstore. We could just update the 
> mappings in SequenceIdAccounting using these values after flush. This means we 
> do not need to update the lowest unflushed sequence id in SequenceIdAccounting, 
> do not need to make space for the new lowest unflushed sequence id in 
> startCacheFlush, and do not need the extra map to store the old mappings.
> This could simplify our logic a lot. But this is a fundamental change, so I 
> need some time to implement it, especially for modifying tests... And I also 
> need some time to check whether I have missed something.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17837) Backport HBASE-15314 to branch-1.3

2017-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943026#comment-15943026
 ] 

Hadoop QA commented on HBASE-17837:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
11s {color} | {color:green} branch-1.3 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} branch-1.3 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} branch-1.3 passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
58s {color} | {color:green} branch-1.3 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} branch-1.3 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 5s 
{color} | {color:green} branch-1.3 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s 
{color} | {color:green} branch-1.3 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s 
{color} | {color:green} branch-1.3 passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
17m 42s {color} | {color:green} The patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 83m 7s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 116m 53s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:66fbe99 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860618/HBASE-15314-branch-1.3.patch
 |
| JIRA Issue | HBASE-17837 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 441c30806036 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/hbase.sh |
| git revision | branch-1.3 / ab335bf |
| Default Java | 1.7.0_80 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8

[jira] [Commented] (HBASE-16438) Create a cell type so that chunk id is embedded in it

2017-03-27 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943037#comment-15943037
 ] 

Anoop Sam John commented on HBASE-16438:


I can think of one way; not sure whether it is crazy or not.
We have the id-vs-chunk mapping globally in ChunkCreator. We will also have the 
chunk ids tracked in the MSLABImpl that deals with them. Now, when a segment is 
getting converted from CSLM-based to CellChunkMap-based, can we just track the 
chunkIds actually getting used (i.e. the chunks whose cells get moved into the 
ChunkMap)? All the remaining chunks seem to be of no use and can be immediately 
removed from the map. Still, while the segment is active those chunks cannot get 
GCed, which is not the case right now. But once the in-memory flush happens, 
they can get GCed. Just throwing out some thoughts.

Anyway, as said, there is no possibility of the OOME that was fixed by Yu Li. 
The only thing is that we are losing a better GC possibility.
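
A minimal sketch of that cleanup step (all names are illustrative): while flattening a segment, collect the chunk ids the new CellChunkMap actually references and evict every other id that this segment's MSLAB had registered in the global map.

{code:java}
import java.util.Map;
import java.util.Set;

final class FlattenCleanup {
  // chunkIdsOfThisMslab: ids this segment's MSLAB registered in the global map.
  // usedChunkIds: ids referenced by cells that made it into the CellChunkMap.
  static void releaseUnusedChunks(Set<Integer> chunkIdsOfThisMslab,
                                  Set<Integer> usedChunkIds,
                                  Map<Integer, Object> globalIdToChunk) {
    for (Integer id : chunkIdsOfThisMslab) {
      if (!usedChunkIds.contains(id)) {
        // No cell points into this chunk any more; drop the mapping so the GC
        // can reclaim it once the old CSLM-based segment is gone.
        globalIdToChunk.remove(id);
      }
    }
  }
}
{code}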

> Create a cell type so that chunk id is embedded in it
> -
>
> Key: HBASE-16438
> URL: https://issues.apache.org/jira/browse/HBASE-16438
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-16438_1.patch, 
> HBASE-16438_3_ChunkCreatorwrappingChunkPool.patch, 
> HBASE-16438_4_ChunkCreatorwrappingChunkPool.patch, HBASE-16438.patch, 
> MemstoreChunkCell_memstoreChunkCreator_oldversion.patch, 
> MemstoreChunkCell_trunk.patch
>
>
> For CellChunkMap we may need a cell type such that the id of the chunk out of 
> which it was created is embedded in it, so that when doing flattening we can 
> use the chunk id as metadata. More details will follow once the initial tasks 
> are completed. 
> Why we need to embed the chunkid in the Cell is described by [~anastas] in 
> this remark over in parent issue 
> https://issues.apache.org/jira/browse/HBASE-14921?focusedCommentId=15244119&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15244119



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16438) Create a cell type so that chunk id is embedded in it

2017-03-27 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943042#comment-15943042
 ] 

ramkrishna.s.vasudevan commented on HBASE-16438:


Please see the comment above on the idea of how to track the chunk ids that are 
not really getting used.
Do you feel that can work? I have implemented the same in a patch but have not 
posted it, as I need to see if there is really a perf penalty.

> Create a cell type so that chunk id is embedded in it
> -
>
> Key: HBASE-16438
> URL: https://issues.apache.org/jira/browse/HBASE-16438
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-16438_1.patch, 
> HBASE-16438_3_ChunkCreatorwrappingChunkPool.patch, 
> HBASE-16438_4_ChunkCreatorwrappingChunkPool.patch, HBASE-16438.patch, 
> MemstoreChunkCell_memstoreChunkCreator_oldversion.patch, 
> MemstoreChunkCell_trunk.patch
>
>
> For CellChunkMap we may need a cell type such that the id of the chunk out of 
> which it was created is embedded in it, so that when doing flattening we can 
> use the chunk id as metadata. More details will follow once the initial tasks 
> are completed. 
> Why we need to embed the chunkid in the Cell is described by [~anastas] in 
> this remark over in parent issue 
> https://issues.apache.org/jira/browse/HBASE-14921?focusedCommentId=15244119&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15244119



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17788) Procedure V2 performance improvements

2017-03-27 Thread Janos Gub (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Janos Gub updated HBASE-17788:
--
Issue Type: Improvement  (was: Sub-task)
Parent: (was: HBASE-12439)

> Procedure V2 performance improvements
> -
>
> Key: HBASE-17788
> URL: https://issues.apache.org/jira/browse/HBASE-17788
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance, proc-v2
>Reporter: stack
> Fix For: 2.0.0
>
>
> This is a list of items to work on to improve framework perf taken from 
> https://docs.google.com/document/d/1kEWzyA0iCyRjdogjju9JNMDT9ODHaKcyoAJTMXkDpxc/edit#
>  Make sub-issues to work on each in the list below:
>  * Replace fixed Executor Threads with dynamic thread pool (Easy)
>  * Connect the active thread count with the ProcStore slots (Medium)
>  * Parallelize the load (Complex)
>  * Allow Procedures to start early (Complex) 
> https://github.com/apache/hbase/blob/master/hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/ProcedureWALFormatReader.java#L155



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-17838) Replace fixed Executor Threads with dynamic thread pool

2017-03-27 Thread Janos Gub (JIRA)
Janos Gub created HBASE-17838:
-

 Summary: Replace fixed Executor Threads with dynamic thread pool 
 Key: HBASE-17838
 URL: https://issues.apache.org/jira/browse/HBASE-17838
 Project: HBase
  Issue Type: Sub-task
Reporter: Janos Gub
Assignee: Janos Gub






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17788) Procedure V2 performance improvements

2017-03-27 Thread Janos Gub (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943046#comment-15943046
 ] 

Janos Gub commented on HBASE-17788:
---

Converted this to a top-level issue to be able to create subtasks. Issue 
https://issues.apache.org/jira/browse/HBASE-12439 is the parent of this one.

> Procedure V2 performance improvements
> -
>
> Key: HBASE-17788
> URL: https://issues.apache.org/jira/browse/HBASE-17788
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance, proc-v2
>Reporter: stack
> Fix For: 2.0.0
>
>
> This is a list of items to work on to improve framework perf taken from 
> https://docs.google.com/document/d/1kEWzyA0iCyRjdogjju9JNMDT9ODHaKcyoAJTMXkDpxc/edit#
>  Make sub-issues to work on each in the list below:
>  * Replace fixed Executor Threads with dynamic thread pool (Easy)
>  * Connect the active thread count with the ProcStore slots (Medium)
>  * Parallelize the load (Complex)
>  * Allow Procedures to start early (Complex) 
> https://github.com/apache/hbase/blob/master/hbase-procedure/src/main/java/org/apache/hadoop/hbase/procedure2/store/wal/ProcedureWALFormatReader.java#L155



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16438) Create a cell type so that chunk id is embedded in it

2017-03-27 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943054#comment-15943054
 ] 

Anoop Sam John commented on HBASE-16438:


Yeah, maybe keeping a ref count with every cell will have a negative impact. We 
need to test and prove it anyway. We might not need a strict approach; as much 
as possible, and as early as possible, allow the chunks to be GCed.

> Create a cell type so that chunk id is embedded in it
> -
>
> Key: HBASE-16438
> URL: https://issues.apache.org/jira/browse/HBASE-16438
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-16438_1.patch, 
> HBASE-16438_3_ChunkCreatorwrappingChunkPool.patch, 
> HBASE-16438_4_ChunkCreatorwrappingChunkPool.patch, HBASE-16438.patch, 
> MemstoreChunkCell_memstoreChunkCreator_oldversion.patch, 
> MemstoreChunkCell_trunk.patch
>
>
> For CellChunkMap we may need a cell type such that the id of the chunk out of 
> which it was created is embedded in it, so that when doing flattening we can 
> use the chunk id as metadata. More details will follow once the initial tasks 
> are completed. 
> Why we need to embed the chunkid in the Cell is described by [~anastas] in 
> this remark over in parent issue 
> https://issues.apache.org/jira/browse/HBASE-14921?focusedCommentId=15244119&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15244119



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17817) Make Regionservers log which tables it removed coprocessors from when aborting

2017-03-27 Thread Steen Manniche (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943057#comment-15943057
 ] 

Steen Manniche commented on HBASE-17817:


Great, that was what I was hoping for: some mechanism to supply the most 
detailed information available at the time the exception is thrown. Thanks!

> Make Regionservers log which tables it removed coprocessors from when aborting
> --
>
> Key: HBASE-17817
> URL: https://issues.apache.org/jira/browse/HBASE-17817
> Project: HBase
>  Issue Type: Improvement
>  Components: Coprocessors, regionserver
>Affects Versions: 1.1.2
>Reporter: Steen Manniche
>  Labels: logging
>
> When a coprocessor throws a runtime exception (e.g. NPE), the regionserver 
> handles this according to {{hbase.coprocessor.abortonerror}}.
> If the coprocessor was loaded on a specific table, the output in the logs 
> gives no indication as to which table the coprocessor was removed from (or 
> which version or jarfile is the culprit). This causes longer debugging and 
> recovery times.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17838) Replace fixed Executor Threads with dynamic thread pool

2017-03-27 Thread Janos Gub (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Janos Gub updated HBASE-17838:
--
Attachment: initial.patch

I have been doing some work on this one, so I am uploading an initial patch to 
brainstorm about the direction.

I replaced the ArrayList of worker threads with a ThreadPoolExecutor, but 
several questions popped up:

- the default logic is already somewhat dynamic in the number of threads 
allocated; is there any advantage to replacing it other than having 
ThreadPoolExecutor as a common implementation?

- the default implementation uses custom, configurable logic to create new 
threads (definition of stuck threads, ratio of stuck threads, etc.). I think it 
would be quite hacky to put the same logic into a custom ThreadPoolExecutor.

- in the current design, executor worker threads poll the scheduler to gather 
tasks, which I tried to maintain in this patch, but it would feel more natural 
to have the scheduler submit tasks to the executor (see the sketch below).

In sum: is this a good direction at all?
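
For reference, a minimal sketch (not the attached patch) of the "scheduler submits to the executor" alternative mentioned in the last point, using a ThreadPoolExecutor that grows on demand and shrinks again when idle; all names are illustrative:

{code:java}
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

final class DynamicProcedureWorkerPool {
  private final ThreadPoolExecutor pool;

  DynamicProcedureWorkerPool(int maxWorkers) {
    pool = new ThreadPoolExecutor(
        maxWorkers, maxWorkers,          // threads are created on demand, up to maxWorkers
        60L, TimeUnit.SECONDS,           // ...and retire after 60s of idleness
        new LinkedBlockingQueue<Runnable>());
    pool.allowCoreThreadTimeOut(true);   // lets even "core" workers time out
  }

  // The scheduler pushes runnable procedure steps here instead of worker
  // threads polling the scheduler for work.
  void submit(Runnable procedureStep) {
    pool.execute(procedureStep);
  }

  void shutdown() {
    pool.shutdown();
  }
}
{code}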

> Replace fixed Executor Threads with dynamic thread pool 
> 
>
> Key: HBASE-17838
> URL: https://issues.apache.org/jira/browse/HBASE-17838
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance, proc-v2
>Reporter: Janos Gub
>Assignee: Janos Gub
> Fix For: 2.0.0
>
> Attachments: initial.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (HBASE-17700) Release 1.2.5

2017-03-27 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey resolved HBASE-17700.
-
Resolution: Fixed

Announcement email sent to user@hbase, announce@apache, and dev@hbase.

> Release 1.2.5
> -
>
> Key: HBASE-17700
> URL: https://issues.apache.org/jira/browse/HBASE-17700
> Project: HBase
>  Issue Type: Task
>  Components: community
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Fix For: 1.2.5
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17339) Scan-Memory-First Optimization for Get Operations

2017-03-27 Thread Eshcar Hillel (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943102#comment-15943102
 ] 

Eshcar Hillel commented on HBASE-17339:
---

Yes, sure I can run with your patch [~ben.manes] :)
I just wonder if you have any insight on whether or not TinyLFU *can* help 
before testing it.
So far the memstore was considered the write cache and the block cache the read 
cache. The optimization makes the memstore a first-tier read cache and the 
block cache a second-tier read cache (see the toy sketch below). So with a 
Zipfian distribution the head of the distribution is found in the memstore and 
the tail is searched for in the block cache. With the current LRU cache we see 
the same number of evictions from the cache with and without the optimization.
Do you think TinyLFU can do a better job of managing the blocks, with smarter 
admission and eviction, so that the hit rate is increased? Or, since this deals 
with the "torso" and not the head of the distribution, can it not do a better 
job?

> Scan-Memory-First Optimization for Get Operations
> -
>
> Key: HBASE-17339
> URL: https://issues.apache.org/jira/browse/HBASE-17339
> Project: HBase
>  Issue Type: Improvement
>Reporter: Eshcar Hillel
>Assignee: Eshcar Hillel
> Attachments: HBASE-17339-V01.patch, HBASE-17339-V02.patch, 
> HBASE-17339-V03.patch, HBASE-17339-V03.patch, HBASE-17339-V04.patch, 
> HBASE-17339-V05.patch, HBASE-17339-V06.patch, read-latency-mixed-workload.jpg
>
>
> The current implementation of a get operation (to retrieve values for a 
> specific key) scans through all relevant stores of the region; for each store 
> both memory components (memstores segments) and disk components (hfiles) are 
> scanned in parallel.
> We suggest to apply an optimization that speculatively scans memory-only 
> components first and only if the result is incomplete scans both memory and 
> disk.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16438) Create a cell type so that chunk id is embedded in it

2017-03-27 Thread Anastasia Braginsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943108#comment-15943108
 ] 

Anastasia Braginsky commented on HBASE-16438:
-

bq. But am not sure how you mean this hardening it again. 

public class SoftReference<T> extends Reference<T>
So I think you can use just Reference in the map's value type and later simply 
update the map with a new mapping (delete the old mapping from some chunkID to 
a soft reference and add a new mapping from the same chunkID to a hard 
reference). If this doesn't work for some reason we may have two maps, one from 
chunkID to a soft reference and one from chunkID to a hard reference.

bq. Now when a segment is getting converted from CSLM based into CellChunkMap 
based, can we just track the chunkIds actually getting used? (Which cells 
getting moved into ChunkMap). All the remaining chunks seems of no use and can 
be immediate removed from the map.

Nice idea. I think the result of this idea is the same as combining soft and 
hard pointers in the Creator's map. However, with soft and hard pointers 
combined in the Creator's map, the chunks can also be GCed while still in an 
active segment.

bq. Do you feel that can work? I have implemented the same in a patch but not 
posted it as I need to see if there is really a perf penalty.

I saw this idea and I feel it can work, but I think it is a bit complicated and 
may cost us some performance. I believe we can achieve the same for less. But 
if you want to go for it, I am OK with that.
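
To make the "convertible reference" idea concrete, a toy sketch follows (not 
the real ChunkCreator/MemStoreLAB code; Chunk, ChunkRef and the method names 
are invented for illustration): the creator tracks every chunk through a soft 
pointer and pins ("hardens") only the chunks that a flattened CellChunkMap 
actually uses, so everything else stays eligible for GC.

{code}
import java.lang.ref.SoftReference;
import java.util.concurrent.ConcurrentHashMap;

// Toy stand-in for the memstore chunk; only the reference handling matters here.
class Chunk {
  final int id;
  Chunk(int id) { this.id = id; }
}

// Holds either a soft or a hard pointer to the chunk; "hardening" swaps one for the other.
class ChunkRef {
  private volatile Chunk hard;               // null while the chunk is only softly reachable
  private final SoftReference<Chunk> soft;

  ChunkRef(Chunk c) { this.soft = new SoftReference<>(c); }

  Chunk get() { return hard != null ? hard : soft.get(); }

  void harden() { hard = soft.get(); }       // no-op if the GC already cleared the soft ref
}

public class ChunkCreatorSketch {
  private final ConcurrentHashMap<Integer, ChunkRef> chunkIdMap = new ConcurrentHashMap<>();

  Chunk createChunk(int id) {
    Chunk c = new Chunk(id);
    chunkIdMap.put(id, new ChunkRef(c));     // track softly until a segment is flattened
    return c;
  }

  // Called when a segment is converted from CSLM to CellChunkMap:
  // pin only the chunks that are actually referenced by the flattened segment.
  void hardenChunksInUse(Iterable<Integer> chunkIdsInUse) {
    for (int id : chunkIdsInUse) {
      ChunkRef ref = chunkIdMap.get(id);
      if (ref != null) {
        ref.harden();
      }
    }
  }

  public static void main(String[] args) {
    ChunkCreatorSketch creator = new ChunkCreatorSketch();
    creator.createChunk(1);
    creator.createChunk(2);
    creator.hardenChunksInUse(java.util.Arrays.asList(1)); // chunk 2 stays softly reachable
  }
}
{code}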

> Create a cell type so that chunk id is embedded in it
> -
>
> Key: HBASE-16438
> URL: https://issues.apache.org/jira/browse/HBASE-16438
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-16438_1.patch, 
> HBASE-16438_3_ChunkCreatorwrappingChunkPool.patch, 
> HBASE-16438_4_ChunkCreatorwrappingChunkPool.patch, HBASE-16438.patch, 
> MemstoreChunkCell_memstoreChunkCreator_oldversion.patch, 
> MemstoreChunkCell_trunk.patch
>
>
> For CellChunkMap we may need a cell such that the chunk out of which it was 
> created, the id of the chunk be embedded in it so that when doing flattening 
> we can use the chunk id as a meta data. More details will follow once the 
> initial tasks are completed. 
> Why we need to embed the chunkid in the Cell is described by [~anastas] in 
> this remark over in parent issue 
> https://issues.apache.org/jira/browse/HBASE-14921?focusedCommentId=15244119&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15244119



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16438) Create a cell type so that chunk id is embedded in it

2017-03-27 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943112#comment-15943112
 ] 

Anoop Sam John commented on HBASE-16438:


No, I got what you were saying: change the ref (harden it) on the go. Yes, 
agreed. This is roughly the way I was saying it.

> Create a cell type so that chunk id is embedded in it
> -
>
> Key: HBASE-16438
> URL: https://issues.apache.org/jira/browse/HBASE-16438
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-16438_1.patch, 
> HBASE-16438_3_ChunkCreatorwrappingChunkPool.patch, 
> HBASE-16438_4_ChunkCreatorwrappingChunkPool.patch, HBASE-16438.patch, 
> MemstoreChunkCell_memstoreChunkCreator_oldversion.patch, 
> MemstoreChunkCell_trunk.patch
>
>
> For CellChunkMap we may need a cell such that the chunk out of which it was 
> created, the id of the chunk be embedded in it so that when doing flattening 
> we can use the chunk id as a meta data. More details will follow once the 
> initial tasks are completed. 
> Why we need to embed the chunkid in the Cell is described by [~anastas] in 
> this remark over in parent issue 
> https://issues.apache.org/jira/browse/HBASE-14921?focusedCommentId=15244119&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15244119



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17633) Update unflushed sequence id in SequenceIdAccounting after flush with the minimum sequence id in memstore

2017-03-27 Thread Eshcar Hillel (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943123#comment-15943123
 ] 

Eshcar Hillel commented on HBASE-17633:
---

[~Apache9] can you please post a link to the RB, thanks.

> Update unflushed sequence id in SequenceIdAccounting after flush with the 
> minimum sequence id in memstore
> -
>
> Key: HBASE-17633
> URL: https://issues.apache.org/jira/browse/HBASE-17633
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17633.patch, HBASE-17633-v1.patch
>
>
> Now the tracking work is done by SequenceIdAccounting. And it is a little 
> tricky when dealing with flush. We should remove the mapping for the given 
> stores of a region from lowestUnflushedSequenceIds, so that we have space to 
> store the new lowest unflushed sequence id after flush. But we still need to 
> keep the old sequence ids in another map as we still need to use these values 
> when reporting to master to prevent data loss(think of the scenario that we 
> report the new lowest unflushed sequence id to master and we crashed before 
> actually flushed the data to disk).
> And when reviewing HBASE-17407, I found  that for CompactingMemStore, we have 
> to record the minimum sequence id.in memstore. We could just update the 
> mappings in SequenceIdAccounting using these values after flush. This means 
> we do not need to update the lowest unflushed sequence id in 
> SequenceIdAccounting, and also do not need to make space for the new lowest 
> unflushed when startCacheFlush, and also do not need the extra map to store 
> the old mappings.
> This could simplify our logic a lot. But this is a fundamental change so I 
> need sometime to implement, especially for modifying tests... And I also need 
> sometime to check if I miss something.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17339) Scan-Memory-First Optimization for Get Operations

2017-03-27 Thread Edward Bortnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943134#comment-15943134
 ] 

Edward Bortnikov commented on HBASE-17339:
--

Can't see how TinyLFU can do a better job with stationary distributions (in 
which item popularity does not change over time). I'd imagine it being good 
under bursty workloads. 

> Scan-Memory-First Optimization for Get Operations
> -
>
> Key: HBASE-17339
> URL: https://issues.apache.org/jira/browse/HBASE-17339
> Project: HBase
>  Issue Type: Improvement
>Reporter: Eshcar Hillel
>Assignee: Eshcar Hillel
> Attachments: HBASE-17339-V01.patch, HBASE-17339-V02.patch, 
> HBASE-17339-V03.patch, HBASE-17339-V03.patch, HBASE-17339-V04.patch, 
> HBASE-17339-V05.patch, HBASE-17339-V06.patch, read-latency-mixed-workload.jpg
>
>
> The current implementation of a get operation (to retrieve values for a 
> specific key) scans through all relevant stores of the region; for each store 
> both memory components (memstores segments) and disk components (hfiles) are 
> scanned in parallel.
> We suggest to apply an optimization that speculatively scans memory-only 
> components first and only if the result is incomplete scans both memory and 
> disk.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17831) Support small scan in thrift2

2017-03-27 Thread Guangxu Cheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943138#comment-15943138
 ] 

Guangxu Cheng commented on HBASE-17831:
---

{quote}
For master branch, have you seen this ?
{code}
@Deprecated
public Scan setSmall(boolean small) {
{code}
{quote}

Sorry, I hadn't noticed.
A small scan should use pread and do openScanner, next, and closeScanner in one 
RPC call.
After HBASE-17508 and HBASE-17045, the implementations of small scan and 
regular scan are unified.
A regular scan also has the one-RPC optimization and can use pread via 
setReadType(ReadType).
For the master branch, setReadType(ReadType) instead of setSmall(small) may be 
better, right?
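
Roughly, and assuming the master-branch (2.0) client API after 
HBASE-17045/HBASE-17508 rather than anything taken from the attached patch, the 
difference looks like this:

{code}
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class SmallScanStyles {
  public static void main(String[] args) {
    // Old style, deprecated on master: mark the scan as "small".
    Scan oldStyle = new Scan();
    oldStyle.setSmall(true);

    // Master-branch style: ask for pread explicitly and bound the result,
    // which is what the "small scan" flag used to imply.
    Scan newStyle = new Scan()
        .withStartRow(Bytes.toBytes("row-000"))
        .withStopRow(Bytes.toBytes("row-100"))
        .setReadType(Scan.ReadType.PREAD)
        .setLimit(100);
    System.out.println(newStyle);
  }
}
{code}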

> Support small scan in thrift2
> -
>
> Key: HBASE-17831
> URL: https://issues.apache.org/jira/browse/HBASE-17831
> Project: HBase
>  Issue Type: Improvement
>  Components: Thrift
>Reporter: Guangxu Cheng
> Attachments: HBASE-17831-branch-1.patch, HBASE-17831-master.patch
>
>
> Support small scan in thrift2



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16438) Create a cell type so that chunk id is embedded in it

2017-03-27 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943144#comment-15943144
 ] 

ramkrishna.s.vasudevan commented on HBASE-16438:


bq. we may have two maps, one from chunkID to a soft reference and one from 
chunkID to a hard reference.
Yes this type will work. 
bq.All the remaining chunks seems of no use and can be immediate removed from 
the map.
Yes, this will work provided we hit the limit to convert to CellChunkMap. In 
the case that was presented here we may not actually grow to that size, right? 
Then in that case we would still have the chunks being referenced?
So if we go the soft-reference way then maybe it is much easier: when we really 
convert from CSLM to CellChunkMap, at that point we can convert the soft ref to 
a hard ref by having another map in the ChunkCreator. Any soft ref will 
automatically get removed when the GC decides to clear it during a GC cycle.
One doubt - will the soft ref get cleared only when an OOME is hit?

> Create a cell type so that chunk id is embedded in it
> -
>
> Key: HBASE-16438
> URL: https://issues.apache.org/jira/browse/HBASE-16438
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-16438_1.patch, 
> HBASE-16438_3_ChunkCreatorwrappingChunkPool.patch, 
> HBASE-16438_4_ChunkCreatorwrappingChunkPool.patch, HBASE-16438.patch, 
> MemstoreChunkCell_memstoreChunkCreator_oldversion.patch, 
> MemstoreChunkCell_trunk.patch
>
>
> For CellChunkMap we may need a cell such that the chunk out of which it was 
> created, the id of the chunk be embedded in it so that when doing flattening 
> we can use the chunk id as a meta data. More details will follow once the 
> initial tasks are completed. 
> Why we need to embed the chunkid in the Cell is described by [~anastas] in 
> this remark over in parent issue 
> https://issues.apache.org/jira/browse/HBASE-14921?focusedCommentId=15244119&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15244119



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16438) Create a cell type so that chunk id is embedded in it

2017-03-27 Thread Anastasia Braginsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943148#comment-15943148
 ] 

Anastasia Braginsky commented on HBASE-16438:
-

bq. One doubt - the Soft ref will get cleared only when the OOME is hit?

I don't think so. This is what they say in the Oracle documentation:

All soft references to softly-reachable objects are guaranteed to have been 
cleared before the virtual machine throws an OutOfMemoryError. Otherwise no 
constraints are placed upon the time at which a soft reference will be cleared 
or the order in which a set of such references to different objects will be 
cleared. Virtual machine implementations are, however, encouraged to bias 
against clearing recently-created or recently-used soft references. 
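
A tiny standalone demo of that behaviour (nothing HBase-specific; run with a 
small heap, e.g. -Xmx64m, to see it quickly):

{code}
import java.lang.ref.SoftReference;
import java.util.ArrayList;
import java.util.List;

public class SoftRefDemo {
  public static void main(String[] args) {
    // Only the SoftReference keeps this array alive, so it is softly reachable.
    SoftReference<byte[]> soft = new SoftReference<>(new byte[16 * 1024 * 1024]);

    List<byte[]> pressure = new ArrayList<>();
    try {
      // Allocate until the heap is nearly exhausted. The JVM must clear
      // softly-reachable objects before throwing OutOfMemoryError, and it is
      // free to clear them earlier at its own discretion.
      while (soft.get() != null) {
        pressure.add(new byte[4 * 1024 * 1024]);
      }
      System.out.println("soft reference cleared before any OOME");
    } catch (OutOfMemoryError oom) {
      pressure.clear();
      System.out.println("OOME hit; soft ref cleared = " + (soft.get() == null));
    }
  }
}
{code}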

> Create a cell type so that chunk id is embedded in it
> -
>
> Key: HBASE-16438
> URL: https://issues.apache.org/jira/browse/HBASE-16438
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-16438_1.patch, 
> HBASE-16438_3_ChunkCreatorwrappingChunkPool.patch, 
> HBASE-16438_4_ChunkCreatorwrappingChunkPool.patch, HBASE-16438.patch, 
> MemstoreChunkCell_memstoreChunkCreator_oldversion.patch, 
> MemstoreChunkCell_trunk.patch
>
>
> For CellChunkMap we may need a cell such that the chunk out of which it was 
> created, the id of the chunk be embedded in it so that when doing flattening 
> we can use the chunk id as a meta data. More details will follow once the 
> initial tasks are completed. 
> Why we need to embed the chunkid in the Cell is described by [~anastas] in 
> this remark over in parent issue 
> https://issues.apache.org/jira/browse/HBASE-14921?focusedCommentId=15244119&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15244119



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16438) Create a cell type so that chunk id is embedded in it

2017-03-27 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943152#comment-15943152
 ] 

Anoop Sam John commented on HBASE-16438:


The chance of getting GCed might be a bit higher when the chunk is actually 
removed from the map (no refs to it at all) than when it is still referenced 
via one SoftRef. Yes, we are not sure when it will get collected by the GC.

> Create a cell type so that chunk id is embedded in it
> -
>
> Key: HBASE-16438
> URL: https://issues.apache.org/jira/browse/HBASE-16438
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-16438_1.patch, 
> HBASE-16438_3_ChunkCreatorwrappingChunkPool.patch, 
> HBASE-16438_4_ChunkCreatorwrappingChunkPool.patch, HBASE-16438.patch, 
> MemstoreChunkCell_memstoreChunkCreator_oldversion.patch, 
> MemstoreChunkCell_trunk.patch
>
>
> For CellChunkMap we may need a cell such that the chunk out of which it was 
> created, the id of the chunk be embedded in it so that when doing flattening 
> we can use the chunk id as a meta data. More details will follow once the 
> initial tasks are completed. 
> Why we need to embed the chunkid in the Cell is described by [~anastas] in 
> this remark over in parent issue 
> https://issues.apache.org/jira/browse/HBASE-14921?focusedCommentId=15244119&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15244119



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17765) Reviving the merge possibility in the CompactingMemStore

2017-03-27 Thread Anastasia Braginsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943154#comment-15943154
 ] 

Anastasia Braginsky commented on HBASE-17765:
-

OK, all green and I have a +1...
So I think I am going to commit this as my first real commit :)

> Reviving the merge possibility in the CompactingMemStore
> 
>
> Key: HBASE-17765
> URL: https://issues.apache.org/jira/browse/HBASE-17765
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Fix For: 2.0.0
>
> Attachments: HBASE-17765-V01.patch, HBASE-17765-V02.patch, 
> HBASE-17765-V03.patch, HBASE-17765-V04.patch, HBASE-17765-V05.patch
>
>
> According to the new performance results presented in the HBASE-16417 we see 
> that the read latency of the 90th percentile of the BASIC policy is too big 
> due to the need to traverse through too many segments in the pipeline. In 
> this JIRA we correct the bug in the merge sizing calculations and allow 
> pipeline size threshold to be a configurable parameter.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17765) Reviving the merge possibility in the CompactingMemStore

2017-03-27 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943159#comment-15943159
 ] 

Anoop Sam John commented on HBASE-17765:


Go for it. :-)

> Reviving the merge possibility in the CompactingMemStore
> 
>
> Key: HBASE-17765
> URL: https://issues.apache.org/jira/browse/HBASE-17765
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Fix For: 2.0.0
>
> Attachments: HBASE-17765-V01.patch, HBASE-17765-V02.patch, 
> HBASE-17765-V03.patch, HBASE-17765-V04.patch, HBASE-17765-V05.patch
>
>
> According to the new performance results presented in the HBASE-16417 we see 
> that the read latency of the 90th percentile of the BASIC policy is too big 
> due to the need to traverse through too many segments in the pipeline. In 
> this JIRA we correct the bug in the merge sizing calculations and allow 
> pipeline size threshold to be a configurable parameter.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17831) Support small scan in thrift2

2017-03-27 Thread Guangxu Cheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guangxu Cheng updated HBASE-17831:
--
Attachment: HBASE-17831-master-v1.patch

> Support small scan in thrift2
> -
>
> Key: HBASE-17831
> URL: https://issues.apache.org/jira/browse/HBASE-17831
> Project: HBase
>  Issue Type: Improvement
>  Components: Thrift
>Reporter: Guangxu Cheng
> Attachments: HBASE-17831-branch-1.patch, HBASE-17831-master.patch, 
> HBASE-17831-master-v1.patch
>
>
> Support small scan in thrift2



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16438) Create a cell type so that chunk id is embedded in it

2017-03-27 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943174#comment-15943174
 ] 

ramkrishna.s.vasudevan commented on HBASE-16438:


bq.Chance of getting GCed might be bit more when chunk is actually removed from 
Map (No refs to it at all) than still referring with one SoftRef. 
That is why I thought of going with the ref-count way. Maybe it is heavy 
lifting, but we can try the soft-ref way. So shall we do that also in this 
JIRA, or should we just finish the patch in its current state and open a new 
JIRA to solve the GC problem that we may face?

> Create a cell type so that chunk id is embedded in it
> -
>
> Key: HBASE-16438
> URL: https://issues.apache.org/jira/browse/HBASE-16438
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-16438_1.patch, 
> HBASE-16438_3_ChunkCreatorwrappingChunkPool.patch, 
> HBASE-16438_4_ChunkCreatorwrappingChunkPool.patch, HBASE-16438.patch, 
> MemstoreChunkCell_memstoreChunkCreator_oldversion.patch, 
> MemstoreChunkCell_trunk.patch
>
>
> For CellChunkMap we may need a cell such that the chunk out of which it was 
> created, the id of the chunk be embedded in it so that when doing flattening 
> we can use the chunk id as a meta data. More details will follow once the 
> initial tasks are completed. 
> Why we need to embed the chunkid in the Cell is described by [~anastas] in 
> this remark over in parent issue 
> https://issues.apache.org/jira/browse/HBASE-14921?focusedCommentId=15244119&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15244119



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17831) Support small scan in thrift2

2017-03-27 Thread Guangxu Cheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943176#comment-15943176
 ] 

Guangxu Cheng commented on HBASE-17831:
---

The v2 patch uses setReadType(ReadType)

> Support small scan in thrift2
> -
>
> Key: HBASE-17831
> URL: https://issues.apache.org/jira/browse/HBASE-17831
> Project: HBase
>  Issue Type: Improvement
>  Components: Thrift
>Reporter: Guangxu Cheng
> Attachments: HBASE-17831-branch-1.patch, HBASE-17831-master.patch, 
> HBASE-17831-master-v1.patch
>
>
> Support small scan in thrift2



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17765) Reviving the merge possibility in the CompactingMemStore

2017-03-27 Thread Anastasia Braginsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anastasia Braginsky updated HBASE-17765:

  Resolution: Fixed
Release Note: Reviving the merge of the compacting pipeline: making the 
limit on the number of the segments in the pipeline configurable, adding merge 
test, fixing bug in sizes counting
  Status: Resolved  (was: Patch Available)

> Reviving the merge possibility in the CompactingMemStore
> 
>
> Key: HBASE-17765
> URL: https://issues.apache.org/jira/browse/HBASE-17765
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Fix For: 2.0.0
>
> Attachments: HBASE-17765-V01.patch, HBASE-17765-V02.patch, 
> HBASE-17765-V03.patch, HBASE-17765-V04.patch, HBASE-17765-V05.patch
>
>
> According to the new performance results presented in the HBASE-16417 we see 
> that the read latency of the 90th percentile of the BASIC policy is too big 
> due to the need to traverse through too many segments in the pipeline. In 
> this JIRA we correct the bug in the merge sizing calculations and allow 
> pipeline size threshold to be a configurable parameter.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17765) Reviving the merge possibility in the CompactingMemStore

2017-03-27 Thread Anastasia Braginsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943181#comment-15943181
 ] 

Anastasia Braginsky commented on HBASE-17765:
-

Thank you! Hope I did all well :)

> Reviving the merge possibility in the CompactingMemStore
> 
>
> Key: HBASE-17765
> URL: https://issues.apache.org/jira/browse/HBASE-17765
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Fix For: 2.0.0
>
> Attachments: HBASE-17765-V01.patch, HBASE-17765-V02.patch, 
> HBASE-17765-V03.patch, HBASE-17765-V04.patch, HBASE-17765-V05.patch
>
>
> According to the new performance results presented in the HBASE-16417 we see 
> that the read latency of the 90th percentile of the BASIC policy is too big 
> due to the need to traverse through too many segments in the pipeline. In 
> this JIRA we correct the bug in the merge sizing calculations and allow 
> pipeline size threshold to be a configurable parameter.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17837) Backport HBASE-15314 to branch-1.3

2017-03-27 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943185#comment-15943185
 ] 

ramkrishna.s.vasudevan commented on HBASE-17837:


I created it based on branch-1.3. I think it should apply to branch-1 also. If 
not, I will prepare a new patch for that branch.

> Backport HBASE-15314 to branch-1.3
> --
>
> Key: HBASE-17837
> URL: https://issues.apache.org/jira/browse/HBASE-17837
> Project: HBase
>  Issue Type: Improvement
>  Components: BucketCache
>Affects Versions: 1.3.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 1.4.0, 1.3.1
>
> Attachments: HBASE-15314-branch-1.3.patch
>
>
> Backport of HBASE-15314.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17765) Reviving the merge possibility in the CompactingMemStore

2017-03-27 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943208#comment-15943208
 ] 

Anoop Sam John commented on HBASE-17765:


The release note needs a change. You just need to mention the new config that 
was added, its default value (if any), and what the new config is used for. No 
need to mention the sizing bug fix etc.

> Reviving the merge possibility in the CompactingMemStore
> 
>
> Key: HBASE-17765
> URL: https://issues.apache.org/jira/browse/HBASE-17765
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Fix For: 2.0.0
>
> Attachments: HBASE-17765-V01.patch, HBASE-17765-V02.patch, 
> HBASE-17765-V03.patch, HBASE-17765-V04.patch, HBASE-17765-V05.patch
>
>
> According to the new performance results presented in the HBASE-16417 we see 
> that the read latency of the 90th percentile of the BASIC policy is too big 
> due to the need to traverse through too many segments in the pipeline. In 
> this JIRA we correct the bug in the merge sizing calculations and allow 
> pipeline size threshold to be a configurable parameter.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16438) Create a cell type so that chunk id is embedded in it

2017-03-27 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943210#comment-15943210
 ] 

Anoop Sam John commented on HBASE-16438:


A new issue is OK. Ping [~carp84].. please get his buy-in, as he fixed the 
other issue.

> Create a cell type so that chunk id is embedded in it
> -
>
> Key: HBASE-16438
> URL: https://issues.apache.org/jira/browse/HBASE-16438
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-16438_1.patch, 
> HBASE-16438_3_ChunkCreatorwrappingChunkPool.patch, 
> HBASE-16438_4_ChunkCreatorwrappingChunkPool.patch, HBASE-16438.patch, 
> MemstoreChunkCell_memstoreChunkCreator_oldversion.patch, 
> MemstoreChunkCell_trunk.patch
>
>
> For CellChunkMap we may need a cell such that the chunk out of which it was 
> created, the id of the chunk be embedded in it so that when doing flattening 
> we can use the chunk id as a meta data. More details will follow once the 
> initial tasks are completed. 
> Why we need to embed the chunkid in the Cell is described by [~anastas] in 
> this remark over in parent issue 
> https://issues.apache.org/jira/browse/HBASE-14921?focusedCommentId=15244119&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15244119



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17633) Update unflushed sequence id in SequenceIdAccounting after flush with the minimum sequence id in memstore

2017-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943214#comment-15943214
 ] 

Hadoop QA commented on HBASE-17633:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 29s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
12s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
39s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
24s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
17s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 49s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 18s 
{color} | {color:red} hbase-server generated 1 new + 0 unchanged - 0 fixed = 1 
total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 56s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 101m 52s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
42s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 148m 47s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hbase-server |
|  |  
org.apache.hadoop.hbase.regionserver.HRegion.internalPrepareFlushCache(WAL, 
long, Collection, MonitoredTask, boolean) does not release lock on all 
exception paths  At HRegion.java:does not release lock on all exception paths  
At HRegion.java:[line 2372] |
| Failed junit tests | hadoop.hbase.regionserver.TestSplitWalDataLoss |
|   | hadoop.hbase.regionserver.TestPerColumnFamilyFlush |
|   | hadoop.hbase.mapreduce.TestWALRecordReader |
|   | hadoop.hbase.wal.TestWALRootDir |
|   | hadoop.hbase.regionserver.TestWalAndCompactingMemStoreFlush |
|   | hadoop.hbase.wal.TestBoundedRegionGroupingStrategy |
|   | hadoop.hbase.regionserver.TestRecoveredEdits |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860636/HBASE-17633-v1.patch |
| JIRA Issue | HBASE-17633 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopc

[jira] [Commented] (HBASE-16438) Create a cell type so that chunk id is embedded in it

2017-03-27 Thread Anastasia Braginsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943235#comment-15943235
 ] 

Anastasia Braginsky commented on HBASE-16438:
-

bq. Chance of getting GCed might be bit more when chunk is actually removed 
from Map (No refs to it at all) than still referring with one SoftRef.

I am sure you'll see no difference. Think about the chances that all the cells 
in a (big) chunk are replaced before it is flushed in memory.

bq. So shall we do that also in this JIRA or should we just finish the patch in 
the current state and go with a new JIRA to solve the GC problem that we may 
face?

As we are designing the Creator now, I think it should be written appropriately 
from the beginning, i.e. with "convertible" references in the map, if we choose 
that way. Why write it one way and then rewrite it?

> Create a cell type so that chunk id is embedded in it
> -
>
> Key: HBASE-16438
> URL: https://issues.apache.org/jira/browse/HBASE-16438
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Attachments: HBASE-16438_1.patch, 
> HBASE-16438_3_ChunkCreatorwrappingChunkPool.patch, 
> HBASE-16438_4_ChunkCreatorwrappingChunkPool.patch, HBASE-16438.patch, 
> MemstoreChunkCell_memstoreChunkCreator_oldversion.patch, 
> MemstoreChunkCell_trunk.patch
>
>
> For CellChunkMap we may need a cell such that the chunk out of which it was 
> created, the id of the chunk be embedded in it so that when doing flattening 
> we can use the chunk id as a meta data. More details will follow once the 
> initial tasks are completed. 
> Why we need to embed the chunkid in the Cell is described by [~anastas] in 
> this remark over in parent issue 
> https://issues.apache.org/jira/browse/HBASE-14921?focusedCommentId=15244119&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15244119



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17765) Reviving the merge possibility in the CompactingMemStore

2017-03-27 Thread Anastasia Braginsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943240#comment-15943240
 ] 

Anastasia Braginsky commented on HBASE-17765:
-

Changed, thanks!

> Reviving the merge possibility in the CompactingMemStore
> 
>
> Key: HBASE-17765
> URL: https://issues.apache.org/jira/browse/HBASE-17765
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Fix For: 2.0.0
>
> Attachments: HBASE-17765-V01.patch, HBASE-17765-V02.patch, 
> HBASE-17765-V03.patch, HBASE-17765-V04.patch, HBASE-17765-V05.patch
>
>
> According to the new performance results presented in the HBASE-16417 we see 
> that the read latency of the 90th percentile of the BASIC policy is too big 
> due to the need to traverse through too many segments in the pipeline. In 
> this JIRA we correct the bug in the merge sizing calculations and allow 
> pipeline size threshold to be a configurable parameter.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17765) Reviving the merge possibility in the CompactingMemStore

2017-03-27 Thread Anastasia Braginsky (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anastasia Braginsky updated HBASE-17765:

Release Note: 
Reviving the merge of the compacting pipeline: making the limit on the number 
of the segments in the pipeline configurable and adding the merge test.

In order to customize the pipeline size limit, change the value of 
"hbase.hregion.compacting.pipeline.segments.limit" in hbase-site.xml (see the 
sketch below).

A value of 1 means the segments are merged on every in-memory flush. A value 
higher than 16 means no merge.

  was:Reviving the merge of the compacting pipeline: making the limit on the 
number of the segments in the pipeline configurable, adding merge test, fixing 
bug in sizes counting
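
For reference, a minimal sketch of setting the new property (the hbase-site.xml 
route from the release note is shown as a comment; 4 is just an arbitrary 
example value, not the default):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class PipelineLimitConfig {
  public static void main(String[] args) {
    // Equivalent hbase-site.xml entry:
    //   <property>
    //     <name>hbase.hregion.compacting.pipeline.segments.limit</name>
    //     <value>4</value>
    //   </property>
    Configuration conf = HBaseConfiguration.create();
    conf.setInt("hbase.hregion.compacting.pipeline.segments.limit", 4);
    System.out.println(conf.get("hbase.hregion.compacting.pipeline.segments.limit"));
  }
}
{code}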


> Reviving the merge possibility in the CompactingMemStore
> 
>
> Key: HBASE-17765
> URL: https://issues.apache.org/jira/browse/HBASE-17765
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Fix For: 2.0.0
>
> Attachments: HBASE-17765-V01.patch, HBASE-17765-V02.patch, 
> HBASE-17765-V03.patch, HBASE-17765-V04.patch, HBASE-17765-V05.patch
>
>
> According to the new performance results presented in the HBASE-16417 we see 
> that the read latency of the 90th percentile of the BASIC policy is too big 
> due to the need to traverse through too many segments in the pipeline. In 
> this JIRA we correct the bug in the merge sizing calculations and allow 
> pipeline size threshold to be a configurable parameter.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17633) Update unflushed sequence id in SequenceIdAccounting after flush with the minimum sequence id in memstore

2017-03-27 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943244#comment-15943244
 ] 

Duo Zhang commented on HBASE-17633:
---

{quote}
can you please post a link to the RB, thanks.
{quote}

[~eshcar] This is only an initial patch. I want to try a precommit first to see 
if I missed something. I will upload it to RB after I can confirm that the 
approach works.

Thanks. Let me check the failed UTs first.

> Update unflushed sequence id in SequenceIdAccounting after flush with the 
> minimum sequence id in memstore
> -
>
> Key: HBASE-17633
> URL: https://issues.apache.org/jira/browse/HBASE-17633
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17633.patch, HBASE-17633-v1.patch
>
>
> Now the tracking work is done by SequenceIdAccounting. And it is a little 
> tricky when dealing with flush. We should remove the mapping for the given 
> stores of a region from lowestUnflushedSequenceIds, so that we have space to 
> store the new lowest unflushed sequence id after flush. But we still need to 
> keep the old sequence ids in another map as we still need to use these values 
> when reporting to master to prevent data loss(think of the scenario that we 
> report the new lowest unflushed sequence id to master and we crashed before 
> actually flushed the data to disk).
> And when reviewing HBASE-17407, I found  that for CompactingMemStore, we have 
> to record the minimum sequence id.in memstore. We could just update the 
> mappings in SequenceIdAccounting using these values after flush. This means 
> we do not need to update the lowest unflushed sequence id in 
> SequenceIdAccounting, and also do not need to make space for the new lowest 
> unflushed when startCacheFlush, and also do not need the extra map to store 
> the old mappings.
> This could simplify our logic a lot. But this is a fundamental change so I 
> need sometime to implement, especially for modifying tests... And I also need 
> sometime to check if I miss something.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17633) Update unflushed sequence id in SequenceIdAccounting after flush with the minimum sequence id in memstore

2017-03-27 Thread Eshcar Hillel (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943252#comment-15943252
 ] 

Eshcar Hillel commented on HBASE-17633:
---

(y)

> Update unflushed sequence id in SequenceIdAccounting after flush with the 
> minimum sequence id in memstore
> -
>
> Key: HBASE-17633
> URL: https://issues.apache.org/jira/browse/HBASE-17633
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17633.patch, HBASE-17633-v1.patch
>
>
> Now the tracking work is done by SequenceIdAccounting. And it is a little 
> tricky when dealing with flush. We should remove the mapping for the given 
> stores of a region from lowestUnflushedSequenceIds, so that we have space to 
> store the new lowest unflushed sequence id after flush. But we still need to 
> keep the old sequence ids in another map as we still need to use these values 
> when reporting to master to prevent data loss(think of the scenario that we 
> report the new lowest unflushed sequence id to master and we crashed before 
> actually flushed the data to disk).
> And when reviewing HBASE-17407, I found  that for CompactingMemStore, we have 
> to record the minimum sequence id.in memstore. We could just update the 
> mappings in SequenceIdAccounting using these values after flush. This means 
> we do not need to update the lowest unflushed sequence id in 
> SequenceIdAccounting, and also do not need to make space for the new lowest 
> unflushed when startCacheFlush, and also do not need the extra map to store 
> the old mappings.
> This could simplify our logic a lot. But this is a fundamental change so I 
> need sometime to implement, especially for modifying tests... And I also need 
> sometime to check if I miss something.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-17839) "Data Model" section: Table 1 has only 5 data rows instead 6.

2017-03-27 Thread Evgeny Kincharov (JIRA)
Evgeny Kincharov created HBASE-17839:


 Summary: "Data Model" section: Table 1 has only 5 data rows 
instead 6.
 Key: HBASE-17839
 URL: https://issues.apache.org/jira/browse/HBASE-17839
 Project: HBase
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.0.0
Reporter: Evgeny Kincharov
Priority: Trivial






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17831) Support small scan in thrift2

2017-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943271#comment-15943271
 ] 

Hadoop QA commented on HBASE-17831:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
18s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
28s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
10s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 13 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
33m 18s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 37s 
{color} | {color:green} hbase-thrift in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
8s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 58s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860656/HBASE-17831-master-v1.patch
 |
| JIRA Issue | HBASE-17831 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 01690fe6d5e0 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / c77e213 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6229/artifact/patchprocess/whitespace-eol.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6229/testReport/ |
| modules | C: hbase-thrift U: hbase-thrift |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6229/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Support small scan in thrift2
> -
>
> Key: HBASE-17831
> URL: https://issues.apache.org/jira/browse/HBASE-17831
> Project: HBase
>  Issue Type: Improvement
>  Components: Thrift
>Reporter: Guangxu Cheng
> A

[jira] [Updated] (HBASE-17839) "Data Model" section: Table 1 has only 5 data rows instead 6.

2017-03-27 Thread Evgeny Kincharov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Evgeny Kincharov updated HBASE-17839:
-
Status: Patch Available  (was: Open)

> "Data Model" section: Table 1 has only 5 data rows instead 6.
> -
>
> Key: HBASE-17839
> URL: https://issues.apache.org/jira/browse/HBASE-17839
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Evgeny Kincharov
>Priority: Trivial
>  Labels: documentation, patch-available, trivial
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17839) "Data Model" section: Table 1 has only 5 data rows instead 6.

2017-03-27 Thread Evgeny Kincharov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Evgeny Kincharov updated HBASE-17839:
-
Attachment: 823ac3e0e619f7531513efe6e4b43a6fea7437c6.patch

> "Data Model" section: Table 1 has only 5 data rows instead 6.
> -
>
> Key: HBASE-17839
> URL: https://issues.apache.org/jira/browse/HBASE-17839
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Evgeny Kincharov
>Priority: Trivial
>  Labels: documentation, patch-available, trivial
> Attachments: 823ac3e0e619f7531513efe6e4b43a6fea7437c6.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17839) "Data Model" section: Table 1 has only 5 data rows instead 6.

2017-03-27 Thread Evgeny Kincharov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Evgeny Kincharov updated HBASE-17839:
-
Description: 
The table from http://hbase.apache.org/book.html#conceptual.view has only 5 
data rows.
Originally intended that there will be 6:
https://raw.githubusercontent.com/apache/hbase/master/src/main/asciidoc/_chapters/datamodel.adoc

> "Data Model" section: Table 1 has only 5 data rows instead 6.
> -
>
> Key: HBASE-17839
> URL: https://issues.apache.org/jira/browse/HBASE-17839
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Evgeny Kincharov
>Priority: Trivial
>  Labels: documentation, patch-available, trivial
> Attachments: 823ac3e0e619f7531513efe6e4b43a6fea7437c6.patch
>
>
> The table from http://hbase.apache.org/book.html#conceptual.view has only 5 
> data rows.
> Originally intended that there will be 6:
> https://raw.githubusercontent.com/apache/hbase/master/src/main/asciidoc/_chapters/datamodel.adoc



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17524) HBase 1.3.1 release

2017-03-27 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943290#comment-15943290
 ] 

Yu Li commented on HBASE-17524:
---

[~mantonov] Boss, since it seems most of the linked JIRAs have been closed, do 
we have a detailed plan for 1.3.1? I could help track and close the open ones 
if needed. Thanks.

> HBase 1.3.1 release
> ---
>
> Key: HBASE-17524
> URL: https://issues.apache.org/jira/browse/HBASE-17524
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Mikhail Antonov
>
> Let's have this umbrella jira to track backports and bufgixes that we want to 
> go to 1.3.1 release.
> Please add tasks and comment if you want something to be backported.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16780) Since move to protobuf3.1, Cells are limited to 64MB where previous they had no limit

2017-03-27 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-16780:
--
Attachment: HBASE-16780.master.001.patch

> Since move to protobuf3.1, Cells are limited to 64MB where previous they had 
> no limit
> -
>
> Key: HBASE-16780
> URL: https://issues.apache.org/jira/browse/HBASE-16780
> Project: HBase
>  Issue Type: Bug
>  Components: Protobufs
>Affects Versions: 2.0.0
>Reporter: stack
>Priority: Critical
> Attachments: HBASE-16780.master.001.patch
>
>
> Change in protobuf behavior noticed by [~mbertozzi]. His test 
> TestStressWALProcedureStore#testEntrySizeLimit keeps upping size we write and 
> he found that now we are bound at 64MB. Digging, yeah, there is a check in 
> place that was not there before. Filed 
> https://github.com/grpc/grpc-java/issues/2324 but making issue here in 
> meantime in case we have to note a change-in-behavior in hbase-2.0.0



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16780) Since move to protobuf3.1, Cells are limited to 64MB where previous they had no limit

2017-03-27 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-16780:
--
Assignee: stack
Release Note: Upgrade internal pb to 3.2 from 3.1. 3.2 has fix for 64MB 
limit.
  Status: Patch Available  (was: Open)

> Since move to protobuf3.1, Cells are limited to 64MB where previous they had 
> no limit
> -
>
> Key: HBASE-16780
> URL: https://issues.apache.org/jira/browse/HBASE-16780
> Project: HBase
>  Issue Type: Bug
>  Components: Protobufs
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Attachments: HBASE-16780.master.001.patch
>
>
> Change in protobuf behavior noticed by [~mbertozzi]. His test 
> TestStressWALProcedureStore#testEntrySizeLimit keeps upping size we write and 
> he found that now we are bound at 64MB. Digging, yeah, there is a check in 
> place that was not there before. Filed 
> https://github.com/grpc/grpc-java/issues/2324 but making issue here in 
> meantime in case we have to note a change-in-behavior in hbase-2.0.0



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17633) Update unflushed sequence id in SequenceIdAccounting after flush with the minimum sequence id in memstore

2017-03-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943427#comment-15943427
 ] 

stack commented on HBASE-17633:
---

Skimmed. Looks great. Can help do deeper review later.

> Update unflushed sequence id in SequenceIdAccounting after flush with the 
> minimum sequence id in memstore
> -
>
> Key: HBASE-17633
> URL: https://issues.apache.org/jira/browse/HBASE-17633
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17633.patch, HBASE-17633-v1.patch
>
>
> Now the tracking work is done by SequenceIdAccounting. And it is a little 
> tricky when dealing with flush. We should remove the mapping for the given 
> stores of a region from lowestUnflushedSequenceIds, so that we have space to 
> store the new lowest unflushed sequence id after flush. But we still need to 
> keep the old sequence ids in another map as we still need to use these values 
> when reporting to master to prevent data loss (think of the scenario where we 
> report the new lowest unflushed sequence id to master and then crash before 
> actually flushing the data to disk).
> And when reviewing HBASE-17407, I found that for CompactingMemStore we have 
> to record the minimum sequence id in the memstore. We could just update the 
> mappings in SequenceIdAccounting using these values after flush. This means 
> we do not need to update the lowest unflushed sequence id in 
> SequenceIdAccounting, do not need to make space for the new lowest unflushed 
> id in startCacheFlush, and do not need the extra map to store the old 
> mappings.
> This could simplify our logic a lot. But this is a fundamental change, so I 
> need some time to implement it, especially for modifying tests... And I also 
> need some time to check whether I have missed something.
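
To make the proposed bookkeeping concrete, here is a minimal sketch of the idea, assuming one per-region, per-family map of lowest unflushed sequence ids. The class and method names are hypothetical and heavily simplified; this is not the real SequenceIdAccounting API.

{code}
import java.util.Collections;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical, simplified sketch of the idea above; not the real API.
class SeqIdBookkeepingSketch {
  // lowest unflushed sequence id per (encoded region name -> column family)
  private final Map<String, Map<String, Long>> lowestUnflushed = new ConcurrentHashMap<>();

  void onAppend(String region, String family, long seqId) {
    lowestUnflushed.computeIfAbsent(region, r -> new ConcurrentHashMap<>())
        .merge(family, seqId, Math::min);
  }

  // After a flush, simply overwrite the entry with the minimum sequence id
  // still held in the memstore (Long.MAX_VALUE if it is now empty). No second
  // "flushing" map is needed because nothing has to be moved aside first.
  void onFlushCompleted(String region, String family, long minSeqIdStillInMemstore) {
    lowestUnflushed.computeIfAbsent(region, r -> new ConcurrentHashMap<>())
        .put(family, minSeqIdStillInMemstore);
  }

  // Value reported to the master: the lowest unflushed sequence id of a region.
  long lowestUnflushedSeqId(String region) {
    return lowestUnflushed.getOrDefault(region, Collections.emptyMap()).values()
        .stream().mapToLong(Long::longValue).min().orElse(Long.MAX_VALUE);
  }
}
{code}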



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17765) Reviving the merge possibility in the CompactingMemStore

2017-03-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943431#comment-15943431
 ] 

stack commented on HBASE-17765:
---

+1

> Reviving the merge possibility in the CompactingMemStore
> 
>
> Key: HBASE-17765
> URL: https://issues.apache.org/jira/browse/HBASE-17765
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Fix For: 2.0.0
>
> Attachments: HBASE-17765-V01.patch, HBASE-17765-V02.patch, 
> HBASE-17765-V03.patch, HBASE-17765-V04.patch, HBASE-17765-V05.patch
>
>
> According to the new performance results presented in HBASE-16417, we see 
> that the 90th-percentile read latency of the BASIC policy is too high due to 
> the need to traverse too many segments in the pipeline. In this JIRA we 
> correct the bug in the merge sizing calculations and allow the pipeline size 
> threshold to be a configurable parameter.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17835) Spelling mistakes in the Java source

2017-03-27 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-17835:
---
Fix Version/s: 2.0.0

> Spelling mistakes in the Java source
> 
>
> Key: HBASE-17835
> URL: https://issues.apache.org/jira/browse/HBASE-17835
> Project: HBase
>  Issue Type: Improvement
>Reporter: Qilin Cao
>Priority: Trivial
> Fix For: 2.0.0
>
> Attachments: HBASE-17835-001.patch
>
>
> I found spelling mistakes in the HBase Java source files, viz. "recieved" 
> instead of "received", and "SpanReciever" instead of "SpanReceiver".



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17835) Spelling mistakes in the Java source

2017-03-27 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-17835:
---
Status: Patch Available  (was: Open)

> Spelling mistakes in the Java source
> 
>
> Key: HBASE-17835
> URL: https://issues.apache.org/jira/browse/HBASE-17835
> Project: HBase
>  Issue Type: Improvement
>Reporter: Qilin Cao
>Priority: Trivial
> Fix For: 2.0.0
>
> Attachments: HBASE-17835-001.patch
>
>
> I found spelling mistakes in the HBase Java source files, viz. "recieved" 
> instead of "received", and "SpanReciever" instead of "SpanReceiver".



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16780) Since move to protobuf3.1, Cells are limited to 64MB where previously they had no limit

2017-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943459#comment-15943459
 ] 

Hadoop QA commented on HBASE-16780:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
20s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
29m 54s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 29s 
{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
9s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 41s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860667/HBASE-16780.master.001.patch
 |
| JIRA Issue | HBASE-16780 |
| Optional Tests |  asflicense  javac  javadoc  unit  xml  compile  |
| uname | Linux 8a00abeb4a01 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / c77e213 |
| Default Java | 1.8.0_121 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6231/testReport/ |
| modules | C: hbase-protocol-shaded U: hbase-protocol-shaded |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6231/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Since move to protobuf3.1, Cells are limited to 64MB where previously they 
> had no limit
> -
>
> Key: HBASE-16780
> URL: https://issues.apache.org/jira/browse/HBASE-16780
> Project: HBase
>  Issue Type: Bug
>  Components: Protobufs
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Attachments: HBASE-16780.master.001.patch
>
>
> Change in protobuf behavior noticed by [~mbertozzi]. His test 
> TestStressWALProcedureStore#testEntrySizeLimit keeps upping size we write and 
> he found that now we are bound at 64MB. Digging, yeah, there is a check in 
> place that was not there before. Filed 
> https://github.com/grpc/grpc-java/issues/2324 but making issue here in 
> meantime in case we have 

[jira] [Commented] (HBASE-17838) Replace fixed Executor Threads with dynamic thread pool

2017-03-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943461#comment-15943461
 ] 

stack commented on HBASE-17838:
---

bq. the default logic is somehow dynamic in the number of threads allocated, is 
there any advantage of replacing it other than having ThreadPoolExecutor as a 
common implementation?

I've not done the study to see how the worker thread count currently changes 
over the life of a running procedure engine; I thought it was constant?

bq. the default implementation uses a custom configurable logic to create new 
threads (definition of stuck threads, ratio of stuck threads etc). I think it 
would be quite hacky to put the same logic to a custom threadpoolexecutor

Probably

bq. in the current design, executor worker threads poll the scheduler to 
gather tasks, which I tried to maintain in this patch, but it would feel more 
natural to make the scheduler submit tasks to the executor.

Yes. For an executor, submit would make more sense. That'd be a radical change 
though.

bq. In sum: Is this a good direction at all?

Doesn't seem so, not until we have more experience with the procedure engine. 
Want to leave this aside then [~gubjanos]? Or just close it, since you've done 
the study and we can mark it off the list in the parent issue?

Thanks.
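
For reference, a minimal sketch of what a "dynamic" pool with submit-style hand-off could look like using plain java.util.concurrent; this is illustrative only, not the procedure-v2 executor, and the pool sizes are arbitrary.

{code}
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Illustrative only; not the ProcedureExecutor. Sizes are arbitrary.
public class DynamicWorkerPoolSketch {
  public static void main(String[] args) {
    ThreadPoolExecutor pool = new ThreadPoolExecutor(
        4,                                // core worker threads
        32,                               // extra threads are added only once the queue is full
        60L, TimeUnit.SECONDS,            // idle extra threads are reclaimed after this
        new ArrayBlockingQueue<>(1024));  // bounded hand-off queue
    pool.allowCoreThreadTimeOut(true);

    // Submit-style hand-off: the scheduler would push each runnable procedure
    // into the pool instead of worker threads polling the scheduler.
    pool.execute(() -> System.out.println("run one procedure step"));
    pool.shutdown();
  }
}
{code}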

> Replace fixed Executor Threads with dynamic thread pool 
> 
>
> Key: HBASE-17838
> URL: https://issues.apache.org/jira/browse/HBASE-17838
> Project: HBase
>  Issue Type: Sub-task
>  Components: Performance, proc-v2
>Reporter: Janos Gub
>Assignee: Janos Gub
> Fix For: 2.0.0
>
> Attachments: initial.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16780) Since move to protobuf3.1, Cells are limited to 64MB where previously they had no limit

2017-03-27 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-16780:
--
Attachment: HBASE-16780.master.002.patch

> Since move to protobuf3.1, Cells are limited to 64MB where previously they 
> had no limit
> -
>
> Key: HBASE-16780
> URL: https://issues.apache.org/jira/browse/HBASE-16780
> Project: HBase
>  Issue Type: Bug
>  Components: Protobufs
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Attachments: HBASE-16780.master.001.patch, 
> HBASE-16780.master.002.patch
>
>
> Change in protobuf behavior noticed by [~mbertozzi]. His test 
> TestStressWALProcedureStore#testEntrySizeLimit keeps upping size we write and 
> he found that now we are bound at 64MB. Digging, yeah, there is a check in 
> place that was not there before. Filed 
> https://github.com/grpc/grpc-java/issues/2324 but making issue here in 
> meantime in case we have to note a change-in-behavior in hbase-2.0.0



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16780) Since move to protobuf3.1, Cells are limited to 64MB where previously they had no limit

2017-03-27 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943479#comment-15943479
 ] 

stack commented on HBASE-16780:
---

Add minor edit in hbase-server to be sure all still works.

> Since move to protobuf3.1, Cells are limited to 64MB where previously they 
> had no limit
> -
>
> Key: HBASE-16780
> URL: https://issues.apache.org/jira/browse/HBASE-16780
> Project: HBase
>  Issue Type: Bug
>  Components: Protobufs
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Attachments: HBASE-16780.master.001.patch, 
> HBASE-16780.master.002.patch
>
>
> Change in protobuf behavior noticed by [~mbertozzi]. His test 
> TestStressWALProcedureStore#testEntrySizeLimit keeps upping size we write and 
> he found that now we are bound at 64MB. Digging, yeah, there is a check in 
> place that was not there before. Filed 
> https://github.com/grpc/grpc-java/issues/2324 but making issue here in 
> meantime in case we have to note a change-in-behavior in hbase-2.0.0



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17839) "Data Model" section: Table 1 has only 5 data rows instead of 6.

2017-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943548#comment-15943548
 ] 

Hadoop QA commented on HBASE-17839:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 33s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
10s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
25s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 5s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
3s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
28m 25s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 9s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 117m 36s 
{color} | {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 160m 26s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860661/823ac3e0e619f7531513efe6e4b43a6fea7437c6.patch
 |
| JIRA Issue | HBASE-17839 |
| Optional Tests |  asflicense  javac  javadoc  unit  |
| uname | Linux 8891b6204e0e 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / c77e213 |
| Default Java | 1.8.0_121 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6230/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6230/testReport/ |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6230/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> "Data Model" section: Table 1 has only 5 data rows instead 6.
> -
>
> Key: HBASE-17839
> URL: https://issues.apache.org/jira/browse/HBASE-17839
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.0
>Reporter: Evgeny Kincharov
>Priority: Trivial
>  Labels: documentation, patch-available, trivial
> Attachments: 823ac3e0e619f7531513efe6e4b43a6fea7437c6.patch
>
>
> The table from http://hbase.apache.org/book.html#conceptual.view has only 5 
> data rows.
> It was originally intended that there would be 6:
> https://raw.githubusercontent.com/apache/hbase/master/src/main/asciidoc/_chapters/datamodel.adoc



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17821) The CompoundConfiguration#toString is wrong

2017-03-27 Thread Yi Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943550#comment-15943550
 ] 

Yi Liang commented on HBASE-17821:
--

Chia-Ping and stack, thanks for reviewing.

> The CompoundConfiguration#toString is wrong
> ---
>
> Key: HBASE-17821
> URL: https://issues.apache.org/jira/browse/HBASE-17821
> Project: HBase
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Yi Liang
>Priority: Trivial
>  Labels: beginner
> Attachments: HBase-17821-V1.patch, HBase-17821-V1.patch
>
>
> Found this bug while reading the code. We don't use the API, so it is a trivial bug.
> sb.append(this.configs); -> sb.append(m);
> {noformat}
>   @Override
>   public String toString() {
> StringBuffer sb = new StringBuffer();
> sb.append("CompoundConfiguration: " + this.configs.size() + " configs");
> for (ImmutableConfigMap m : this.configs) {
>   sb.append(this.configs);
> }
> return sb.toString();
>   }
> {noformat}
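
For reference, a corrected sketch of the method described above, appending each wrapped {{ImmutableConfigMap}} {{m}} instead of the whole {{configs}} list (the switch to StringBuilder is incidental; this mirrors the quoted snippet rather than the attached patch):

{code}
  @Override
  public String toString() {
    StringBuilder sb = new StringBuilder();
    sb.append("CompoundConfiguration: " + this.configs.size() + " configs");
    for (ImmutableConfigMap m : this.configs) {
      sb.append(m);   // append each map, not the whole list of configs
    }
    return sb.toString();
  }
{code}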



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17287) Master becomes a zombie if filesystem object closes

2017-03-27 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-17287:
---
Attachment: 17287.branch-1.v3.txt

> Master becomes a zombie if filesystem object closes
> ---
>
> Key: HBASE-17287
> URL: https://issues.apache.org/jira/browse/HBASE-17287
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Reporter: Clay B.
>Assignee: Ted Yu
> Fix For: 1.4.0, 2.0
>
> Attachments: 17287.branch-1.v3.txt, 17287.master.v2.txt, 
> 17287.master.v3.txt, 17287.v2.txt
>
>
> We have seen an issue whereby if the HDFS is unstable and the HBase master's 
> HDFS client is unable to stabilize before 
> {{dfs.client.failover.max.attempts}} then the master's filesystem object 
> closes. This seems to result in an HBase master which will continue to run 
> (process and znode exists) but no meaningful work can be done (e.g. assigning 
> meta). What we saw in our HBase master logs was:
> {code}
> 2016-12-01 19:19:08,192 ERROR org.apache.hadoop.hbase.master.handler.MetaServerShutdownHandler: Caught M_META_SERVER_SHUTDOWN, count=1
> java.io.IOException: failed log splitting for cluster-r5n12.bloomberg.com,60200,1480632863218, will retry
>   at org.apache.hadoop.hbase.master.handler.MetaServerShutdownHandler.process(MetaServerShutdownHandler.java:84)
>   at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: Filesystem closed
> {code}
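
A minimal sketch of the kind of guard this implies: a filesystem check that fails because the DFS client is dead should translate into a master abort instead of a zombie master. The helper below is hypothetical and purely illustrative; it is not the attached patch.

{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.Abortable;

// Hypothetical guard, for illustration only.
final class FsLivenessCheckSketch {
  static void checkFileSystem(FileSystem fs, Path rootDir, Abortable master) {
    try {
      fs.exists(rootDir);   // any cheap call; throws if the underlying client is closed
    } catch (IOException ioe) {
      master.abort("Shutting down HBase master: file system not available", ioe);
    }
  }
}
{code}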



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17339) Scan-Memory-First Optimization for Get Operations

2017-03-27 Thread Ben Manes (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943573#comment-15943573
 ] 

Ben Manes commented on HBASE-17339:
---

I think it's really difficult to tell, but I'd guess that there might be a 
small gain.

Those 30M misses sound compulsory, meaning that they would occur regardless of 
the cache size. Therefore we'd expect an unbounded cache to have an 87% hit 
rate at 400M accesses, or 90% at 300M. If you're observing 80%, then at best 
there is a 10% boost. If Bélády's optimal is lower, then there is even less of 
a difference to boost by. It could be that SLRU captures frequency well enough 
that both
policies are equivalent.

The [MultiQueue 
paper|https://www.usenix.org/legacy/event/usenix01/full_papers/zhou/zhou.pdf] 
argues that 2nd level cache access patterns are frequency skewed. The 
LruBlockCache only retains if there were multiple accesses, not the counts, and 
tries to evict fairly across the buckets. Since TinyLFU captures a longer tail 
(freq. of items outside of the cache), there is a chance that it can make a 
better prediction. But we wouldn't know without an access trace to simulate 
with.

I suspect that the high hit rate means there isn't much cache pollution to 
lower the hit rate, so a good enough victim is chosen. At the tail most of the 
entries have a relatively similar frequency, too. It would be fun to find out, 
but you probably won't think it was worth the effort.

> Scan-Memory-First Optimization for Get Operations
> -
>
> Key: HBASE-17339
> URL: https://issues.apache.org/jira/browse/HBASE-17339
> Project: HBase
>  Issue Type: Improvement
>Reporter: Eshcar Hillel
>Assignee: Eshcar Hillel
> Attachments: HBASE-17339-V01.patch, HBASE-17339-V02.patch, 
> HBASE-17339-V03.patch, HBASE-17339-V03.patch, HBASE-17339-V04.patch, 
> HBASE-17339-V05.patch, HBASE-17339-V06.patch, read-latency-mixed-workload.jpg
>
>
> The current implementation of a get operation (to retrieve values for a 
> specific key) scans through all relevant stores of the region; for each store 
> both memory components (memstore segments) and disk components (hfiles) are 
> scanned in parallel.
> We suggest applying an optimization that speculatively scans memory-only 
> components first and, only if the result is incomplete, scans both memory and 
> disk.
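
A pseudocode-level sketch of the speculative flow described above; the interfaces and method names below are placeholders, not HBase APIs.

{code}
import java.util.List;

// Placeholder types; this only illustrates the control flow of the proposal.
public class MemoryFirstGetSketch {
  interface Cell {}
  interface Get {}
  interface Store {
    List<Cell> scanMemstoreOnly(Get get);
    List<Cell> scanMemstoreAndFiles(Get get);
    boolean isComplete(Get get, List<Cell> cells);  // e.g. all requested columns/versions found
  }

  static List<Cell> get(Store store, Get get) {
    // 1. Speculatively scan only the in-memory segments.
    List<Cell> cells = store.scanMemstoreOnly(get);
    if (store.isComplete(get, cells)) {
      return cells;                        // fast path: no disk component was touched
    }
    // 2. Result is incomplete: fall back to the regular memory + disk scan.
    return store.scanMemstoreAndFiles(get);
  }
}
{code}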



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16469) Several log refactoring/improvement suggestions

2017-03-27 Thread Nemo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nemo Chen updated HBASE-16469:
--
Attachment: (was: HBASE-16469.master.001.patch)

> Several log refactoring/improvement suggestions
> ---
>
> Key: HBASE-16469
> URL: https://issues.apache.org/jira/browse/HBASE-16469
> Project: HBase
>  Issue Type: Improvement
>  Components: Operability
>Affects Versions: 1.2.2
>Reporter: Nemo Chen
>  Labels: easyfix, easytest
> Fix For: 2.0.0, 1.4.0
>
>
> *method invocation replaced by variable*
> hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/backup/example/LongTermArchivingHFileCleaner.java
> line 57: {code}Path file = fStat.getPath();{code}
> line 74: {code}LOG.error("Failed to lookup status of:" + fStat.getPath() + ", 
> keeping it just incase.", e); {code}
> hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/CloseRegionHandler.java
> line 118: {code}String name = regionInfo.getRegionNameAsString();{code}
> line 142: {code}LOG.warn("Can't close region: was already closed during 
> close(): " +
> regionInfo.getRegionNameAsString()); {code}
> In the above two examples, the method invocation results are already assigned 
> to local variables before the logging code. The logging statements should use 
> those variables instead, for simplicity and readability.
> 
> *method invocation in return statement*
> hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
> line 5455:
> {code}
> public String toString() {
> return getRegionInfo().getRegionNameAsString();
>   }
> {code}
> line 1260:
> {code}
> LOG.debug("Region " + getRegionInfo().getRegionNameAsString()
>   + " is not mergeable because it is closing or closed");
> {code}
> line 1265:
> {code}
> LOG.debug("Region " + getRegionInfo().getRegionNameAsString()
>   + " is not mergeable because it has references");
> {code}
> line 1413:
> {code} 
> LOG.info("Running close preflush of " + 
> getRegionInfo().getRegionNameAsString());
> {code}
> In the above examples, "getRegionInfo().getRegionNameAsString()" is exactly 
> what the "toString" method of the same class returns. These calls should be 
> replaced with "this", for simplicity and readability.
> 
> *check the logged variable if it is null*
> hbase-1.2.2/hbase-it/src/test/java/org/apache/hadoop/hbase/HBaseClusterManager.java
> line 88: 
> {code}
> if ((sshUserName != null && sshUserName.length() > 0) ||
> (sshOptions != null && sshOptions.length() > 0)) {
>   LOG.info("Running with SSH user [" + sshUserName + "] and options [" + 
> sshOptions + "]");
> }
> {code}
> hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java
> line 980:
> {code}
> if ((regionState == null && latestState != null)
>   || (regionState != null && latestState == null)
>   || (regionState != null && latestState != null
> && latestState.getState() != regionState.getState())) {
> LOG.warn("Region state changed from " + regionState + " to "
>   + latestState + ", while acquiring lock");
>   }
> {code}
> In the above example, the logged variable could be null at run time. It is a 
> bad practice to include possibly-null variables in log messages without a check.
> 
> *variable in byte printed directly*
> hbase-1.2.2/hbase-server/src/test/java/org/apache/hadoop/hbase/util/MultiThreadedUpdater.java
> line 145: 
> {code}
> byte[] rowKey = dataGenerator.getDeterministicUniqueKey(rowKeyBase);
> {code}
> line 184:
> {code}
> LOG.error("Failed to update the row with key = [" + rowKey
>   + "], since we could not get the original row");
> {code}
> rowKey should be printed as Bytes.toString(rowKey).
>  
> *object toString contains the method invocation*
> hbase-1.2.2/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java
> {code}
> LOG.warn("#" + id + ", the task was rejected by the pool. This is 
> unexpected."+ " Server is "+ server.getServerName(),t);
> {code}
> server is an instance of class ServerName, we found ServerName.java:
> hbase-client/src/main/java/org/apache/hadoop/hbase/ServerName.java
> {code}
>   @Override
>   public String toString() {
> return getServerName();
>   }
> {code}
> the toString method returns getServerName(), so "server.getServerName()" 
> should be replaced with "server", for simplicity and readability
> Similar examples are in:
> hbase-1.2.2/hbase-client/src/main/java/org/apache/hadoop/hbase/client/PreemptiveFastFailInterceptor.java
> {code}
> LOG.info("Clearing out PFFE for server " + server.getServerName());
> return getServerName();
> {code}
> hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java
> line 705: 
> {code} LOG.debug(getN

[jira] [Updated] (HBASE-16469) Several log refactoring/improvement suggestions

2017-03-27 Thread Nemo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nemo Chen updated HBASE-16469:
--
Attachment: HBASE-16469.master.001.patch

> Several log refactoring/improvement suggestions
> ---
>
> Key: HBASE-16469
> URL: https://issues.apache.org/jira/browse/HBASE-16469
> Project: HBase
>  Issue Type: Improvement
>  Components: Operability
>Affects Versions: 1.2.5
>Reporter: Nemo Chen
>  Labels: easyfix, easytest
> Fix For: 1.2.5
>
> Attachments: HBASE-16469.master.001.patch
>
>
> *method invocation replaced by variable*
> hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/backup/example/LongTermArchivingHFileCleaner.java
> line 57: {code}Path file = fStat.getPath();{code}
> line 74: {code}LOG.error("Failed to lookup status of:" + fStat.getPath() + ", 
> keeping it just incase.", e); {code}
> hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/CloseRegionHandler.java
> line 118: {code}String name = regionInfo.getRegionNameAsString();{code}
> line 142: {code}LOG.warn("Can't close region: was already closed during 
> close(): " +
> regionInfo.getRegionNameAsString()); {code}
> In the above two examples, the method invocation results are already assigned 
> to local variables before the logging code. The logging statements should use 
> those variables instead, for simplicity and readability.
> 
> *method invocation in return statement*
> hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
> line 5455:
> {code}
> public String toString() {
> return getRegionInfo().getRegionNameAsString();
>   }
> {code}
> line 1260:
> {code}
> LOG.debug("Region " + getRegionInfo().getRegionNameAsString()
>   + " is not mergeable because it is closing or closed");
> {code}
> line 1265:
> {code}
> LOG.debug("Region " + getRegionInfo().getRegionNameAsString()
>   + " is not mergeable because it has references");
> {code}
> line 1413:
> {code} 
> LOG.info("Running close preflush of " + 
> getRegionInfo().getRegionNameAsString());
> {code}
> In the above examples, "getRegionInfo().getRegionNameAsString()" is exactly 
> what the "toString" method of the same class returns. These calls should be 
> replaced with "this", for simplicity and readability.
> 
> *check the logged variable if it is null*
> hbase-1.2.2/hbase-it/src/test/java/org/apache/hadoop/hbase/HBaseClusterManager.java
> line 88: 
> {code}
> if ((sshUserName != null && sshUserName.length() > 0) ||
> (sshOptions != null && sshOptions.length() > 0)) {
>   LOG.info("Running with SSH user [" + sshUserName + "] and options [" + 
> sshOptions + "]");
> }
> {code}
> hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java
> line 980:
> {code}
> if ((regionState == null && latestState != null)
>   || (regionState != null && latestState == null)
>   || (regionState != null && latestState != null
> && latestState.getState() != regionState.getState())) {
> LOG.warn("Region state changed from " + regionState + " to "
>   + latestState + ", while acquiring lock");
>   }
> {code}
> In the above example, the logged variable could be null at run time. It is a 
> bad practice to include possibly-null variables in log messages without a check.
> 
> *variable in byte printed directly*
> hbase-1.2.2/hbase-server/src/test/java/org/apache/hadoop/hbase/util/MultiThreadedUpdater.java
> line 145: 
> {code}
> byte[] rowKey = dataGenerator.getDeterministicUniqueKey(rowKeyBase);
> {code}
> line 184:
> {code}
> LOG.error("Failed to update the row with key = [" + rowKey
>   + "], since we could not get the original row");
> {code}
> rowKey should be printed as Bytes.toString(rowKey).
>  
> *object toString contains the method invocation*
> hbase-1.2.2/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java
> {code}
> LOG.warn("#" + id + ", the task was rejected by the pool. This is 
> unexpected."+ " Server is "+ server.getServerName(),t);
> {code}
> server is an instance of class ServerName, we found ServerName.java:
> hbase-client/src/main/java/org/apache/hadoop/hbase/ServerName.java
> {code}
>   @Override
>   public String toString() {
> return getServerName();
>   }
> {code}
> the toString method returns getServerName(), so "server.getServerName()" 
> should be replaced with "server", for simplicity and readability
> Similar examples are in:
> hbase-1.2.2/hbase-client/src/main/java/org/apache/hadoop/hbase/client/PreemptiveFastFailInterceptor.java
> {code}
> LOG.info("Clearing out PFFE for server " + server.getServerName());
> return getServerName();
> {code}
> hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java
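
As a concrete before/after for the first and fourth suggestions in the description above (a fragment only, reusing the identifiers quoted there; the "after" lines are one possible rewrite, not the attached patch):

{code}
// Before: the path is already in a local variable, but the log re-invokes the getter.
Path file = fStat.getPath();
LOG.error("Failed to lookup status of:" + fStat.getPath() + ", keeping it just incase.", e);
// After: reuse the variable.
LOG.error("Failed to lookup status of:" + file + ", keeping it just incase.", e);

// Before: a raw byte[] is concatenated into the message.
LOG.error("Failed to update the row with key = [" + rowKey
  + "], since we could not get the original row");
// After: render the key as a string first.
LOG.error("Failed to update the row with key = [" + Bytes.toString(rowKey)
  + "], since we could not get the original row");
{code}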

[jira] [Updated] (HBASE-16469) Several log refactoring/improvement suggestions

2017-03-27 Thread Nemo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nemo Chen updated HBASE-16469:
--
Fix Version/s: (was: 1.4.0)
   (was: 2.0.0)
   1.2.5
Affects Version/s: (was: 1.2.2)
   1.2.5
   Status: Open  (was: Patch Available)

> Several log refactoring/improvement suggestions
> ---
>
> Key: HBASE-16469
> URL: https://issues.apache.org/jira/browse/HBASE-16469
> Project: HBase
>  Issue Type: Improvement
>  Components: Operability
>Affects Versions: 1.2.5
>Reporter: Nemo Chen
>  Labels: easyfix, easytest
> Fix For: 1.2.5
>
> Attachments: HBASE-16469.master.001.patch
>
>
> *method invocation replaced by variable*
> hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/backup/example/LongTermArchivingHFileCleaner.java
> line 57: {code}Path file = fStat.getPath();{code}
> line 74: {code}LOG.error("Failed to lookup status of:" + fStat.getPath() + ", 
> keeping it just incase.", e); {code}
> hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/CloseRegionHandler.java
> line 118: {code}String name = regionInfo.getRegionNameAsString();{code}
> line 142: {code}LOG.warn("Can't close region: was already closed during 
> close(): " +
> regionInfo.getRegionNameAsString()); {code}
> In the above two examples, the method invocation results are already assigned 
> to local variables before the logging code. The logging statements should use 
> those variables instead, for simplicity and readability.
> 
> *method invocation in return statement*
> hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
> line 5455:
> {code}
> public String toString() {
> return getRegionInfo().getRegionNameAsString();
>   }
> {code}
> line 1260:
> {code}
> LOG.debug("Region " + getRegionInfo().getRegionNameAsString()
>   + " is not mergeable because it is closing or closed");
> {code}
> line 1265:
> {code}
> LOG.debug("Region " + getRegionInfo().getRegionNameAsString()
>   + " is not mergeable because it has references");
> {code}
> line 1413:
> {code} 
> LOG.info("Running close preflush of " + 
> getRegionInfo().getRegionNameAsString());
> {code}
> In the above examples, "getRegionInfo().getRegionNameAsString()" is exactly 
> what the "toString" method of the same class returns. These calls should be 
> replaced with "this", for simplicity and readability.
> 
> *check the logged variable if it is null*
> hbase-1.2.2/hbase-it/src/test/java/org/apache/hadoop/hbase/HBaseClusterManager.java
> line 88: 
> {code}
> if ((sshUserName != null && sshUserName.length() > 0) ||
> (sshOptions != null && sshOptions.length() > 0)) {
>   LOG.info("Running with SSH user [" + sshUserName + "] and options [" + 
> sshOptions + "]");
> }
> {code}
> hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java
> line 980:
> {code}
> if ((regionState == null && latestState != null)
>   || (regionState != null && latestState == null)
>   || (regionState != null && latestState != null
> && latestState.getState() != regionState.getState())) {
> LOG.warn("Region state changed from " + regionState + " to "
>   + latestState + ", while acquiring lock");
>   }
> {code}
> In the above example, the logged variable could be null at run time. It is a 
> bad practice to include possibly-null variables in log messages without a check.
> 
> *variable in byte printed directly*
> hbase-1.2.2/hbase-server/src/test/java/org/apache/hadoop/hbase/util/MultiThreadedUpdater.java
> line 145: 
> {code}
> byte[] rowKey = dataGenerator.getDeterministicUniqueKey(rowKeyBase);
> {code}
> line 184:
> {code}
> LOG.error("Failed to update the row with key = [" + rowKey
>   + "], since we could not get the original row");
> {code}
> rowKey should be printed as Bytes.toString(rowKey).
>  
> *object toString contains the method invocation*
> hbase-1.2.2/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java
> {code}
> LOG.warn("#" + id + ", the task was rejected by the pool. This is 
> unexpected."+ " Server is "+ server.getServerName(),t);
> {code}
> server is an instance of class ServerName, we found ServerName.java:
> hbase-client/src/main/java/org/apache/hadoop/hbase/ServerName.java
> {code}
>   @Override
>   public String toString() {
> return getServerName();
>   }
> {code}
> the toString method returns getServerName(), so "server.getServerName()" 
> should be replaced with "server", for simplicity and readability
> Similar examples are in:
> hbase-1.2.2/hbase-client/src/main/java/org/apache/hadoop/hbase/client/PreemptiveFastFailInterceptor.java
> {code}
> LO

[jira] [Updated] (HBASE-16469) Several log refactoring/improvement suggestions

2017-03-27 Thread Nemo Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nemo Chen updated HBASE-16469:
--
Status: Patch Available  (was: Open)

> Several log refactoring/improvement suggestions
> ---
>
> Key: HBASE-16469
> URL: https://issues.apache.org/jira/browse/HBASE-16469
> Project: HBase
>  Issue Type: Improvement
>  Components: Operability
>Affects Versions: 1.2.5
>Reporter: Nemo Chen
>  Labels: easyfix, easytest
> Fix For: 1.2.5
>
> Attachments: HBASE-16469.master.001.patch
>
>
> *method invocation replaced by variable*
> hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/backup/example/LongTermArchivingHFileCleaner.java
> line 57: {code}Path file = fStat.getPath();{code}
> line 74: {code}LOG.error("Failed to lookup status of:" + fStat.getPath() + ", 
> keeping it just incase.", e); {code}
> hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/CloseRegionHandler.java
> line 118: {code}String name = regionInfo.getRegionNameAsString();{code}
> line 142: {code}LOG.warn("Can't close region: was already closed during 
> close(): " +
> regionInfo.getRegionNameAsString()); {code}
> In the above two examples, the method invocation results are already assigned 
> to local variables before the logging code. The logging statements should use 
> those variables instead, for simplicity and readability.
> 
> *method invocation in return statement*
> hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
> line 5455:
> {code}
> public String toString() {
> return getRegionInfo().getRegionNameAsString();
>   }
> {code}
> line 1260:
> {code}
> LOG.debug("Region " + getRegionInfo().getRegionNameAsString()
>   + " is not mergeable because it is closing or closed");
> {code}
> line 1265:
> {code}
> LOG.debug("Region " + getRegionInfo().getRegionNameAsString()
>   + " is not mergeable because it has references");
> {code}
> line 1413:
> {code} 
> LOG.info("Running close preflush of " + 
> getRegionInfo().getRegionNameAsString());
> {code}
> In the above examples, "getRegionInfo().getRegionNameAsString()" is exactly 
> what the "toString" method of the same class returns. These calls should be 
> replaced with "this", for simplicity and readability.
> 
> *check the logged variable if it is null*
> hbase-1.2.2/hbase-it/src/test/java/org/apache/hadoop/hbase/HBaseClusterManager.java
> line 88: 
> {code}
> if ((sshUserName != null && sshUserName.length() > 0) ||
> (sshOptions != null && sshOptions.length() > 0)) {
>   LOG.info("Running with SSH user [" + sshUserName + "] and options [" + 
> sshOptions + "]");
> }
> {code}
> hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java
> line 980:
> {code}
> if ((regionState == null && latestState != null)
>   || (regionState != null && latestState == null)
>   || (regionState != null && latestState != null
> && latestState.getState() != regionState.getState())) {
> LOG.warn("Region state changed from " + regionState + " to "
>   + latestState + ", while acquiring lock");
>   }
> {code}
> In the above example, the logged variable could be null at run time. It is a 
> bad practice to include possibly-null variables in log messages without a check.
> 
> *variable in byte printed directly*
> hbase-1.2.2/hbase-server/src/test/java/org/apache/hadoop/hbase/util/MultiThreadedUpdater.java
> line 145: 
> {code}
> byte[] rowKey = dataGenerator.getDeterministicUniqueKey(rowKeyBase);
> {code}
> line 184:
> {code}
> LOG.error("Failed to update the row with key = [" + rowKey
>   + "], since we could not get the original row");
> {code}
> rowKey should be printed as Bytes.toString(rowKey).
>  
> *object toString contains the method invocation*
> hbase-1.2.2/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java
> {code}
> LOG.warn("#" + id + ", the task was rejected by the pool. This is 
> unexpected."+ " Server is "+ server.getServerName(),t);
> {code}
> server is an instance of class ServerName, we found ServerName.java:
> hbase-client/src/main/java/org/apache/hadoop/hbase/ServerName.java
> {code}
>   @Override
>   public String toString() {
> return getServerName();
>   }
> {code}
> the toString method returns getServerName(), so "server.getServerName()" 
> should be replaced with "server", for simplicity and readability
> Similar examples are in:
> hbase-1.2.2/hbase-client/src/main/java/org/apache/hadoop/hbase/client/PreemptiveFastFailInterceptor.java
> {code}
> LOG.info("Clearing out PFFE for server " + server.getServerName());
> return getServerName();
> {code}
> hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java
> li

[jira] [Commented] (HBASE-16469) Several log refactoring/improvement suggestions

2017-03-27 Thread Nemo Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943626#comment-15943626
 ] 

Nemo Chen commented on HBASE-16469:
---

[~busbey] No worries and thanks for pointing it out!
I have updated the revised patch against the current master branch. Please let 
me know if there are further issues.

> Several log refactoring/improvement suggestions
> ---
>
> Key: HBASE-16469
> URL: https://issues.apache.org/jira/browse/HBASE-16469
> Project: HBase
>  Issue Type: Improvement
>  Components: Operability
>Affects Versions: 1.2.5
>Reporter: Nemo Chen
>  Labels: easyfix, easytest
> Fix For: 1.2.5
>
> Attachments: HBASE-16469.master.001.patch
>
>
> *method invocation replaced by variable*
> hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/backup/example/LongTermArchivingHFileCleaner.java
> line 57: {code}Path file = fStat.getPath();{code}
> line 74: {code}LOG.error("Failed to lookup status of:" + fStat.getPath() + ", 
> keeping it just incase.", e); {code}
> hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/CloseRegionHandler.java
> line 118: {code}String name = regionInfo.getRegionNameAsString();{code}
> line 142: {code}LOG.warn("Can't close region: was already closed during 
> close(): " +
> regionInfo.getRegionNameAsString()); {code}
> In the above two examples, the method invocation results are already assigned 
> to local variables before the logging code. The logging statements should use 
> those variables instead, for simplicity and readability.
> 
> *method invocation in return statement*
> hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
> line 5455:
> {code}
> public String toString() {
> return getRegionInfo().getRegionNameAsString();
>   }
> {code}
> line 1260:
> {code}
> LOG.debug("Region " + getRegionInfo().getRegionNameAsString()
>   + " is not mergeable because it is closing or closed");
> {code}
> line 1265:
> {code}
> LOG.debug("Region " + getRegionInfo().getRegionNameAsString()
>   + " is not mergeable because it has references");
> {code}
> line 1413:
> {code} 
> LOG.info("Running close preflush of " + 
> getRegionInfo().getRegionNameAsString());
> {code}
> In the above examples, "getRegionInfo().getRegionNameAsString()" is exactly 
> what the "toString" method of the same class returns. These calls should be 
> replaced with "this", for simplicity and readability.
> 
> *check the logged variable if it is null*
> hbase-1.2.2/hbase-it/src/test/java/org/apache/hadoop/hbase/HBaseClusterManager.java
> line 88: 
> {code}
> if ((sshUserName != null && sshUserName.length() > 0) ||
> (sshOptions != null && sshOptions.length() > 0)) {
>   LOG.info("Running with SSH user [" + sshUserName + "] and options [" + 
> sshOptions + "]");
> }
> {code}
> hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java
> line 980:
> {code}
> if ((regionState == null && latestState != null)
>   || (regionState != null && latestState == null)
>   || (regionState != null && latestState != null
> && latestState.getState() != regionState.getState())) {
> LOG.warn("Region state changed from " + regionState + " to "
>   + latestState + ", while acquiring lock");
>   }
> {code}
> In the above example, the logged variable could be null at run time. It is a 
> bad practice to include possibly-null variables in log messages without a check.
> 
> *variable in byte printed directly*
> hbase-1.2.2/hbase-server/src/test/java/org/apache/hadoop/hbase/util/MultiThreadedUpdater.java
> line 145: 
> {code}
> byte[] rowKey = dataGenerator.getDeterministicUniqueKey(rowKeyBase);
> {code}
> line 184:
> {code}
> LOG.error("Failed to update the row with key = [" + rowKey
>   + "], since we could not get the original row");
> {code}
> rowKey should be printed as Bytes.toString(rowKey).
>  
> *object toString contains the method invocation*
> hbase-1.2.2/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java
> {code}
> LOG.warn("#" + id + ", the task was rejected by the pool. This is 
> unexpected."+ " Server is "+ server.getServerName(),t);
> {code}
> server is an instance of class ServerName, we found ServerName.java:
> hbase-client/src/main/java/org/apache/hadoop/hbase/ServerName.java
> {code}
>   @Override
>   public String toString() {
> return getServerName();
>   }
> {code}
> the toString method returns getServerName(), so "server.getServerName()" 
> should be replaced with "server", for simplicity and readability
> Similar examples are in:
> hbase-1.2.2/hbase-client/src/main/java/org/apache/hadoop/hbase/client/PreemptiveFastFailInterceptor.java
> {code}
> LOG.info

[jira] [Updated] (HBASE-16469) Several log refactoring/improvement suggestions

2017-03-27 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-16469:

Fix Version/s: (was: 1.2.5)
   1.4.0
   2.0.0

> Several log refactoring/improvement suggestions
> ---
>
> Key: HBASE-16469
> URL: https://issues.apache.org/jira/browse/HBASE-16469
> Project: HBase
>  Issue Type: Improvement
>  Components: Operability
>Affects Versions: 1.2.5
>Reporter: Nemo Chen
>  Labels: easyfix, easytest
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-16469.master.001.patch
>
>
> *method invocation replaced by variable*
> hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/backup/example/LongTermArchivingHFileCleaner.java
> line 57: {code}Path file = fStat.getPath();{code}
> line 74: {code}LOG.error("Failed to lookup status of:" + fStat.getPath() + ", 
> keeping it just incase.", e); {code}
> hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/handler/CloseRegionHandler.java
> line 118: {code}String name = regionInfo.getRegionNameAsString();{code}
> line 142: {code}LOG.warn("Can't close region: was already closed during 
> close(): " +
> regionInfo.getRegionNameAsString()); {code}
> In the above two examples, the method invocation results are already assigned 
> to local variables before the logging code. The logging statements should use 
> those variables instead, for simplicity and readability.
> 
> *method invocation in return statement*
> hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
> line 5455:
> {code}
> public String toString() {
> return getRegionInfo().getRegionNameAsString();
>   }
> {code}
> line 1260:
> {code}
> LOG.debug("Region " + getRegionInfo().getRegionNameAsString()
>   + " is not mergeable because it is closing or closed");
> {code}
> line 1265:
> {code}
> LOG.debug("Region " + getRegionInfo().getRegionNameAsString()
>   + " is not mergeable because it has references");
> {code}
> line 1413:
> {code} 
> LOG.info("Running close preflush of " + 
> getRegionInfo().getRegionNameAsString());
> {code}
> In the above examples, "getRegionInfo().getRegionNameAsString()" is exactly 
> what the "toString" method of the same class returns. These calls should be 
> replaced with "this", for simplicity and readability.
> 
> *check the logged variable if it is null*
> hbase-1.2.2/hbase-it/src/test/java/org/apache/hadoop/hbase/HBaseClusterManager.java
> line 88: 
> {code}
> if ((sshUserName != null && sshUserName.length() > 0) ||
> (sshOptions != null && sshOptions.length() > 0)) {
>   LOG.info("Running with SSH user [" + sshUserName + "] and options [" + 
> sshOptions + "]");
> }
> {code}
> hbase-1.2.2/hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java
> line 980:
> {code}
> if ((regionState == null && latestState != null)
>   || (regionState != null && latestState == null)
>   || (regionState != null && latestState != null
> && latestState.getState() != regionState.getState())) {
> LOG.warn("Region state changed from " + regionState + " to "
>   + latestState + ", while acquiring lock");
>   }
> {code}
> In the above example, the logged variable could be null at run time. It is a 
> bad practice to include possibly-null variables in log messages without a check.
> 
> *variable in byte printed directly*
> hbase-1.2.2/hbase-server/src/test/java/org/apache/hadoop/hbase/util/MultiThreadedUpdater.java
> line 145: 
> {code}
> byte[] rowKey = dataGenerator.getDeterministicUniqueKey(rowKeyBase);
> {code}
> line 184:
> {code}
> LOG.error("Failed to update the row with key = [" + rowKey
>   + "], since we could not get the original row");
> {code}
> rowKey should be printed as Bytes.toString(rowKey).
>  
> *object toString contains the method invocation*
> hbase-1.2.2/hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java
> {code}
> LOG.warn("#" + id + ", the task was rejected by the pool. This is 
> unexpected."+ " Server is "+ server.getServerName(),t);
> {code}
> server is an instance of class ServerName, we found ServerName.java:
> hbase-client/src/main/java/org/apache/hadoop/hbase/ServerName.java
> {code}
>   @Override
>   public String toString() {
> return getServerName();
>   }
> {code}
> the toString method returns getServerName(), so "server.getServerName()" 
> should be replaced with "server", for simplicity and readability
> Similar examples are in:
> hbase-1.2.2/hbase-client/src/main/java/org/apache/hadoop/hbase/client/PreemptiveFastFailInterceptor.java
> {code}
> LOG.info("Clearing out PFFE for server " + server.getServerName());
> return getServerName();
> {code}
> hbase-1.2.2/hbase-server/src/m

[jira] [Commented] (HBASE-17287) Master becomes a zombie if filesystem object closes

2017-03-27 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943637#comment-15943637
 ] 

Ted Yu commented on HBASE-17287:


Performed the above procedure on a 1.1 cluster patched with 17287.branch-1.v3.txt.
Once the meta server was killed, I observed the following in the master log:
{code}
2017-03-27 16:52:01,080 FATAL [MASTER_SERVER_OPERATIONS-cn013:16000-1] 
master.HMaster: Master server abort: loaded coprocessors are: 
[org.apache.hadoop.hbase.backup.master.BackupController]
2017-03-27 16:52:01,080 FATAL [MASTER_SERVER_OPERATIONS-cn013:16000-1] 
master.HMaster: Shutting down HBase cluster: file system not available
java.io.IOException: File system is in safemode, it can't be written now
at 
org.apache.hadoop.hbase.util.FSUtils.checkDfsSafeMode(FSUtils.java:561)
at 
org.apache.hadoop.hbase.master.MasterFileSystem.checkFileSystem(MasterFileSystem.java:202)
at 
org.apache.hadoop.hbase.master.MasterFileSystem.getLogDirs(MasterFileSystem.java:372)
at 
org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:425)
at 
org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:402)
at 
org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:319)
at 
org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:213)
at 
org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{code}

> Master becomes a zombie if filesystem object closes
> ---
>
> Key: HBASE-17287
> URL: https://issues.apache.org/jira/browse/HBASE-17287
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Reporter: Clay B.
>Assignee: Ted Yu
> Fix For: 1.4.0, 2.0
>
> Attachments: 17287.branch-1.v3.txt, 17287.master.v2.txt, 
> 17287.master.v3.txt, 17287.v2.txt
>
>
> We have seen an issue whereby if the HDFS is unstable and the HBase master's 
> HDFS client is unable to stabilize before 
> {{dfs.client.failover.max.attempts}} then the master's filesystem object 
> closes. This seems to result in an HBase master which will continue to run 
> (process and znode exists) but no meaningful work can be done (e.g. assigning 
> meta). What we saw in our HBase master logs was:
> {code}
> 2016-12-01 19:19:08,192 ERROR org.apache.hadoop.hbase.master.handler.MetaServerShutdownHandler: Caught M_META_SERVER_SHUTDOWN, count=1
> java.io.IOException: failed log splitting for cluster-r5n12.bloomberg.com,60200,1480632863218, will retry
>   at org.apache.hadoop.hbase.master.handler.MetaServerShutdownHandler.process(MetaServerShutdownHandler.java:84)
>   at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: Filesystem closed
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HBASE-17831) Support small scan in thrift2

2017-03-27 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu reassigned HBASE-17831:
--

Assignee: Guangxu Cheng

> Support small scan in thrift2
> -
>
> Key: HBASE-17831
> URL: https://issues.apache.org/jira/browse/HBASE-17831
> Project: HBase
>  Issue Type: Improvement
>  Components: Thrift
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
> Attachments: HBASE-17831-branch-1.patch, HBASE-17831-master.patch, 
> HBASE-17831-master-v1.patch
>
>
> Support small scan in thrift2
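
For context, this is what a "small" scan looks like on the Java client side via {{Scan#setSmall}} in the 1.x API; the thrift2 struct/field names added by this patch are not shown here, and the table and row names below are placeholders.

{code}
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

// Context-only sketch; table name and row keys are placeholders.
public class SmallScanSketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection();
         Table table = conn.getTable(TableName.valueOf("test_table"))) {
      Scan scan = new Scan(Bytes.toBytes("row-000"), Bytes.toBytes("row-100"));
      scan.setSmall(true);   // hint that the scan covers a small, contiguous range
      try (ResultScanner rs = table.getScanner(scan)) {
        for (Result r : rs) {
          System.out.println(r);
        }
      }
    }
  }
}
{code}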



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17831) Support small scan in thrift2

2017-03-27 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943673#comment-15943673
 ] 

Ted Yu commented on HBASE-17831:


Please submit branch-1 patch for QA run.

> Support small scan in thrift2
> -
>
> Key: HBASE-17831
> URL: https://issues.apache.org/jira/browse/HBASE-17831
> Project: HBase
>  Issue Type: Improvement
>  Components: Thrift
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
> Attachments: HBASE-17831-branch-1.patch, HBASE-17831-master.patch, 
> HBASE-17831-master-v1.patch
>
>
> Support small scan in thrift2



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-13395) Remove HTableInterface

2017-03-27 Thread Jan Hentschel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Hentschel updated HBASE-13395:
--
Release Note: HTableInterface was deprecated in 0.21.0 and is removed in 
2.0.0. Use org.apache.hadoop.hbase.client.Table instead.

> Remove HTableInterface
> --
>
> Key: HBASE-13395
> URL: https://issues.apache.org/jira/browse/HBASE-13395
> Project: HBase
>  Issue Type: Sub-task
>  Components: API
>Affects Versions: 2.0.0
>Reporter: Mikhail Antonov
>Assignee: Jan Hentschel
> Fix For: 2.0.0
>
> Attachments: HBASE-13395.master.001.patch, 
> HBASE-13395.master.002.patch, HBASE-13395.master.003.patch, 
> HBASE-13395.master.004.patch
>
>
> This class is marked as deprecated, so it can probably be removed. If any 
> methods from this specific class are still in active use, we need to decide 
> what to do on the callers' side. It should be possible to replace its usage 
> with just the Table interface.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16469) Several log refactoring/improvement suggestions

2017-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943726#comment-15943726
 ] 

Hadoop QA commented on HBASE-16469:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 26s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 28s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
41s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 21s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
9s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
40s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
39s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 26s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 28s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 28s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
7s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 1m 24s 
{color} | {color:red} The patch causes 16 errors with Hadoop v2.6.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 2m 31s 
{color} | {color:red} The patch causes 16 errors with Hadoop v2.6.2. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 3m 41s 
{color} | {color:red} The patch causes 16 errors with Hadoop v2.6.3. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 4m 51s 
{color} | {color:red} The patch causes 16 errors with Hadoop v2.6.4. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 6m 2s 
{color} | {color:red} The patch causes 16 errors with Hadoop v2.6.5. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 7m 10s 
{color} | {color:red} The patch causes 16 errors with Hadoop v2.7.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 8m 17s 
{color} | {color:red} The patch causes 16 errors with Hadoop v2.7.2. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 9m 25s 
{color} | {color:red} The patch causes 16 errors with Hadoop v2.7.3. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 10m 32s 
{color} | {color:red} The patch causes 16 errors with Hadoop v3.0.0-alpha2. 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 16s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 14s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 25s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 16s 
{color} | {color:green} hbase-it in the patch pas

[jira] [Commented] (HBASE-17831) Support small scan in thrift2

2017-03-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943741#comment-15943741
 ] 

Hudson commented on HBASE-17831:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #2750 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/2750/])
HBASE-17831 Support small scan in thrift2 (Guangxu Cheng) (tedyu: rev 
85fda44179c0afba74f52944ae9bb5a38266678c)
* (edit) 
hbase-thrift/src/main/resources/org/apache/hadoop/hbase/thrift2/hbase.thrift
* (edit) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/ThriftUtilities.java
* (add) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TReadType.java
* (edit) 
hbase-thrift/src/test/java/org/apache/hadoop/hbase/thrift2/TestThriftHBaseServiceHandler.java
* (edit) 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/generated/TScan.java
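
Judging from the changed files, a client-side sketch might look like the following; the readType field on TScan and the TReadType.PREAD value are assumptions inferred from the file list above, not verified against the committed patch:

{code}
import org.apache.hadoop.hbase.thrift2.generated.TReadType;
import org.apache.hadoop.hbase.thrift2.generated.TScan;
import org.apache.hadoop.hbase.util.Bytes;

public class SmallScanSketch {
  // Build a thrift2 scan that asks for pread ("small scan") semantics.
  public static TScan buildSmallScan() {
    TScan scan = new TScan();
    scan.setStartRow(Bytes.toBytes("row-000"));
    scan.setStopRow(Bytes.toBytes("row-010"));
    scan.setCaching(100);
    scan.setReadType(TReadType.PREAD);  // assumed field/enum added by this patch
    return scan;  // then pass it to THBaseService.Client#openScanner as usual
  }
}
{code}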


> Support small scan in thrift2
> -
>
> Key: HBASE-17831
> URL: https://issues.apache.org/jira/browse/HBASE-17831
> Project: HBase
>  Issue Type: Improvement
>  Components: Thrift
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
> Attachments: HBASE-17831-branch-1.patch, HBASE-17831-master.patch, 
> HBASE-17831-master-v1.patch
>
>
> Support small scan in thrift2



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17835) Spelling mistakes in the Java source

2017-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943751#comment-15943751
 ] 

Hadoop QA commented on HBASE-17835:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 27s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
16s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
48s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
50s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
5s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
32m 22s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 31s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 104m 21s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
49s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 158m 18s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | org.apache.hadoop.hbase.client.TestSplitOrMergeStatus 
|
|   | org.apache.hadoop.hbase.client.TestHCM |
|   | org.apache.hadoop.hbase.filter.TestFuzzyRowFilterEndToEnd |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860610/HBASE-17835-001.patch 
|
| JIRA Issue | HBASE-17835 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux c34858b7be81 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / c77e213 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6232/artifact/patchprocess/patch-unit-hbase-server.txt
 |
| unit test logs |  
https://buil

[jira] [Commented] (HBASE-14417) Incremental backup and bulk loading

2017-03-27 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943762#comment-15943762
 ] 

Ted Yu commented on HBASE-14417:


[~vrodionov]:
Do you have any other comments?

> Incremental backup and bulk loading
> ---
>
> Key: HBASE-14417
> URL: https://issues.apache.org/jira/browse/HBASE-14417
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vladimir Rodionov
>Assignee: Ted Yu
>Priority: Blocker
>  Labels: backup
> Fix For: 2.0
>
> Attachments: 14417-tbl-ext.v10.txt, 14417-tbl-ext.v11.txt, 
> 14417-tbl-ext.v14.txt, 14417-tbl-ext.v18.txt, 14417-tbl-ext.v19.txt, 
> 14417-tbl-ext.v20.txt, 14417-tbl-ext.v21.txt, 14417-tbl-ext.v22.txt, 
> 14417-tbl-ext.v23.txt, 14417-tbl-ext.v9.txt, 14417.v11.txt, 14417.v13.txt, 
> 14417.v1.txt, 14417.v21.txt, 14417.v23.txt, 14417.v24.txt, 14417.v25.txt, 
> 14417.v2.txt, 14417.v6.txt
>
>
> Currently, incremental backup is based on WAL files. Bulk data loading 
> bypasses WALs for obvious reasons, breaking incremental backups. The only way 
> to continue backups after bulk loading is to create a new full backup of the 
> table. This may not be feasible for customers who do bulk loading regularly 
> (say, every day).
> Here is the review board (out of date):
> https://reviews.apache.org/r/54258/
> In order not to miss the hfiles that are loaded into region directories when 
> the postBulkLoadHFile() hook is not called (e.g. the bulk load is 
> interrupted), we record hfile names through the preCommitStoreFile() hook.
> At incremental backup time, we check for the presence of such hfiles. If they 
> are present, they become part of the incremental backup image.
> Here is the current review board:
> https://reviews.apache.org/r/57790/
> Google doc for design:
> https://docs.google.com/document/d/1ACCLsecHDvzVSasORgqqRNrloGx4mNYIbvAU7lq5lJE
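
As a rough illustration of the recording side, here is a sketch of a RegionObserver using the preCommitStoreFile() hook mentioned above (hook signature as in current branches; the real backup implementation persists the names to the backup system table rather than logging them, so treat this purely as an outline):

{code}
import java.io.IOException;
import java.util.List;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.util.Pair;

// Sketch only: remember the hfiles a bulk load is about to commit, so an
// incremental backup can pick them up even if postBulkLoadHFile() never runs.
public class BulkLoadHFileRecorder extends BaseRegionObserver {
  private static final Log LOG = LogFactory.getLog(BulkLoadHFileRecorder.class);

  @Override
  public void preCommitStoreFile(ObserverContext<RegionCoprocessorEnvironment> ctx,
      byte[] family, List<Pair<Path, Path>> pairs) throws IOException {
    for (Pair<Path, Path> p : pairs) {
      // The second element is (assumed to be) the destination path inside the
      // region directory; a real implementation would persist it durably.
      LOG.info("Bulk loaded hfile to track for incremental backup: " + p.getSecond());
    }
  }
}
{code}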



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16780) Since move to protobuf3.1, Cells are limited to 64MB where previous they had no limit

2017-03-27 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943766#comment-15943766
 ] 

Enis Soztutar commented on HBASE-16780:
---

+1. 

> Since move to protobuf3.1, Cells are limited to 64MB where previous they had 
> no limit
> -
>
> Key: HBASE-16780
> URL: https://issues.apache.org/jira/browse/HBASE-16780
> Project: HBase
>  Issue Type: Bug
>  Components: Protobufs
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Attachments: HBASE-16780.master.001.patch, 
> HBASE-16780.master.002.patch
>
>
> Change in protobuf behavior noticed by [~mbertozzi]. His test 
> TestStressWALProcedureStore#testEntrySizeLimit keeps increasing the size we 
> write, and he found that we are now bound at 64MB. Digging in, there is a 
> check in place that was not there before. Filed 
> https://github.com/grpc/grpc-java/issues/2324, but making an issue here in 
> the meantime in case we have to note a change in behavior in hbase-2.0.0.
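
For background, a sketch of the underlying protobuf 3.x knob rather than the attached fix: the default 64MB limit lives in CodedInputStream and can be raised per stream before parsing. In HBase 2.0 the class sits under the shaded protobuf package, and the 256MB value below is only an example.

{code}
import java.io.InputStream;
import com.google.protobuf.CodedInputStream;

public final class PbSizeLimitSketch {
  // Build a CodedInputStream whose message size limit is raised above the
  // protobuf 3.x default of 64MB (example value only).
  public static CodedInputStream withLargerLimit(InputStream in) {
    CodedInputStream cis = CodedInputStream.newInstance(in);
    cis.setSizeLimit(256 * 1024 * 1024);
    return cis;
  }
}
{code}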



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17287) Master becomes a zombie if filesystem object closes

2017-03-27 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943768#comment-15943768
 ] 

Enis Soztutar commented on HBASE-17287:
---

bq. Once the meta server was killed, I observed the following in master log
Sounds good. Is there an easy way to unit test this? Start a mini cluster + 
HDFS, use the HDFS admin API to put the NN in safe mode, and wait until the 
master aborts, maybe? 
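
Something like the following rough sketch of that idea (not a committed test; it may not reproduce the exact "Filesystem closed" condition, and assertions/timeouts are simplified):

{code}
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.HdfsConstants;

public class TestMasterAbortsWhenFsUnusable {
  public void testMasterAbortsWhenFsUnusable() throws Exception {
    HBaseTestingUtility util = new HBaseTestingUtility();
    util.startMiniCluster(1);
    try {
      DistributedFileSystem dfs = util.getDFSCluster().getFileSystem();
      // Make HDFS unusable from the master's point of view.
      dfs.setSafeMode(HdfsConstants.SafeModeAction.SAFEMODE_ENTER);
      // The master should abort instead of hanging around as a zombie.
      util.waitFor(120000, () -> util.getHBaseCluster().getMaster().isAborted());
      dfs.setSafeMode(HdfsConstants.SafeModeAction.SAFEMODE_LEAVE);
    } finally {
      util.shutdownMiniCluster();
    }
  }
}
{code}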

> Master becomes a zombie if filesystem object closes
> ---
>
> Key: HBASE-17287
> URL: https://issues.apache.org/jira/browse/HBASE-17287
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Reporter: Clay B.
>Assignee: Ted Yu
> Fix For: 1.4.0, 2.0
>
> Attachments: 17287.branch-1.v3.txt, 17287.master.v2.txt, 
> 17287.master.v3.txt, 17287.v2.txt
>
>
> We have seen an issue whereby, if HDFS is unstable and the HBase master's 
> HDFS client is unable to stabilize before 
> {{dfs.client.failover.max.attempts}}, the master's filesystem object closes. 
> This seems to result in an HBase master which continues to run (the process 
> and znode exist) but can do no meaningful work (e.g. assigning meta). What we 
> saw in our HBase master logs was:
> {code}
> 2016-12-01 19:19:08,192 ERROR org.apache.hadoop.hbase.master.handler.MetaServerShutdownHandler: Caught M_META_SERVER_SHUTDOWN, count=1
> java.io.IOException: failed log splitting for cluster-r5n12.bloomberg.com,60200,1480632863218, will retry
> at org.apache.hadoop.hbase.master.handler.MetaServerShutdownHandler.process(MetaServerShutdownHandler.java:84)
> at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:129)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: Filesystem closed
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16775) Flakey test with TestExportSnapshot#testExportRetry and TestMobExportSnapshot#testExportRetry

2017-03-27 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-16775:
-
Attachment: HBASE-16775.master.004.patch

> Flakey test with TestExportSnapshot#testExportRetry and 
> TestMobExportSnapshot#testExportRetry 
> --
>
> Key: HBASE-16775
> URL: https://issues.apache.org/jira/browse/HBASE-16775
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Attachments: disable.patch, HBASE-16775.master.001.patch, 
> HBASE-16775.master.002.patch, HBASE-16775.master.003.patch, 
> HBASE-16775.master.004.patch
>
>
> The root cause is that conf.setInt("mapreduce.map.maxattempts", 10) is not 
> picked up by the mapper job, so the number of retries is actually 0. Debugging 
> to see why this is the case.
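
For context, a sketch of the general pitfall this description points at (whether it is the actual root cause here is exactly what is being debugged): properties set on a Configuration after the Job has copied it are not seen by the job.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class MaxAttemptsPitfall {
  public static Job newJob() throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "export-snapshot");
    // Job.getInstance() copies the Configuration, so this later change
    // is NOT seen by the submitted job:
    conf.setInt("mapreduce.map.maxattempts", 10);
    // This one is, because it targets the job's own configuration:
    job.getConfiguration().setInt("mapreduce.map.maxattempts", 10);
    return job;
  }
}
{code}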



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17287) Master becomes a zombie if filesystem object closes

2017-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943787#comment-15943787
 ] 

Hadoop QA commented on HBASE-17287:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 4s 
{color} | {color:blue} The patch file was not named according to hbase's naming 
conventions. Please see 
https://yetus.apache.org/documentation/0.3.0/precommit-patchnames for 
instructions. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 4s 
{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
56s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s 
{color} | {color:green} branch-1 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
57s {color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} branch-1 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 57s 
{color} | {color:red} hbase-server in branch-1 has 2 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} branch-1 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} branch-1 passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
4s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
15m 11s {color} | {color:green} The patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s 
{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 10s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
27s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 118m 51s {color} 
| {color:black} {c

[jira] [Commented] (HBASE-17771) [C++] Classes required for implementation of BatchCallerBuilder

2017-03-27 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943794#comment-15943794
 ] 

Enis Soztutar commented on HBASE-17771:
---

bq. Added mutexes in case of writes. For reads, we are not using mutexes as of 
now as all the methods are called by const objects. Further we are returning a 
const reference so no changes will happen this way.

According to 
https://stackoverflow.com/questions/10568969/c-stl-vector-iterator-vs-indexes-access-and-thread-safety
 and 
https://twiki.cern.ch/twiki/bin/view/CMSPublic/FWMultithreadedThreadSafeDataStructures
 this is not true. 
Anyway, there are two approaches we can take: 
 - Use a mutex at the Caller level. We can address performance if it becomes a 
problem. 
 - Use concurrent maps from Intel's TBB library (folly's concurrent maps are 
extremely limiting). 

You can change this to VLOG(3) or something: 
{code}
LOG(INFO) << "GetResults:" << multi_resp->ShortDebugString();
{code}
Otherwise the rest looks pretty good. 

> [C++] Classes required for implementation of BatchCallerBuilder
> ---
>
> Key: HBASE-17771
> URL: https://issues.apache.org/jira/browse/HBASE-17771
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sudeep Sunthankar
>Assignee: Sudeep Sunthankar
> Fix For: HBASE-14850
>
> Attachments: HBASE-17771.HBASE-14850.v1.patch, 
> HBASE-17771.HBASE-14850.v2.patch, HBASE-17771.HBASE-14850.v3.patch, 
> HBASE-17771.HBASE-14850.v4.patch
>
>
> Separating the dependencies of BatchCallerBuilder.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17524) HBase 1.3.1 release

2017-03-27 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943803#comment-15943803
 ] 

Mikhail Antonov commented on HBASE-17524:
-

Thanks [~carp84]! Yes, most of the jiras marked for the release are closed, and 
the rest could probably be rolled over to 1.3.2 easily. So I'm going to take a 
closer look at the current status of 1.3.1 and get the first RC out this week 
or next.

In terms of the plan, I expect this to be a much smaller effort than 1.3.0 (in 
terms of the number of changes that went in). 

> HBase 1.3.1 release
> ---
>
> Key: HBASE-17524
> URL: https://issues.apache.org/jira/browse/HBASE-17524
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Mikhail Antonov
>
> Let's have this umbrella jira to track backports and bugfixes that we want to 
> go into the 1.3.1 release.
> Please add tasks and comment if you want something to be backported.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16780) Since move to protobuf3.1, Cells are limited to 64MB where previous they had no limit

2017-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943816#comment-15943816
 ] 

Hadoop QA commented on HBASE-16780:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
45s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 17s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 4s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
32m 33s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 32s 
{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 128m 46s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
31s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 173m 31s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12860674/HBASE-16780.master.002.patch
 |
| JIRA Issue | HBASE-16780 |
| Optional Tests |  asflicense  javac  javadoc  unit  xml  compile  |
| uname | Linux 9d05bc3e796b 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / c77e213 |
| Default Java | 1.8.0_121 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6233/testReport/ |
| modules | C: hbase-protocol-shaded hbase-server U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6233/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Since move to protobuf3.1, Cells are limited to 64MB where previous they had 
> no limit
> -
>
> Key: HBASE-16780
> URL: https://issues.apache.org/jira/browse/HBASE-16780
> Project: HBase
>  Issue Type: Bug
>  Components: Protobufs
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: stack
>Pr

[jira] [Commented] (HBASE-14417) Incremental backup and bulk loading

2017-03-27 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15943830#comment-15943830
 ] 

Vladimir Rodionov commented on HBASE-14417:
---

Yes, let me do one more round today.

> Incremental backup and bulk loading
> ---
>
> Key: HBASE-14417
> URL: https://issues.apache.org/jira/browse/HBASE-14417
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vladimir Rodionov
>Assignee: Ted Yu
>Priority: Blocker
>  Labels: backup
> Fix For: 2.0
>
> Attachments: 14417-tbl-ext.v10.txt, 14417-tbl-ext.v11.txt, 
> 14417-tbl-ext.v14.txt, 14417-tbl-ext.v18.txt, 14417-tbl-ext.v19.txt, 
> 14417-tbl-ext.v20.txt, 14417-tbl-ext.v21.txt, 14417-tbl-ext.v22.txt, 
> 14417-tbl-ext.v23.txt, 14417-tbl-ext.v9.txt, 14417.v11.txt, 14417.v13.txt, 
> 14417.v1.txt, 14417.v21.txt, 14417.v23.txt, 14417.v24.txt, 14417.v25.txt, 
> 14417.v2.txt, 14417.v6.txt
>
>
> Currently, incremental backup is based on WAL files. Bulk data loading 
> bypasses WALs for obvious reasons, breaking incremental backups. The only way 
> to continue backups after bulk loading is to create a new full backup of the 
> table. This may not be feasible for customers who do bulk loading regularly 
> (say, every day).
> Here is the review board (out of date):
> https://reviews.apache.org/r/54258/
> In order not to miss the hfiles that are loaded into region directories when 
> the postBulkLoadHFile() hook is not called (e.g. the bulk load is 
> interrupted), we record hfile names through the preCommitStoreFile() hook.
> At incremental backup time, we check for the presence of such hfiles. If they 
> are present, they become part of the incremental backup image.
> Here is the current review board:
> https://reviews.apache.org/r/57790/
> Google doc for design:
> https://docs.google.com/document/d/1ACCLsecHDvzVSasORgqqRNrloGx4mNYIbvAU7lq5lJE



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Resolved] (HBASE-17830) [C++] Test Util support for standlone HBase instance

2017-03-27 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar resolved HBASE-17830.
---
   Resolution: Fixed
Fix Version/s: HBASE-14850

Pushed this to branch. Thanks Sudeep. 

> [C++] Test Util support for standlone HBase instance
> 
>
> Key: HBASE-17830
> URL: https://issues.apache.org/jira/browse/HBASE-17830
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sudeep Sunthankar
>Assignee: Sudeep Sunthankar
> Fix For: HBASE-14850
>
> Attachments: HBASE-17830.HBASE-14850.v1.patch
>
>
> Running a standalone instance was removed from TestUtil after the introduction 
> of the mini cluster. We are re-introducing methods to run a standalone 
> instance if required.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HBASE-17750) Update RS Chore that computes Region sizes to avoid double-counting on rematerialized tables

2017-03-27 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser reassigned HBASE-17750:
--

Assignee: Josh Elser

> Update RS Chore that computes Region sizes to avoid double-counting on 
> rematerialized tables
> 
>
> Key: HBASE-17750
> URL: https://issues.apache.org/jira/browse/HBASE-17750
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Josh Elser
>Assignee: Josh Elser
>
> When a table is restored from a snapshot, it will reference files that are 
> also referenced by the snapshot (and potentially the source table). We need 
> to make sure that these restored tables do not also "count" the size of those 
> files, as that would make the reported FS utilization incorrect.
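
For intuition, a hedged sketch of what "don't double-count" could look like when summing a region's store files; the HFileLink/StoreFileInfo helpers exist in HBase, but the real chore logic for this sub-task may well differ:

{code}
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.io.HFileLink;
import org.apache.hadoop.hbase.regionserver.StoreFileInfo;

// Sketch: sum store file sizes but skip links/references that point at files
// owned by the snapshot (or the source table), so they are not counted twice.
public final class RegionSizeSketch {
  static long sizeOfFamilyDir(FileSystem fs, Path familyDir) throws IOException {
    long size = 0;
    for (FileStatus status : fs.listStatus(familyDir)) {
      Path p = status.getPath();
      if (HFileLink.isHFileLink(p) || StoreFileInfo.isReference(p)) {
        continue;  // rematerialized data; its size is accounted to the owner
      }
      size += status.getLen();
    }
    return size;
  }
}
{code}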



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

