[jira] [Commented] (HBASE-18212) In Standalone mode with local filesystem HBase logs Warning message:Failed to invoke 'unbuffer' method in class class org.apache.hadoop.fs.FSDataInputStream

2017-06-15 Thread Umesh Agashe (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16051424#comment-16051424
 ] 

Umesh Agashe commented on HBASE-18212:
--

+1 lgtm

> In Standalone mode with local filesystem HBase logs Warning message:Failed to 
> invoke 'unbuffer' method in class class org.apache.hadoop.fs.FSDataInputStream
> 
>
> Key: HBASE-18212
> URL: https://issues.apache.org/jira/browse/HBASE-18212
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Umesh Agashe
> Attachments: HBASE-18212.patch
>
>
> New users may get nervous after seeing the following warning-level log 
> messages (new users will most likely run HBase in Standalone mode first):
> {code}
> WARN  [MemStoreFlusher.1] io.FSDataInputStreamWrapper: Failed to invoke 
> 'unbuffer' method in class class org.apache.hadoop.fs.FSDataInputStream . So 
> there may be a TCP socket connection left open in CLOSE_WAIT state.
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18225) Fix findbugs regression calling toString() on an array

2017-06-15 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16051410#comment-16051410
 ] 

stack commented on HBASE-18225:
---

Thanks [~elserj] for cleaning up my mess. +1.

> Fix findbugs regression calling toString() on an array
> --
>
> Key: HBASE-18225
> URL: https://issues.apache.org/jira/browse/HBASE-18225
> Project: HBase
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Trivial
> Fix For: 2.0.0, 3.0.0
>
> Attachments: HBASE-18225.001.patch
>
>
> Looks like we got a findbugs warning as a result of HBASE-18166
> {code}
> diff --git 
> a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
>  
> b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
> index 1d04944250..b7e0244aa2 100644
> --- 
> a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
> +++ 
> b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
> @@ -2807,8 +2807,8 @@ public class RSRpcServices implements 
> HBaseRPCErrorHandler,
>  HRegionInfo hri = rsh.s.getRegionInfo();
>  // Yes, should be the same instance
>  if (regionServer.getOnlineRegion(hri.getRegionName()) != rsh.r) {
> -  String msg = "Region was re-opened after the scanner" + scannerName + 
> " was created: "
> -  + hri.getRegionNameAsString();
> +  String msg = "Region has changed on the scanner " + scannerName + ": 
> regionName="
> +  + hri.getRegionName() + ", scannerRegionName=" + rsh.r;
> {code}
> Looks like {{hri.getRegionNameAsString()}} was unintentionally changed to 
> {{hri.getRegionName()}}, [~syuanjiang]/[~stack]?
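The regression can be reproduced outside HBase. A minimal sketch (the class and methods below are illustrative stand-ins for HRegionInfo's accessors, not HBase's real code) of why concatenating a byte[] into a log message trips findbugs: Object.toString() on an array yields a type/hash token like "[B@1b6d3586", not the contents.

```java
import java.nio.charset.StandardCharsets;

public class ArrayToStringDemo {
    // Stand-in for HRegionInfo#getRegionName(), which returns raw bytes.
    public static byte[] getRegionName() {
        return "table,,1497500000000.abcdef".getBytes(StandardCharsets.UTF_8);
    }

    // Stand-in for getRegionNameAsString(): decode the bytes explicitly.
    public static String getRegionNameAsString() {
        return new String(getRegionName(), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String bad = "regionName=" + getRegionName();          // "[B@..." token
        String good = "regionName=" + getRegionNameAsString(); // readable name

        System.out.println(bad.contains("[B@"));   // true: useless in a log line
        System.out.println(good);
    }
}
```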





[jira] [Updated] (HBASE-18220) Compaction scanners need not reopen storefile scanners while trying to switch over from pread to stream

2017-06-15 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-18220:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to master and branch-2. Thanks for the reviews [~stack] and [~ted_yu].

> Compaction scanners need not reopen storefile scanners while trying to switch 
> over from pread to stream
> ---
>
> Key: HBASE-18220
> URL: https://issues.apache.org/jira/browse/HBASE-18220
> Project: HBase
>  Issue Type: Sub-task
>  Components: Compaction
>Affects Versions: 2.0.0, 3.0.0, 2.0.0-alpha-1
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0, 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18220.patch
>
>
> We try to switch over to a stream scanner once we have read more than a 
> certain number of bytes. In the case of compaction we already have 
> stream-based scanners, but on calling shipped() we close and reopen the 
> scanners again, which is unwanted. 
> [~Apache9]
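The pread-to-stream switch described above can be sketched in miniature (names and the threshold are illustrative, not HBase's actual StoreScanner internals): a scanner switches to streaming after a byte threshold, but a compaction scanner is already streaming, so no reopen should happen.

```java
public class SwitchSketch {
    public enum Mode { PREAD, STREAM }

    // Hypothetical switch-over threshold; HBase's is configurable.
    public static final long THRESHOLD = 4L * 1024 * 1024;

    public static Mode modeAfterRead(Mode current, long bytesRead, boolean compaction) {
        if (compaction) {
            return Mode.STREAM;  // already streaming; nothing to close and reopen
        }
        return bytesRead > THRESHOLD ? Mode.STREAM : current;
    }

    public static void main(String[] args) {
        System.out.println(modeAfterRead(Mode.PREAD, 1024, false));     // PREAD
        System.out.println(modeAfterRead(Mode.PREAD, 8L << 20, false)); // STREAM
        System.out.println(modeAfterRead(Mode.STREAM, 0, true));        // STREAM
    }
}
```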





[jira] [Updated] (HBASE-18144) Forward-port the old exclusive row lock; there are scenarios where it performs better

2017-06-15 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-18144:
--
Attachment: HBASE-18144.master.001.patch

> Forward-port the old exclusive row lock; there are scenarios where it 
> performs better
> -
>
> Key: HBASE-18144
> URL: https://issues.apache.org/jira/browse/HBASE-18144
> Project: HBase
>  Issue Type: Bug
>  Components: Increment
>Affects Versions: 1.2.5
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0, 1.3.2, 1.2.7
>
> Attachments: DisorderedBatchAndIncrementUT.patch, 
> HBASE-18144.master.001.patch
>
>
> Description to follow.





[jira] [Commented] (HBASE-18220) Compaction scanners need not reopen storefile scanners while trying to switch over from pread to stream

2017-06-15 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16051397#comment-16051397
 ] 

ramkrishna.s.vasudevan commented on HBASE-18220:


bq.On why my name no longer pops up in JIRA, I think its because I got banned 
because of all the bad code I've done down through the years! (Not sure!)
I thought it was the other way around: JIRA has decided not to disturb you 
every time unless it is something really important, in return for all the hard 
work you have done through the years.

> Compaction scanners need not reopen storefile scanners while trying to switch 
> over from pread to stream
> ---
>
> Key: HBASE-18220
> URL: https://issues.apache.org/jira/browse/HBASE-18220
> Project: HBase
>  Issue Type: Sub-task
>  Components: Compaction
>Affects Versions: 2.0.0, 3.0.0, 2.0.0-alpha-1
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0, 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18220.patch
>
>
> We try to switch over to a stream scanner once we have read more than a 
> certain number of bytes. In the case of compaction we already have 
> stream-based scanners, but on calling shipped() we close and reopen the 
> scanners again, which is unwanted. 
> [~Apache9]





[jira] [Updated] (HBASE-18212) In Standalone mode with local filesystem HBase logs Warning message:Failed to invoke 'unbuffer' method in class class org.apache.hadoop.fs.FSDataInputStream

2017-06-15 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-18212:
--
Affects Version/s: 1.4.0
   Status: Patch Available  (was: Open)

> In Standalone mode with local filesystem HBase logs Warning message:Failed to 
> invoke 'unbuffer' method in class class org.apache.hadoop.fs.FSDataInputStream
> 
>
> Key: HBASE-18212
> URL: https://issues.apache.org/jira/browse/HBASE-18212
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.4.0
>Reporter: Umesh Agashe
> Attachments: HBASE-18212.patch
>
>
> New users may get nervous after seeing the following warning-level log 
> messages (new users will most likely run HBase in Standalone mode first):
> {code}
> WARN  [MemStoreFlusher.1] io.FSDataInputStreamWrapper: Failed to invoke 
> 'unbuffer' method in class class org.apache.hadoop.fs.FSDataInputStream . So 
> there may be a TCP socket connection left open in CLOSE_WAIT state.
> {code}





[jira] [Updated] (HBASE-18212) In Standalone mode with local filesystem HBase logs Warning message:Failed to invoke 'unbuffer' method in class class org.apache.hadoop.fs.FSDataInputStream

2017-06-15 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-18212:
--
Attachment: HBASE-18212.patch

Attached a patch moving the log level from warn to trace.
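The idea behind the fix can be sketched with plain reflection (class and method names below are illustrative, not HBase's actual FSDataInputStreamWrapper): try the optional unbuffer method and log its absence at trace level rather than warn, since local filesystems legitimately lack it. As an aside, the doubled "class class" in the original message likely comes from concatenating getClass(), whose toString() already begins with "class".

```java
import java.lang.reflect.Method;
import java.util.logging.Level;
import java.util.logging.Logger;

public class UnbufferSketch {
    private static final Logger LOG = Logger.getLogger(UnbufferSketch.class.getName());

    // Test double: a stream-like object that does expose unbuffer().
    public static class WithUnbuffer {
        public void unbuffer() { /* no-op for the sketch */ }
    }

    // Returns true if 'unbuffer' was found and invoked on the stream.
    public static boolean tryUnbuffer(Object stream) {
        try {
            Method m = stream.getClass().getMethod("unbuffer");
            m.invoke(stream);
            return true;
        } catch (NoSuchMethodException e) {
            // Before the patch this was WARN; trace (FINEST here) avoids
            // alarming standalone-mode users where the method is absent.
            LOG.log(Level.FINEST, "Failed to invoke 'unbuffer' method in class "
                + stream.getClass() + "; a TCP socket may be left in CLOSE_WAIT", e);
            return false;
        } catch (ReflectiveOperationException e) {
            LOG.log(Level.FINEST, "unbuffer invocation failed", e);
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(tryUnbuffer(new Object()));        // false: no such method
        System.out.println(tryUnbuffer(new WithUnbuffer()));  // true
    }
}
```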

> In Standalone mode with local filesystem HBase logs Warning message:Failed to 
> invoke 'unbuffer' method in class class org.apache.hadoop.fs.FSDataInputStream
> 
>
> Key: HBASE-18212
> URL: https://issues.apache.org/jira/browse/HBASE-18212
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Umesh Agashe
> Attachments: HBASE-18212.patch
>
>
> New users may get nervous after seeing the following warning-level log 
> messages (new users will most likely run HBase in Standalone mode first):
> {code}
> WARN  [MemStoreFlusher.1] io.FSDataInputStreamWrapper: Failed to invoke 
> 'unbuffer' method in class class org.apache.hadoop.fs.FSDataInputStream . So 
> there may be a TCP socket connection left open in CLOSE_WAIT state.
> {code}





[jira] [Commented] (HBASE-18220) Compaction scanners need not reopen storefile scanners while trying to switch over from pread to stream

2017-06-15 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16051385#comment-16051385
 ] 

stack commented on HBASE-18220:
---

Ok on the above reasoning. Thanks.

On why my name no longer pops up in JIRA, I think its because I got banned 
because of all the bad code I've done down through the years! (Not sure!)

> Compaction scanners need not reopen storefile scanners while trying to switch 
> over from pread to stream
> ---
>
> Key: HBASE-18220
> URL: https://issues.apache.org/jira/browse/HBASE-18220
> Project: HBase
>  Issue Type: Sub-task
>  Components: Compaction
>Affects Versions: 2.0.0, 3.0.0, 2.0.0-alpha-1
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0, 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18220.patch
>
>
> We try to switch over to a stream scanner once we have read more than a 
> certain number of bytes. In the case of compaction we already have 
> stream-based scanners, but on calling shipped() we close and reopen the 
> scanners again, which is unwanted. 
> [~Apache9]





[jira] [Updated] (HBASE-18104) [AMv2] Enable aggregation of RPCs (assigns/unassigns, etc.)

2017-06-15 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-18104:
--
Attachment: HBASE-18104.master.001.patch

Retry

> [AMv2] Enable aggregation of RPCs (assigns/unassigns, etc.)
> ---
>
> Key: HBASE-18104
> URL: https://issues.apache.org/jira/browse/HBASE-18104
> Project: HBase
>  Issue Type: Sub-task
>  Components: Region Assignment
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: Umesh Agashe
> Fix For: 2.0.0
>
> Attachments: HBASE-18104.master.001.patch, 
> HBASE-18104.master.001.patch
>
>
> Machinery is in place to coalesce AMv2 RPCs (Assigns, Unassigns). It needs 
> enabling and verification. From '6.3 We don’t do the aggregating of Assigns' 
> of 
> https://docs.google.com/document/d/1eVKa7FHdeoJ1-9o8yZcOTAQbv0u0bblBlCCzVSIn69g/edit#heading=h.uuwvci2r2tz4





[jira] [Commented] (HBASE-18213) Add documentation about the new async client

2017-06-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16051372#comment-16051372
 ] 

Hadoop QA commented on HBASE-18213:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
24s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
24s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 4s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
5s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
28m 9s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha3. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 4s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 127m 22s 
{color} | {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 169m 39s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.coprocessor.TestCoprocessorMetrics |
|   | hadoop.hbase.master.procedure.TestMasterProcedureWalLease |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:757bf37 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12873204/HBASE-18213-v1.patch |
| JIRA Issue | HBASE-18213 |
| Optional Tests |  asflicense  javac  javadoc  unit  |
| uname | Linux e0f1b73bd383 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / dd1d81e |
| Default Java | 1.8.0_131 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7198/artifact/patchprocess/patch-unit-root.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/7198/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7198/testReport/ |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7198/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Add documentation about the new async client
> 
>
> Key: HBASE-18213
> URL: https://issues.apache.org/jira/browse/HBASE-18213
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client, documentation
>Affects Versions: 3.0.0, 2.0.0-alpha-1
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18213.patch, HBASE-18213-v1.patch
>
>






[jira] [Commented] (HBASE-18220) Compaction scanners need not reopen storefile scanners while trying to switch over from pread to stream

2017-06-15 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16051358#comment-16051358
 ] 

ramkrishna.s.vasudevan commented on HBASE-18220:


[~stack]
An out-of-the-box question: why does your name not pop up in the dropdown when 
we try to use the '@' prefix?

> Compaction scanners need not reopen storefile scanners while trying to switch 
> over from pread to stream
> ---
>
> Key: HBASE-18220
> URL: https://issues.apache.org/jira/browse/HBASE-18220
> Project: HBase
>  Issue Type: Sub-task
>  Components: Compaction
>Affects Versions: 2.0.0, 3.0.0, 2.0.0-alpha-1
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0, 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18220.patch
>
>
> We try to switch over to a stream scanner once we have read more than a 
> certain number of bytes. In the case of compaction we already have 
> stream-based scanners, but on calling shipped() we close and reopen the 
> scanners again, which is unwanted. 
> [~Apache9]





[jira] [Commented] (HBASE-18220) Compaction scanners need not reopen storefile scanners while trying to switch over from pread to stream

2017-06-15 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16051357#comment-16051357
 ] 

ramkrishna.s.vasudevan commented on HBASE-18220:


Thanks for the review [~stack]
bq.The new param should be in ScanInfo rather than as a new parameter on the 
constructor?
The ScanInfo is immutable information shared by all scans pertaining to that 
HStore, whereas the ScanType is decided on the fly. Also, I am not passing any 
new parameter: the scanType is already available on the public StoreScanner 
constructor, and I am just passing the same one through to an internal 
StoreScanner constructor.
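The design point in the reply can be sketched in miniature (these are illustrative stand-ins, not HBase's real ScanInfo/StoreScanner): per-store configuration stays immutable and shared, while the per-scan type travels as a constructor parameter.

```java
public class ScanTypeSketch {
    public enum ScanType { USER_SCAN, COMPACT_RETAIN_DELETES, COMPACT_DROP_DELETES }

    // Shared, immutable per-store configuration: one instance serves
    // every scan on the store, so per-scan state must not live here.
    public static final class ScanInfo {
        public final String family;
        public final int maxVersions;
        public ScanInfo(String family, int maxVersions) {
            this.family = family;
            this.maxVersions = maxVersions;
        }
    }

    public static final class StoreScanner {
        public final ScanInfo info;  // same object for every scan on the store
        public final ScanType type;  // decided on the fly for this scan
        public StoreScanner(ScanInfo info, ScanType type) {
            this.info = info;
            this.type = type;
        }
        public boolean isCompaction() {
            return type != ScanType.USER_SCAN;
        }
    }

    public static void main(String[] args) {
        ScanInfo shared = new ScanInfo("cf", 3);
        System.out.println(new StoreScanner(shared, ScanType.USER_SCAN).isCompaction());            // false
        System.out.println(new StoreScanner(shared, ScanType.COMPACT_DROP_DELETES).isCompaction()); // true
    }
}
```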

> Compaction scanners need not reopen storefile scanners while trying to switch 
> over from pread to stream
> ---
>
> Key: HBASE-18220
> URL: https://issues.apache.org/jira/browse/HBASE-18220
> Project: HBase
>  Issue Type: Sub-task
>  Components: Compaction
>Affects Versions: 2.0.0, 3.0.0, 2.0.0-alpha-1
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0, 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18220.patch
>
>
> We try to switch over to a stream scanner once we have read more than a 
> certain number of bytes. In the case of compaction we already have 
> stream-based scanners, but on calling shipped() we close and reopen the 
> scanners again, which is unwanted. 
> [~Apache9]





[jira] [Commented] (HBASE-18225) Fix findbugs regression calling toString() on an array

2017-06-15 Thread Stephen Yuan Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16051356#comment-16051356
 ] 

Stephen Yuan Jiang commented on HBASE-18225:


Looks good to me.  

> Fix findbugs regression calling toString() on an array
> --
>
> Key: HBASE-18225
> URL: https://issues.apache.org/jira/browse/HBASE-18225
> Project: HBase
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Trivial
> Fix For: 2.0.0, 3.0.0
>
> Attachments: HBASE-18225.001.patch
>
>
> Looks like we got a findbugs warning as a result of HBASE-18166
> {code}
> diff --git 
> a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
>  
> b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
> index 1d04944250..b7e0244aa2 100644
> --- 
> a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
> +++ 
> b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
> @@ -2807,8 +2807,8 @@ public class RSRpcServices implements 
> HBaseRPCErrorHandler,
>  HRegionInfo hri = rsh.s.getRegionInfo();
>  // Yes, should be the same instance
>  if (regionServer.getOnlineRegion(hri.getRegionName()) != rsh.r) {
> -  String msg = "Region was re-opened after the scanner" + scannerName + 
> " was created: "
> -  + hri.getRegionNameAsString();
> +  String msg = "Region has changed on the scanner " + scannerName + ": 
> regionName="
> +  + hri.getRegionName() + ", scannerRegionName=" + rsh.r;
> {code}
> Looks like {{hri.getRegionNameAsString()}} was unintentionally changed to 
> {{hri.getRegionName()}}, [~syuanjiang]/[~stack]?





[jira] [Commented] (HBASE-18225) Fix findbugs regression calling toString() on an array

2017-06-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16051343#comment-16051343
 ] 

Hadoop QA commented on HBASE-18225:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
22s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 39s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
48s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 56s 
{color} | {color:red} hbase-server in master has 1 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
31m 24s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha3. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
58s {color} | {color:green} hbase-server generated 0 new + 0 unchanged - 1 
fixed = 0 total (was 1) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 131m 8s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 175m 58s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.master.assignment.TestAssignmentManager |
| Timed out junit tests | 
org.apache.hadoop.hbase.coprocessor.TestCoprocessorMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.03.0-ce Server=17.03.0-ce Image:yetus/hbase:757bf37 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12873201/HBASE-18225.001.patch 
|
| JIRA Issue | HBASE-18225 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 20ad48a682d7 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / dd1d81e |
| Default Java | 1.8.0_131 |
| findbugs | v3.0.0 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7197/artifact/patchprocess/branch-findbugs-hbase-server-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7197/artifact/patchprocess/patch-unit-hbase-server.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/7197/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 

[jira] [Commented] (HBASE-18133) Low-latency space quota size reports

2017-06-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16051330#comment-16051330
 ] 

Hadoop QA commented on HBASE-18133:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 12 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
24s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
29s {color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 43s 
{color} | {color:red} hbase-server in master has 1 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
29m 38s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha3. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 20s 
{color} | {color:green} hbase-hadoop-compat in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 24s 
{color} | {color:green} hbase-hadoop2-compat in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 21m 48s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
32s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 23s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestKeepDeletes |
|   | hadoop.hbase.io.encoding.TestSeekBeforeWithReverseScan |
|   | hadoop.hbase.regionserver.TestScanWithBloomError |
|   | hadoop.hbase.filter.TestDependentColumnFilter |
|   | hadoop.hbase.regionserver.TestMinVersions |
|   | hadoop.hbase.regionserver.TestScanner |
|   | hadoop.hbase.regionserver.TestStoreFileRefresherChore |
|   | hadoop.hbase.coprocessor.TestRegionObserverStacking |
|   | hadoop.hbase.regionserver.TestBlocksScanned |
|   | hadoop.hbase.regionserver.TestResettingCounters |
|   | hadoop.hbase.coprocessor.TestCoprocessorInterface |
|   | hadoop.hbase.filter.TestFilter |
|   | hadoop.hbase.filter.TestFilterFromRegionSide |
|   | hadoop.hbase.io.encoding.TestPrefixTree |
|   | hadoop.hbase.client.TestIntraRowPagination |
|   | hadoop.hbase.filter.TestInvocationRecordFilter |
|   | 

[jira] [Commented] (HBASE-17840) Update book

2017-06-15 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17840?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16051318#comment-16051318
 ] 

Mike Drob commented on HBASE-17840:
---

Patch v2 changes end-of-line whitespace in several places. I'm all for cleaning 
it up, but thought I'd point it out in case you wanted to keep your changeset 
minimal.

Unclear what happens if a snapshot is taken, a new table materialized, and the 
original table deleted. I expect the new table to own the file, but I'm not 
sure any longer.

LGTM otherwise.

Maybe a diagram would be helpful, but that sounds pretty difficult so I 
wouldn't worry about it yet.

> Update book
> ---
>
> Key: HBASE-17840
> URL: https://issues.apache.org/jira/browse/HBASE-17840
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Josh Elser
>Assignee: Josh Elser
> Attachments: HBASE-17840.001.patch, HBASE-17840.002.patch
>
>
> Need to update the book to include the new feature.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17752) Update reporting RPCs/Shell commands to break out space utilization by snapshot

2017-06-15 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16051315#comment-16051315
 ] 

Mike Drob commented on HBASE-17752:
---

{code}
+  final Map snapshotSizes = new HashMap<>();
{code}
Nit: Can we estimate the size of this map before construction?

{code}
+assertEquals(actualInitialSize, size.longValue());
{code}
Good to have a message here describing the failure. (Both places)


Would be nice to have a test for the new shell command.
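The pre-sizing nit above is about avoiding rehashes when the final entry count is known before construction. A minimal sketch of the idea (the helper name and the capacity math are illustrative, using the default 0.75 load factor; this is not the patch's actual code):

```java
import java.util.HashMap;
import java.util.Map;

public class PresizedMap {
  // Size the backing table so that `expected` entries never trigger a resize
  // under HashMap's default 0.75 load factor.
  public static <K, V> Map<K, V> newMapWithExpectedSize(int expected) {
    int capacity = (int) (expected / 0.75f) + 1;
    return new HashMap<>(capacity);
  }

  public static void main(String[] args) {
    // e.g. one entry per known snapshot
    Map<String, Long> snapshotSizes = newMapWithExpectedSize(2);
    snapshotSizes.put("snapshot_a", 5L);
    snapshotSizes.put("snapshot_b", 2L);
    System.out.println(snapshotSizes.size());
  }
}
```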

> Update reporting RPCs/Shell commands to break out space utilization by 
> snapshot
> ---
>
> Key: HBASE-17752
> URL: https://issues.apache.org/jira/browse/HBASE-17752
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Josh Elser
>Assignee: Josh Elser
> Attachments: HBASE-17752.001.patch, HBASE-17752.002.patch, 
> HBASE-17752.003.patch
>
>
> For administrators running HBase with space quotas, it is useful to provide a 
> breakdown of the utilization of a table. For example, it may be non-intuitive 
> that a table's utilization is primarily made up of snapshots. We should 
> provide a new command or modify existing commands such that an admin can see 
> the utilization for a table/ns:
> e.g.
> {noformat}
> table1:   17GB
>   resident:   10GB
>   snapshot_a: 5GB
>   snapshot_b: 2GB
> {noformat}





[jira] [Commented] (HBASE-17752) Update reporting RPCs/Shell commands to break out space utilization by snapshot

2017-06-15 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16051311#comment-16051311
 ] 

Ted Yu commented on HBASE-17752:


{code}
+  public static Scan createScanForSpaceSnapshotSizes(TableName table) {
{code}
The above method can be package private.

The rest looks good.

> Update reporting RPCs/Shell commands to break out space utilization by 
> snapshot
> ---
>
> Key: HBASE-17752
> URL: https://issues.apache.org/jira/browse/HBASE-17752
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Josh Elser
>Assignee: Josh Elser
> Attachments: HBASE-17752.001.patch, HBASE-17752.002.patch, 
> HBASE-17752.003.patch
>
>
> For administrators running HBase with space quotas, it is useful to provide a 
> breakdown of the utilization of a table. For example, it may be non-intuitive 
> that a table's utilization is primarily made up of snapshots. We should 
> provide a new command or modify existing commands such that an admin can see 
> the utilization for a table/ns:
> e.g.
> {noformat}
> table1:   17GB
>   resident:   10GB
>   snapshot_a: 5GB
>   snapshot_b: 2GB
> {noformat}





[jira] [Updated] (HBASE-18133) Low-latency space quota size reports

2017-06-15 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-18133:
---
Attachment: HBASE-18133.001.patch

.001 which was a nice little change in the end.

Still requires HBASE-17752 and HBASE-17840 to land.

> Low-latency space quota size reports
> 
>
> Key: HBASE-18133
> URL: https://issues.apache.org/jira/browse/HBASE-18133
> Project: HBase
>  Issue Type: Improvement
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 3.0.0
>
> Attachments: HBASE-18133.001.patch
>
>
> Presently space quota enforcement relies on RegionServers sending reports to 
> the master about each Region that they host. This is done by periodically 
> reading the cached size of each HFile in each Region (which was ultimately 
> computed from HDFS).
> This means that the Master is unaware of Region size growth until the 
> next time this chore in a RegionServer fires, which is a fair amount of 
> latency (a few minutes, by default). Operations like flushes, compactions, 
> and bulk-loads are delayed even though the RegionServer is running those 
> operations locally.
> Instead, we can create an API which these operations could invoke that would 
> automatically update the size of the Region being operated on. For example, a 
> successful flush can report that the size of a Region increased by the size 
> of the flush. A compaction can subtract the size of the input files of the 
> compaction and add in the size of the resulting file.
> This de-couples the computation of a Region's size from sending the Region 
> sizes to the Master, allowing us to send reports more frequently, increasing 
> the responsiveness of the cluster to size changes.
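The incremental-reporting idea described above can be sketched as a small accumulator that flush and compaction callbacks update, so the Region size reflects deltas immediately instead of waiting on the periodic chore. All names here (class, methods) are hypothetical stand-ins, not the actual HBase API:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

public class RegionSizeTracker {
  private final ConcurrentHashMap<String, AtomicLong> sizes = new ConcurrentHashMap<>();

  // A successful flush adds the size of the newly written HFile.
  public void onFlush(String region, long flushedBytes) {
    sizes.computeIfAbsent(region, r -> new AtomicLong()).addAndGet(flushedBytes);
  }

  // A compaction subtracts the size of its input files and adds the size
  // of the resulting file.
  public void onCompaction(String region, long inputBytes, long outputBytes) {
    sizes.computeIfAbsent(region, r -> new AtomicLong()).addAndGet(outputBytes - inputBytes);
  }

  public long getSize(String region) {
    AtomicLong s = sizes.get(region);
    return s == null ? 0L : s.get();
  }
}
```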





[jira] [Updated] (HBASE-18133) Low-latency space quota size reports

2017-06-15 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-18133:
---
Status: Patch Available  (was: Open)

> Low-latency space quota size reports
> 
>
> Key: HBASE-18133
> URL: https://issues.apache.org/jira/browse/HBASE-18133
> Project: HBase
>  Issue Type: Improvement
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 3.0.0
>
> Attachments: HBASE-18133.001.patch
>
>
> Presently space quota enforcement relies on RegionServers sending reports to 
> the master about each Region that they host. This is done by periodically 
> reading the cached size of each HFile in each Region (which was ultimately 
> computed from HDFS).
> This means that the Master is unaware of Region size growth until the 
> next time this chore in a RegionServer fires, which is a fair amount of 
> latency (a few minutes, by default). Operations like flushes, compactions, 
> and bulk-loads are delayed even though the RegionServer is running those 
> operations locally.
> Instead, we can create an API which these operations could invoke that would 
> automatically update the size of the Region being operated on. For example, a 
> successful flush can report that the size of a Region increased by the size 
> of the flush. A compaction can subtract the size of the input files of the 
> compaction and add in the size of the resulting file.
> This de-couples the computation of a Region's size from sending the Region 
> sizes to the Master, allowing us to send reports more frequently, increasing 
> the responsiveness of the cluster to size changes.





[jira] [Updated] (HBASE-18133) Low-latency space quota size reports

2017-06-15 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-18133:
---
Fix Version/s: 3.0.0

> Low-latency space quota size reports
> 
>
> Key: HBASE-18133
> URL: https://issues.apache.org/jira/browse/HBASE-18133
> Project: HBase
>  Issue Type: Improvement
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 3.0.0
>
> Attachments: HBASE-18133.001.patch
>
>
> Presently space quota enforcement relies on RegionServers sending reports to 
> the master about each Region that they host. This is done by periodically 
> reading the cached size of each HFile in each Region (which was ultimately 
> computed from HDFS).
> This means that the Master is unaware of Region size growth until the 
> next time this chore in a RegionServer fires, which is a fair amount of 
> latency (a few minutes, by default). Operations like flushes, compactions, 
> and bulk-loads are delayed even though the RegionServer is running those 
> operations locally.
> Instead, we can create an API which these operations could invoke that would 
> automatically update the size of the Region being operated on. For example, a 
> successful flush can report that the size of a Region increased by the size 
> of the flush. A compaction can subtract the size of the input files of the 
> compaction and add in the size of the resulting file.
> This de-couples the computation of a Region's size from sending the Region 
> sizes to the Master, allowing us to send reports more frequently, increasing 
> the responsiveness of the cluster to size changes.





[jira] [Updated] (HBASE-18213) Add documentation about the new async client

2017-06-15 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-18213:
--
Attachment: HBASE-18213-v1.patch

Missed a section terminator...

> Add documentation about the new async client
> 
>
> Key: HBASE-18213
> URL: https://issues.apache.org/jira/browse/HBASE-18213
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client, documentation
>Affects Versions: 3.0.0, 2.0.0-alpha-1
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18213.patch, HBASE-18213-v1.patch
>
>






[jira] [Commented] (HBASE-17752) Update reporting RPCs/Shell commands to break out space utilization by snapshot

2017-06-15 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16051277#comment-16051277
 ] 

Josh Elser commented on HBASE-17752:


TestShellRSGroups seems to be hanging due to some of the on-going AMv2 work, 
and TestCoprocessorMetrics has been flapping.

[~tedyu], can you please take a look at v3 at your convenience?

> Update reporting RPCs/Shell commands to break out space utilization by 
> snapshot
> ---
>
> Key: HBASE-17752
> URL: https://issues.apache.org/jira/browse/HBASE-17752
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Josh Elser
>Assignee: Josh Elser
> Attachments: HBASE-17752.001.patch, HBASE-17752.002.patch, 
> HBASE-17752.003.patch
>
>
> For administrators running HBase with space quotas, it is useful to provide a 
> breakdown of the utilization of a table. For example, it may be non-intuitive 
> that a table's utilization is primarily made up of snapshots. We should 
> provide a new command or modify existing commands such that an admin can see 
> the utilization for a table/ns:
> e.g.
> {noformat}
> table1:   17GB
>   resident:   10GB
>   snapshot_a: 5GB
>   snapshot_b: 2GB
> {noformat}





[jira] [Updated] (HBASE-18128) compaction marker could be skipped

2017-06-15 Thread Jingyun Tian (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingyun Tian updated HBASE-18128:
-
Resolution: Not A Problem
Status: Resolved  (was: Patch Available)

> compaction marker could be skipped 
> ---
>
> Key: HBASE-18128
> URL: https://issues.apache.org/jira/browse/HBASE-18128
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction, regionserver
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
> Attachments: HBASE-18128-master.patch, HBASE-18128-master-v2.patch, 
> HBASE-18128-master-v3.patch, TestCompactionMarker.java
>
>
> The sequence for a compaction is as follows:
> 1. Compaction writes new files under region/.tmp directory (compaction output)
> 2. Compaction atomically moves the temporary file under region directory
> 3. Compaction appends a WAL edit containing the compaction input and output 
> files. Forces sync on WAL.
> 4. Compaction deletes the input files from the region directory.
> But if a flush happens between steps 3 and 4 and the regionserver then 
> crashes, the compaction marker will be skipped when splitting the log because 
> the sequence id of the compaction marker is smaller than lastFlushedSequenceId.
> {code}
> if (lastFlushedSequenceId >= entry.getKey().getLogSeqNum()) {
>   editsSkipped++;
>   continue;
> }
> {code}
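The guard quoted above drops every edit at or below lastFlushedSequenceId, including the compaction marker written in step 3. A simplified, self-contained sketch of the skip loop with a special case for compaction markers (the Entry type and isCompactionMarker flag are stand-ins, not the real WAL classes; the issue was ultimately resolved as Not A Problem, so this is purely illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class LogSplitSketch {
  public static class Entry {
    public final long seqNum;
    public final boolean isCompactionMarker;

    public Entry(long seqNum, boolean isCompactionMarker) {
      this.seqNum = seqNum;
      this.isCompactionMarker = isCompactionMarker;
    }
  }

  // Replay every edit above the flushed point, but never drop a compaction
  // marker merely because its sequence id is below lastFlushedSequenceId.
  public static List<Entry> selectForReplay(List<Entry> log, long lastFlushedSequenceId) {
    List<Entry> keep = new ArrayList<>();
    for (Entry e : log) {
      if (lastFlushedSequenceId >= e.seqNum && !e.isCompactionMarker) {
        continue; // already covered by the flush; safe to skip
      }
      keep.add(e);
    }
    return keep;
  }
}
```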





[jira] [Commented] (HBASE-18128) compaction marker could be skipped

2017-06-15 Thread Jingyun Tian (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16051281#comment-16051281
 ] 

Jingyun Tian commented on HBASE-18128:
--

[~allan163] I got your point. That makes sense. Thanks for review.

> compaction marker could be skipped 
> ---
>
> Key: HBASE-18128
> URL: https://issues.apache.org/jira/browse/HBASE-18128
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction, regionserver
>Reporter: Jingyun Tian
>Assignee: Jingyun Tian
> Attachments: HBASE-18128-master.patch, HBASE-18128-master-v2.patch, 
> HBASE-18128-master-v3.patch, TestCompactionMarker.java
>
>
> The sequence for a compaction is as follows:
> 1. Compaction writes new files under region/.tmp directory (compaction output)
> 2. Compaction atomically moves the temporary file under region directory
> 3. Compaction appends a WAL edit containing the compaction input and output 
> files. Forces sync on WAL.
> 4. Compaction deletes the input files from the region directory.
> But if a flush happens between steps 3 and 4 and the regionserver then 
> crashes, the compaction marker will be skipped when splitting the log because 
> the sequence id of the compaction marker is smaller than lastFlushedSequenceId.
> {code}
> if (lastFlushedSequenceId >= entry.getKey().getLogSeqNum()) {
>   editsSkipped++;
>   continue;
> }
> {code}





[jira] [Updated] (HBASE-18225) Fix findbugs regression calling toString() on an array

2017-06-15 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-18225:
---
Status: Patch Available  (was: Open)

> Fix findbugs regression calling toString() on an array
> --
>
> Key: HBASE-18225
> URL: https://issues.apache.org/jira/browse/HBASE-18225
> Project: HBase
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Trivial
> Fix For: 2.0.0, 3.0.0
>
> Attachments: HBASE-18225.001.patch
>
>
> Looks like we got a findbugs warning as a result of HBASE-18166
> {code}
> diff --git 
> a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
>  
> b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
> index 1d04944250..b7e0244aa2 100644
> --- 
> a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
> +++ 
> b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
> @@ -2807,8 +2807,8 @@ public class RSRpcServices implements 
> HBaseRPCErrorHandler,
>  HRegionInfo hri = rsh.s.getRegionInfo();
>  // Yes, should be the same instance
>  if (regionServer.getOnlineRegion(hri.getRegionName()) != rsh.r) {
> -  String msg = "Region was re-opened after the scanner" + scannerName + 
> " was created: "
> -  + hri.getRegionNameAsString();
> +  String msg = "Region has changed on the scanner " + scannerName + ": 
> regionName="
> +  + hri.getRegionName() + ", scannerRegionName=" + rsh.r;
> {code}
> Looks like {{hri.getRegionNameAsString()}} was unintentionally changed to 
> {{hri.getRegionName()}}, [~syuanjiang]/[~stack]?





[jira] [Updated] (HBASE-18225) Fix findbugs regression calling toString() on an array

2017-06-15 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-18225:
---
Attachment: HBASE-18225.001.patch

.001 trivial fix.

> Fix findbugs regression calling toString() on an array
> --
>
> Key: HBASE-18225
> URL: https://issues.apache.org/jira/browse/HBASE-18225
> Project: HBase
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Trivial
> Fix For: 2.0.0, 3.0.0
>
> Attachments: HBASE-18225.001.patch
>
>
> Looks like we got a findbugs warning as a result of HBASE-18166
> {code}
> diff --git 
> a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
>  
> b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
> index 1d04944250..b7e0244aa2 100644
> --- 
> a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
> +++ 
> b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
> @@ -2807,8 +2807,8 @@ public class RSRpcServices implements 
> HBaseRPCErrorHandler,
>  HRegionInfo hri = rsh.s.getRegionInfo();
>  // Yes, should be the same instance
>  if (regionServer.getOnlineRegion(hri.getRegionName()) != rsh.r) {
> -  String msg = "Region was re-opened after the scanner" + scannerName + 
> " was created: "
> -  + hri.getRegionNameAsString();
> +  String msg = "Region has changed on the scanner " + scannerName + ": 
> regionName="
> +  + hri.getRegionName() + ", scannerRegionName=" + rsh.r;
> {code}
> Looks like {{hri.getRegionNameAsString()}} was unintentionally changed to 
> {{hri.getRegionName()}}, [~syuanjiang]/[~stack]?





[jira] [Created] (HBASE-18225) Fix findbugs regression calling toString() on an array

2017-06-15 Thread Josh Elser (JIRA)
Josh Elser created HBASE-18225:
--

 Summary: Fix findbugs regression calling toString() on an array
 Key: HBASE-18225
 URL: https://issues.apache.org/jira/browse/HBASE-18225
 Project: HBase
  Issue Type: Bug
Reporter: Josh Elser
Assignee: Josh Elser
Priority: Trivial
 Fix For: 2.0.0, 3.0.0


Looks like we got a findbugs warning as a result of HBASE-18166

{code}
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
index 1d04944250..b7e0244aa2 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
@@ -2807,8 +2807,8 @@ public class RSRpcServices implements 
HBaseRPCErrorHandler,
 HRegionInfo hri = rsh.s.getRegionInfo();
 // Yes, should be the same instance
 if (regionServer.getOnlineRegion(hri.getRegionName()) != rsh.r) {
-  String msg = "Region was re-opened after the scanner" + scannerName + " 
was created: "
-  + hri.getRegionNameAsString();
+  String msg = "Region has changed on the scanner " + scannerName + ": 
regionName="
+  + hri.getRegionName() + ", scannerRegionName=" + rsh.r;
{code}

Looks like {{hri.getRegionNameAsString()}} was unintentionally changed to 
{{hri.getRegionName()}}, [~syuanjiang]/[~stack]?
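The findbugs warning fires because getRegionName() returns a byte[], and concatenating an array into a String invokes Object.toString(), which prints a type-and-hash token instead of the name. A minimal demonstration of the bug and the fix (new String(bytes, UTF_8) stands in here for the HBase helper that getRegionNameAsString() uses):

```java
import java.nio.charset.StandardCharsets;

public class ArrayToStringDemo {
  public static String broken(byte[] regionName) {
    // Implicit toString() on an array: yields something like "[B@1b6d3586".
    return "regionName=" + regionName;
  }

  public static String fixed(byte[] regionName) {
    // Decode the bytes instead, as getRegionNameAsString() does.
    return "regionName=" + new String(regionName, StandardCharsets.UTF_8);
  }

  public static void main(String[] args) {
    byte[] name = "table1,,1497500000000.abcdef".getBytes(StandardCharsets.UTF_8);
    System.out.println(broken(name)); // e.g. regionName=[B@<hash>
    System.out.println(fixed(name));  // regionName=table1,,1497500000000.abcdef
  }
}
```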





[jira] [Commented] (HBASE-18166) [AMv2] We are splitting already-split files

2017-06-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16051225#comment-16051225
 ] 

Hudson commented on HBASE-18166:


SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #3201 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/3201/])
HBASE-18166 [AMv2] We are splitting already-split files v2 Address (stack: rev 
f64512bee2454fc3728fe5d344a838781e26)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/SplitTableRegionProcedure.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/IncreasingToUpperBoundRegionSplitPolicy.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileInfo.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
HBASE-18166 [AMv2] We are splitting already-split files v2 Address (stack: rev 
c2eebfdb613427fa3314b7ee13c3b9f34ce4c120)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/SplitTableRegionProcedure.java


> [AMv2] We are splitting already-split files
> ---
>
> Key: HBASE-18166
> URL: https://issues.apache.org/jira/browse/HBASE-18166
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0
>
> Attachments: HBASE-18166.master.001.patch, 
> HBASE-18166.master.002.patch
>
>
> Interesting issue. The below adds a lag cleaning up files after a compaction 
> in case of on-going Scanners (for read replicas/offheap).
> HBASE-14970 Backport HBASE-13082 and its sub-jira to branch-1 - recommit (Ram)
> What the lag means is that now that split is run from the HMaster in master 
> branch, when it goes to get a listing of the files to split, it can pick up 
> files that are for archiving but that have not been archived yet.  When it 
> does, it goes ahead and splits them... making references of references.
> It's a mess.
> I added asking the Region if it is splittable a while back. The Master calls 
> this from SplitTableRegionProcedure during preparation. If the RegionServer 
> asked for the split, it is sort of redundant work given the RS asks itself if 
> any references still; if any, it'll wait before asking for a split. But if a 
> user/client asks, then this isSplittable over RPC comes in handy.
> I was thinking that isSplittable could return a list of files 
> Or, easier, given we know a region is Splittable by the time we go to split 
> the files, then I think master-side we can just skip any references found 
> presuming read-for-archive.
> Will be back with a patch. Want to test on cluster first (Side-effect is 
> regions are offline because file at end of the reference to a reference is 
> removed ... and so the open fails).





[jira] [Commented] (HBASE-17898) Update dependencies

2017-06-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16051226#comment-16051226
 ] 

Hudson commented on HBASE-17898:


SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #3201 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/3201/])
HBASE-17898 Update dependencies (stack: rev 
479f3edc5c4e298d6764feb2759bc0c29f062646)
* (edit) pom.xml
* (edit) hbase-thrift/pom.xml


> Update dependencies
> ---
>
> Key: HBASE-17898
> URL: https://issues.apache.org/jira/browse/HBASE-17898
> Project: HBase
>  Issue Type: Task
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: Balazs Meszaros
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-17898-BM-0001.patch, HBASE-17898-BM-0002.patch, 
> HBASE-17898-BM-0003.patch, HBASE-17898-BM-0004.patch
>
>
> General issue to cover updating old, stale dependencies for hbase2 release. 
> Lets make subissues doing each.





[jira] [Commented] (HBASE-18004) getRegionLocations needs to be called once in ScannerCallableWithReplicas#call()

2017-06-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16051227#comment-16051227
 ] 

Hudson commented on HBASE-18004:


SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #3201 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/3201/])
HBASE-18004 getRegionLocations needs to be called once in (stack: rev 
dd1d81ef5a673deb7fbfbf59c1b51b4f83b1666c)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ScannerCallableWithReplicas.java


> getRegionLocations  needs to be called once in 
> ScannerCallableWithReplicas#call()
> -
>
> Key: HBASE-18004
> URL: https://issues.apache.org/jira/browse/HBASE-18004
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-18004-master-001.patch, 
> HBASE-18004-master-002.patch
>
>
> Look at this line,
> https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ScannerCallableWithReplicas.java#L145
> It calls getRegionLocations() to get the primary region's locations. Its 
> usage is to figure out the table's region replication. Since a table's region 
> replication won't change until the table is disabled, it is safe to cache 
> this region replication.
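The caching suggested above amounts to resolving the replica count once and reusing it for the lifetime of the callable. A self-contained sketch (the IntSupplier stands in for the getRegionLocations() RPC; class and method names are invented for illustration):

```java
import java.util.function.IntSupplier;

public class CachedRegionReplication {
  private final IntSupplier lookup; // stands in for getRegionLocations().size()
  private int cached = -1;

  public CachedRegionReplication(IntSupplier lookup) {
    this.lookup = lookup;
  }

  // Region replication cannot change while the table is enabled, so one
  // lookup per callable is enough; later calls return the cached value.
  public int regionReplication() {
    if (cached < 0) {
      cached = lookup.getAsInt();
    }
    return cached;
  }
}
```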





[jira] [Commented] (HBASE-18023) Log multi-* requests for more than threshold number of rows

2017-06-15 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16051221#comment-16051221
 ] 

Josh Elser commented on HBASE-18023:


{quote}
I added the "(See https://issues.apache.org/jira/browse/HBASE-18023)" reference 
in response to stack's suggestion to add to the log line, "...a pointer to doc 
or issue on why many small batches will go down better than a few massive 
ones". If there's a better doc or issue to reference I can replace it but 
otherwise I can remove the reference altogether.
{quote}

Sorry for giving you conflicting suggestions :D. We could go a bit more 
specific: "A single client is sending large requests", or just trim the URL 
down to "HBASE-18023". Not a big deal either way.

{quote}
there does seem to be precedent in the code for creating methods for testing 
purposes only so I'll go ahead and make those "for testing purposes only" 
public access points (either public methods delivering the logging string or a 
public ctor for RSRpcServices which takes in some kind of logging delegate).
{quote}

Yep, this is pretty common. For hot-codepaths, the JIT will optimize away small 
methods and we won't see any significant performance impact of an extra method. 
You can make the method package-private which gives a decent amount of 
encapsulation while still allowing testing.

> Log multi-* requests for more than threshold number of rows
> ---
>
> Key: HBASE-18023
> URL: https://issues.apache.org/jira/browse/HBASE-18023
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: Clay B.
>Assignee: David Harju
>Priority: Minor
> Attachments: HBASE-18023.master.001.patch
>
>
> Today, if a user happens to do something like a large multi-put, they can get 
> through request throttling (e.g. it is one request) but still crash a region 
> server with a garbage storm. We have seen regionservers hit this issue and it 
> is silent and deadly. The RS will report nothing more than a mysterious 
> garbage collection and exit out.
> Ideally, we could report a large multi-* request before starting it, in case 
> it happens to be deadly. Knowing the client, user and how many rows are 
> affected would be a good start to tracking down painful users.
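The "report a large multi-* request before starting it" idea above can be sketched as a threshold check at the head of the multi handler. The names here (threshold field, message wording) are illustrative, not the patch's actual code:

```java
public class LargeBatchWarner {
  private final int rowThreshold;

  public LargeBatchWarner(int rowThreshold) {
    this.rowThreshold = rowThreshold;
  }

  // Returns the warning that would be logged before executing the batch,
  // or null if the batch is under the threshold.
  public String warningFor(String client, int rowCount) {
    if (rowCount <= rowThreshold) {
      return null;
    }
    return "Large batch from " + client + ": " + rowCount
        + " rows exceeds threshold " + rowThreshold
        + "; many small batches are gentler on the server (see HBASE-18023)";
  }
}
```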





[jira] [Commented] (HBASE-18104) [AMv2] Enable aggregation of RPCs (assigns/unassigns, etc.)

2017-06-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16051217#comment-16051217
 ] 

Hadoop QA commented on HBASE-18104:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 34s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
23s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 41s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
50s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 54s 
{color} | {color:red} hbase-server in master has 1 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
29m 12s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha3. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 112m 4s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
21s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 155m 5s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | 
org.apache.hadoop.hbase.coprocessor.TestCoprocessorMetrics |
|   | org.apache.hadoop.hbase.mob.compactions.TestMobCompactor |
|   | org.apache.hadoop.hbase.replication.TestSerialReplication |
|   | org.apache.hadoop.hbase.replication.TestMasterReplication |
|   | org.apache.hadoop.hbase.TestPartialResultsFromClientSide |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.03.0-ce Server=17.03.0-ce Image:yetus/hbase:757bf37 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12873179/HBASE-18104.master.001.patch
 |
| JIRA Issue | HBASE-18104 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux f07dc1a1b1f9 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / dd1d81e |
| Default Java | 1.8.0_131 |
| findbugs | v3.0.0 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7196/artifact/patchprocess/branch-findbugs-hbase-server-warnings.html
 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7196/artifact/patchprocess/patch-unit-hbase-server.txt
 |
| unit test logs 

[jira] [Commented] (HBASE-18023) Log multi-* requests for more than threshold number of rows

2017-06-15 Thread David Harju (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16051205#comment-16051205
 ] 

David Harju commented on HBASE-18023:
-

Thanks [~elserj]!

Good suggestions, I'll work to implement them and then post a new patch.

I added the "(See https://issues.apache.org/jira/browse/HBASE-18023)" reference 
in response to [~stack]'s suggestion to add to the log line, "...a pointer to 
doc or issue on why many small batches will go down better than a few massive 
ones".  If there's a better doc or issue to reference I can replace it but 
otherwise I can remove the reference altogether.

As for the mocking suggestion at the bottom: I was resistant to expanding or 
creating any new public methods for objects in the patch (which I may need to 
do in order to do the mock verification you suggest), which is why I went with 
the more brittle verification you saw. There does, however, seem to be precedent 
in the code for creating methods for testing purposes only, so I'll go ahead and 
make those "for testing purposes only" public access points (either public 
methods delivering the logging string or a public ctor for RSRpcServices which 
takes in some kind of logging delegate).



> Log multi-* requests for more than threshold number of rows
> ---
>
> Key: HBASE-18023
> URL: https://issues.apache.org/jira/browse/HBASE-18023
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Reporter: Clay B.
>Assignee: David Harju
>Priority: Minor
> Attachments: HBASE-18023.master.001.patch
>
>
> Today, if a user happens to do something like a large multi-put, they can get 
> through request throttling (e.g. it is one request) but still crash a region 
> server with a garbage storm. We have seen regionservers hit this issue and it 
> is silent and deadly. The RS will report nothing more than a mysterious 
> garbage collection and exit out.
> Ideally, we could report a large multi-* request before starting it, in case 
> it happens to be deadly. Knowing the client, user and how many rows are 
> affected would be a good start to tracking down painful users.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18218) List replication peers for the cluster

2017-06-15 Thread Ali (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ali updated HBASE-18218:

Status: Open  (was: Patch Available)

> List replication peers for the cluster
> --
>
> Key: HBASE-18218
> URL: https://issues.apache.org/jira/browse/HBASE-18218
> Project: HBase
>  Issue Type: New Feature
>Reporter: Ali
>Assignee: Ali
> Attachments: HBASE-18218.v1.branch-1.2.patch, screenshot-1.png
>
>
> HBase Master page that listed all the replication peers for a cluster, with 
> their associated metadata





[jira] [Commented] (HBASE-18212) In Standalone mode with local filesystem HBase logs Warning message:Failed to invoke 'unbuffer' method in class class org.apache.hadoop.fs.FSDataInputStream

2017-06-15 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16051163#comment-16051163
 ] 

Andrew Purtell commented on HBASE-18212:


I don't think we need to over-think it. Fine to go to TRACE IMHO

> In Standalone mode with local filesystem HBase logs Warning message:Failed to 
> invoke 'unbuffer' method in class class org.apache.hadoop.fs.FSDataInputStream
> 
>
> Key: HBASE-18212
> URL: https://issues.apache.org/jira/browse/HBASE-18212
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Umesh Agashe
>
> New users may get nervous after seeing the following warning-level log messages 
> (considering new users will most likely run HBase in Standalone mode first):
> {code}
> WARN  [MemStoreFlusher.1] io.FSDataInputStreamWrapper: Failed to invoke 
> 'unbuffer' method in class class org.apache.hadoop.fs.FSDataInputStream . So 
> there may be a TCP socket connection left open in CLOSE_WAIT state.
> {code}
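The failure mode in the quoted warning can be sketched with a minimal reflection probe: the wrapper looks up an `unbuffer` method at runtime and warns when it is absent. This is an illustrative sketch only; the class name `UnbufferSketch` and its methods are hypothetical, not the actual FSDataInputStreamWrapper code.

```java
import java.lang.reflect.Method;

// Hypothetical sketch of the reflection-based call behind the WARN message:
// if the wrapped stream exposes no public unbuffer() method, the caller can
// only log that a TCP socket may linger in CLOSE_WAIT.
public class UnbufferSketch {

    /** Minimal stand-in for a stream that does expose unbuffer(). */
    public static class Demo {
        public void unbuffer() { /* no-op for the sketch */ }
    }

    // Returns true if an 'unbuffer' method was found and invoked.
    public static boolean tryUnbuffer(Object stream) {
        try {
            Method m = stream.getClass().getMethod("unbuffer");
            m.invoke(stream);
            return true;
        } catch (NoSuchMethodException e) {
            // The case the WARN message describes: the method is absent on
            // this Hadoop version's stream class.
            return false;
        } catch (ReflectiveOperationException e) {
            return false;
        }
    }
}
```

Under this sketch, lowering the log level (as discussed in the comment) only changes how loudly the `false` branch is reported.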





[jira] [Commented] (HBASE-9393) Hbase does not closing a closed socket resulting in many CLOSE_WAIT

2017-06-15 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16051162#comment-16051162
 ] 

Mikhail Antonov commented on HBASE-9393:


Missed this one. Yeah, that would be good to get to branch-1.3 [~busbey] - 
thanks!

> Hbase does not closing a closed socket resulting in many CLOSE_WAIT 
> 
>
> Key: HBASE-9393
> URL: https://issues.apache.org/jira/browse/HBASE-9393
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.2, 0.98.0, 1.0.1.1, 1.1.2
> Environment: Centos 6.4 - 7 regionservers/datanodes, 8 TB per node, 
> 7279 regions
>Reporter: Avi Zrachya
>Assignee: Ashish Singhi
>Priority: Critical
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-9393-branch-1.patch, HBASE-9393.patch, 
> HBASE-9393.v10.patch, HBASE-9393.v11.patch, HBASE-9393.v12.patch, 
> HBASE-9393.v13.patch, HBASE-9393.v14.patch, HBASE-9393.v15.patch, 
> HBASE-9393.v15.patch, HBASE-9393.v16.patch, HBASE-9393.v16.patch, 
> HBASE-9393.v17.patch, HBASE-9393.v18.patch, HBASE-9393.v1.patch, 
> HBASE-9393.v2.patch, HBASE-9393.v3.patch, HBASE-9393.v4.patch, 
> HBASE-9393.v5.patch, HBASE-9393.v5.patch, HBASE-9393.v5.patch, 
> HBASE-9393.v6.patch, HBASE-9393.v6.patch, HBASE-9393.v6.patch, 
> HBASE-9393.v7.patch, HBASE-9393.v8.patch, HBASE-9393.v9.patch
>
>
> HBase does not close a dead connection with the datanode.
> This results in over 60K sockets in CLOSE_WAIT, and at some point HBase can 
> not connect to the datanode because there are too many mapped sockets from 
> one host to another on the same port.
> The example below shows a low CLOSE_WAIT count because we had to restart 
> HBase to solve the problem; over time it will increase to 60-100K sockets 
> in CLOSE_WAIT.
> [root@hd2-region3 ~]# netstat -nap |grep CLOSE_WAIT |grep 21592 |wc -l
> 13156
> [root@hd2-region3 ~]# ps -ef |grep 21592
> root 17255 17219  0 12:26 pts/000:00:00 grep 21592
> hbase21592 1 17 Aug29 ?03:29:06 
> /usr/java/jdk1.6.0_26/bin/java -XX:OnOutOfMemoryError=kill -9 %p -Xmx8000m 
> -ea -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode 
> -Dhbase.log.dir=/var/log/hbase 
> -Dhbase.log.file=hbase-hbase-regionserver-hd2-region3.swnet.corp.log ...





[jira] [Commented] (HBASE-18218) List replication peers for the cluster

2017-06-15 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16051161#comment-16051161
 ] 

Ted Yu commented on HBASE-18218:


Please attach patch for master branch ?

> List replication peers for the cluster
> --
>
> Key: HBASE-18218
> URL: https://issues.apache.org/jira/browse/HBASE-18218
> Project: HBase
>  Issue Type: New Feature
>Reporter: Ali
>Assignee: Ali
> Attachments: HBASE-18218.v1.branch-1.2.patch, screenshot-1.png
>
>
> HBase Master page that listed all the replication peers for a cluster, with 
> their associated metadata





[jira] [Commented] (HBASE-16415) Replication in different namespace

2017-06-15 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16051159#comment-16051159
 ] 

Ted Yu commented on HBASE-16415:


Looking at the existing classes, not all of them start with HBase.
e.g. RegionReplicaReplicationEndpoint

I think RedirectingInterClusterReplicationEndpoint should be good enough.

> Replication in different namespace
> --
>
> Key: HBASE-16415
> URL: https://issues.apache.org/jira/browse/HBASE-16415
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Reporter: Christian Guegi
>Assignee: Jan Kunigk
>
> It would be nice to replicate tables from one namespace to another namespace.
> Example:
> Master cluster, namespace=default, table=bar
> Slave cluster, namespace=dr, table=bar
> Replication happens in class ReplicationSink:
>   public void replicateEntries(List<WALEntry> entries, final CellScanner 
> cells, ...) {
> ...
> TableName table = 
> TableName.valueOf(entry.getKey().getTableName().toByteArray());
> ...
> addToHashMultiMap(rowMap, table, clusterIds, m);
> ...
> for (Entry<TableName, Map<List<UUID>, List<Row>>> entry : 
> rowMap.entrySet()) {
>   batch(entry.getKey(), entry.getValue().values());
> }
>}
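The requested redirection could hinge on a small table-name remapping step before the sink batches mutations. Below is a minimal sketch assuming a configured namespace-to-namespace map; the class `NamespaceRemap` and its methods are hypothetical illustrations, not the ReplicationSink or ReplicationEndpoint API.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: rewrite the namespace component of a fully-qualified
// table name ("ns:table") before replicated edits are applied on the sink.
public class NamespaceRemap {
    private final Map<String, String> namespaceMap = new HashMap<>();

    public NamespaceRemap(Map<String, String> namespaceMap) {
        this.namespaceMap.putAll(namespaceMap);
    }

    // "default:bar" -> "dr:bar" when the map contains default -> dr.
    // Names without a namespace component are treated as "default".
    public String remap(String fullTableName) {
        int i = fullTableName.indexOf(':');
        String ns = i < 0 ? "default" : fullTableName.substring(0, i);
        String table = i < 0 ? fullTableName : fullTableName.substring(i + 1);
        String target = namespaceMap.getOrDefault(ns, ns);
        return target + ":" + table;
    }
}
```

In the example from the description, a map of {default -> dr} would send the master cluster's default:bar edits to dr:bar on the slave cluster.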





[jira] [Commented] (HBASE-17752) Update reporting RPCs/Shell commands to break out space utilization by snapshot

2017-06-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16051156#comment-16051156
 ] 

Hadoop QA commented on HBASE-17752:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} rubocop {color} | {color:blue} 0m 1s 
{color} | {color:blue} rubocop was not available. {color} |
| {color:blue}0{color} | {color:blue} ruby-lint {color} | {color:blue} 0m 1s 
{color} | {color:blue} Ruby-lint was not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
11s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 7s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
10s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
38s {color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 47s 
{color} | {color:red} hbase-server in master has 1 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 58s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha3. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 26s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 113m 25s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 25s {color} 
| {color:red} hbase-shell in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
48s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 178m 41s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.coprocessor.TestCoprocessorMetrics |
| Timed out junit tests | 
org.apache.hadoop.hbase.client.rsgroup.TestShellRSGroups |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:757bf37 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12873170/HBASE-17752.003.patch 
|
| JIRA Issue | HBASE-17752 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  rubocop  ruby_lint  |
| uname | Linux f12cd44487ad 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 

[jira] [Commented] (HBASE-18166) [AMv2] We are splitting already-split files

2017-06-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16051141#comment-16051141
 ] 

Hudson commented on HBASE-18166:


FAILURE: Integrated in Jenkins build HBase-2.0 #49 (See 
[https://builds.apache.org/job/HBase-2.0/49/])
HBASE-18166 [AMv2] We are splitting already-split files v2 Address (stack: rev 
8c7bf7b0a92beac1dcb1f6a59d1057f7838bdc91)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/SplitTableRegionProcedure.java


> [AMv2] We are splitting already-split files
> ---
>
> Key: HBASE-18166
> URL: https://issues.apache.org/jira/browse/HBASE-18166
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0
>
> Attachments: HBASE-18166.master.001.patch, 
> HBASE-18166.master.002.patch
>
>
> Interesting issue. The below adds a lag cleaning up files after a compaction 
> in case of on-going Scanners (for read replicas/offheap).
> HBASE-14970 Backport HBASE-13082 and its sub-jira to branch-1 - recommit (Ram)
> What the lag means is that now that split is run from the HMaster in master 
> branch, when it goes to get a listing of the files to split, it can pick up 
> files that are for archiving but that have not been archived yet.  When it 
> does, it goes ahead and splits them... making references of references.
> It's a mess.
> I added asking the Region if it is splittable a while back. The Master calls 
> this from SplitTableRegionProcedure during preparation. If the RegionServer 
> asked for the split, it is somewhat redundant work, given the RS first asks 
> itself whether any references remain; if any, it'll wait before asking for a 
> split. But if a user/client asks, then this isSplittable over RPC comes in handy.
> I was thinking that isSplittable could return a list of files.
> Or, easier, given we know a region is splittable by the time we go to split 
> the files, then I think master-side we can just skip any references found, 
> presuming they are ready-for-archive.
> Will be back with a patch. Want to test on cluster first (Side-effect is 
> regions are offline because file at end of the reference to a reference is 
> removed ... and so the open fails).





[jira] [Commented] (HBASE-18004) getRegionLocations needs to be called once in ScannerCallableWithReplicas#call()

2017-06-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16051143#comment-16051143
 ] 

Hudson commented on HBASE-18004:


FAILURE: Integrated in Jenkins build HBase-2.0 #49 (See 
[https://builds.apache.org/job/HBase-2.0/49/])
HBASE-18004 getRegionLocations needs to be called once in (stack: rev 
4184ae75633bb41ba85c640c3981e2b436d46829)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ScannerCallableWithReplicas.java


> getRegionLocations  needs to be called once in 
> ScannerCallableWithReplicas#call()
> -
>
> Key: HBASE-18004
> URL: https://issues.apache.org/jira/browse/HBASE-18004
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-18004-master-001.patch, 
> HBASE-18004-master-002.patch
>
>
> Look at this line,
> https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ScannerCallableWithReplicas.java#L145
> It calls getRegionLocations() to get the primary region's locations. Its 
> usage is to figure out the table's region replication. Since the table's 
> region replication won't change until the table is disabled, it is safe to 
> cache this region replication.





[jira] [Commented] (HBASE-17898) Update dependencies

2017-06-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16051142#comment-16051142
 ] 

Hudson commented on HBASE-17898:


FAILURE: Integrated in Jenkins build HBase-2.0 #49 (See 
[https://builds.apache.org/job/HBase-2.0/49/])
HBASE-17898 Update dependencies (stack: rev 
d023508b5b0deb5b2f9b7edaf59b0c95bfdf3b47)
* (edit) hbase-thrift/pom.xml
* (edit) pom.xml


> Update dependencies
> ---
>
> Key: HBASE-17898
> URL: https://issues.apache.org/jira/browse/HBASE-17898
> Project: HBase
>  Issue Type: Task
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: Balazs Meszaros
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-17898-BM-0001.patch, HBASE-17898-BM-0002.patch, 
> HBASE-17898-BM-0003.patch, HBASE-17898-BM-0004.patch
>
>
> General issue to cover updating old, stale dependencies for hbase2 release. 
> Lets make subissues doing each.





[jira] [Commented] (HBASE-18164) Much faster locality cost function and candidate generator

2017-06-15 Thread Kahlil Oppenheimer (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16051116#comment-16051116
 ] 

Kahlil Oppenheimer commented on HBASE-18164:


We're running a fork of CDH 5.9 (which is a Cloudera fork of HBase 1.2 with 
patches pulled back from HBase 1.3 and HBase 2.0). 

> Much faster locality cost function and candidate generator
> --
>
> Key: HBASE-18164
> URL: https://issues.apache.org/jira/browse/HBASE-18164
> Project: HBase
>  Issue Type: Improvement
>  Components: Balancer
>Reporter: Kahlil Oppenheimer
>Assignee: Kahlil Oppenheimer
>Priority: Critical
> Attachments: HBASE-18164-00.patch, HBASE-18164-01.patch, 
> HBASE-18164-02.patch
>
>
> We noticed that the stochastic load balancer was not scaling well with 
> cluster size. That is to say, on our smaller clusters (~17 tables, ~12 
> region servers, ~5k regions) the balancer considers ~100,000 cluster 
> configurations per 60s balancer run, but only ~5,000 per 60s on our bigger 
> clusters (~82 tables, ~160 region servers, ~13k regions).
> Because of this, our bigger clusters are not able to converge on balance as 
> quickly for things like table skew, region load, etc. because the balancer 
> does not have enough time to "think".
> We have re-written the locality cost function to be incremental, meaning it 
> only recomputes cost based on the most recent region move proposed by the 
> balancer, rather than recomputing the cost across all regions/servers every 
> iteration.
> Further, we also cache the locality of every region on every server at the 
> beginning of the balancer's execution for both the LocalityBasedCostFunction 
> and the LocalityCandidateGenerator to reference. This way, they need not 
> collect all HDFS blocks of every region at each iteration of the balancer.
> The changes have been running in all 6 of our production clusters and all 4 
> QA clusters without issue. The speed improvements we noticed are massive. Our 
> big clusters now consider 20x more cluster configurations.
> One design decision I made is to consider locality cost as the difference 
> between the best locality that is possible given the current cluster state, 
> and the currently measured locality. The old locality computation would 
> measure the locality cost as the difference from the current locality and 
> 100% locality, but this new computation instead takes the difference between 
> the current locality for a given region and the best locality for that region 
> in the cluster.
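The incremental recomputation and the best-vs-current cost definition described above can be modeled in a few lines. This is an illustrative sketch under assumed names (`LocalitySketch` is hypothetical), not the patch's actual LocalityBasedCostFunction.

```java
// Hypothetical model of the incremental locality cost idea: cache per-region,
// per-server locality once, keep a running cost, and update it in O(1) per
// proposed move instead of rescanning every region each balancer iteration.
public class LocalitySketch {
    // locality[region][server] = fraction of the region's HDFS blocks local
    // to that server (cached once at the start of a balancer run)
    private final double[][] locality;
    private final int[] assignment;   // region -> current server
    private double cost;              // sum over regions of (best - current) locality

    public LocalitySketch(double[][] locality, int[] assignment) {
        this.locality = locality;
        this.assignment = assignment.clone();
        for (int r = 0; r < locality.length; r++) {
            cost += best(r) - locality[r][this.assignment[r]];
        }
    }

    // Best locality achievable for this region on any server in the cluster.
    private double best(int region) {
        double b = 0;
        for (double l : locality[region]) b = Math.max(b, l);
        return b;
    }

    // Apply a proposed move and return the updated total cost. Only the moved
    // region's term changes: delta = loc[old server] - loc[new server].
    public double moveRegion(int region, int toServer) {
        cost += locality[region][assignment[region]] - locality[region][toServer];
        assignment[region] = toServer;
        return cost;
    }
}
```

A cost of zero means every region sits on its best-locality server, matching the "difference from the best possible locality" framing above rather than the old "difference from 100%" framing.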





[jira] [Commented] (HBASE-18164) Much faster locality cost function and candidate generator

2017-06-15 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16051108#comment-16051108
 ] 

Chia-Ping Tsai commented on HBASE-18164:


Which HBase version is used in your production?

> Much faster locality cost function and candidate generator
> --
>
> Key: HBASE-18164
> URL: https://issues.apache.org/jira/browse/HBASE-18164
> Project: HBase
>  Issue Type: Improvement
>  Components: Balancer
>Reporter: Kahlil Oppenheimer
>Assignee: Kahlil Oppenheimer
>Priority: Critical
> Attachments: HBASE-18164-00.patch, HBASE-18164-01.patch, 
> HBASE-18164-02.patch
>
>
> We noticed that the stochastic load balancer was not scaling well with 
> cluster size. That is to say, on our smaller clusters (~17 tables, ~12 
> region servers, ~5k regions) the balancer considers ~100,000 cluster 
> configurations per 60s balancer run, but only ~5,000 per 60s on our bigger 
> clusters (~82 tables, ~160 region servers, ~13k regions).
> Because of this, our bigger clusters are not able to converge on balance as 
> quickly for things like table skew, region load, etc. because the balancer 
> does not have enough time to "think".
> We have re-written the locality cost function to be incremental, meaning it 
> only recomputes cost based on the most recent region move proposed by the 
> balancer, rather than recomputing the cost across all regions/servers every 
> iteration.
> Further, we also cache the locality of every region on every server at the 
> beginning of the balancer's execution for both the LocalityBasedCostFunction 
> and the LocalityCandidateGenerator to reference. This way, they need not 
> collect all HDFS blocks of every region at each iteration of the balancer.
> The changes have been running in all 6 of our production clusters and all 4 
> QA clusters without issue. The speed improvements we noticed are massive. Our 
> big clusters now consider 20x more cluster configurations.
> One design decision I made is to consider locality cost as the difference 
> between the best locality that is possible given the current cluster state, 
> and the currently measured locality. The old locality computation would 
> measure the locality cost as the difference from the current locality and 
> 100% locality, but this new computation instead takes the difference between 
> the current locality for a given region and the best locality for that region 
> in the cluster.





[jira] [Commented] (HBASE-18004) getRegionLocations needs to be called once in ScannerCallableWithReplicas#call()

2017-06-15 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16051087#comment-16051087
 ] 

huaxiang sun commented on HBASE-18004:
--

Thanks Stack, I will put up a branch-1 patch and use Appy's submit-patch.py to 
submit.

> getRegionLocations  needs to be called once in 
> ScannerCallableWithReplicas#call()
> -
>
> Key: HBASE-18004
> URL: https://issues.apache.org/jira/browse/HBASE-18004
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-18004-master-001.patch, 
> HBASE-18004-master-002.patch
>
>
> Look at this line,
> https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ScannerCallableWithReplicas.java#L145
> It calls getRegionLocations() to get the primary region's locations. Its 
> usage is to figure out the table's region replication. Since the table's 
> region replication won't change until the table is disabled, it is safe to 
> cache this region replication.





[jira] [Updated] (HBASE-18104) [AMv2] Enable aggregation of RPCs (assigns/unassigns, etc.)

2017-06-15 Thread Umesh Agashe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Umesh Agashe updated HBASE-18104:
-
Attachment: HBASE-18104.master.001.patch

The unit test (TestAssignmentManager) uses a mock which always aggregates, so 
I added a trace-level log message and verified manually on a single-node cluster.

> [AMv2] Enable aggregation of RPCs (assigns/unassigns, etc.)
> ---
>
> Key: HBASE-18104
> URL: https://issues.apache.org/jira/browse/HBASE-18104
> Project: HBase
>  Issue Type: Sub-task
>  Components: Region Assignment
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: Umesh Agashe
> Fix For: 2.0.0
>
> Attachments: HBASE-18104.master.001.patch
>
>
> Machinery is in place to coalesce AMv2 RPCs (Assigns, Unassigns). It needs 
> enabling and verification. From '6.3 We don’t do the aggregating of Assigns' 
> of 
> https://docs.google.com/document/d/1eVKa7FHdeoJ1-9o8yZcOTAQbv0u0bblBlCCzVSIn69g/edit#heading=h.uuwvci2r2tz4





[jira] [Updated] (HBASE-18104) [AMv2] Enable aggregation of RPCs (assigns/unassigns, etc.)

2017-06-15 Thread Umesh Agashe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Umesh Agashe updated HBASE-18104:
-
Status: Patch Available  (was: In Progress)

> [AMv2] Enable aggregation of RPCs (assigns/unassigns, etc.)
> ---
>
> Key: HBASE-18104
> URL: https://issues.apache.org/jira/browse/HBASE-18104
> Project: HBase
>  Issue Type: Sub-task
>  Components: Region Assignment
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: Umesh Agashe
> Fix For: 2.0.0
>
> Attachments: HBASE-18104.master.001.patch
>
>
> Machinery is in place to coalesce AMv2 RPCs (Assigns, Unassigns). It needs 
> enabling and verification. From '6.3 We don’t do the aggregating of Assigns' 
> of 
> https://docs.google.com/document/d/1eVKa7FHdeoJ1-9o8yZcOTAQbv0u0bblBlCCzVSIn69g/edit#heading=h.uuwvci2r2tz4





[jira] [Work started] (HBASE-18104) [AMv2] Enable aggregation of RPCs (assigns/unassigns, etc.)

2017-06-15 Thread Umesh Agashe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-18104 started by Umesh Agashe.

> [AMv2] Enable aggregation of RPCs (assigns/unassigns, etc.)
> ---
>
> Key: HBASE-18104
> URL: https://issues.apache.org/jira/browse/HBASE-18104
> Project: HBase
>  Issue Type: Sub-task
>  Components: Region Assignment
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: Umesh Agashe
> Fix For: 2.0.0
>
>
> Machinery is in place to coalesce AMv2 RPCs (Assigns, Unassigns). It needs 
> enabling and verification. From '6.3 We don’t do the aggregating of Assigns' 
> of 
> https://docs.google.com/document/d/1eVKa7FHdeoJ1-9o8yZcOTAQbv0u0bblBlCCzVSIn69g/edit#heading=h.uuwvci2r2tz4





[jira] [Created] (HBASE-18224) Upgrate jetty and thrift

2017-06-15 Thread Balazs Meszaros (JIRA)
Balazs Meszaros created HBASE-18224:
---

 Summary: Upgrate jetty and thrift
 Key: HBASE-18224
 URL: https://issues.apache.org/jira/browse/HBASE-18224
 Project: HBase
  Issue Type: Sub-task
Reporter: Balazs Meszaros


Jetty can be updated to 9.4.6 and Thrift can be updated to 0.10.0. I tried to 
update them in HBASE-17898, but some unit tests failed, so I created a sub-task 
for them.





[jira] [Resolved] (HBASE-18134) Re-think if the FileSystemUtilizationChore is still necessary

2017-06-15 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser resolved HBASE-18134.

Resolution: Won't Fix

I did some more thinking about this while working on HBASE-18135.

A goal/feature of HBASE-16961 was that when a RegionServer fails to regularly 
submit size reports for a Region, if a significant percentage of the Regions 
are missing (e.g. >5% of regions by default), the Master will not enforce a 
quota violation on the table. This is meant to be a failsafe for regions stuck 
in transition or generic bugs/flakiness of RegionServers.

This feature is implemented by the Master aging off recorded sizes for regions 
after a given amount of time. As long as the size is (re)reported by a 
RegionServer, the master continues to acknowledge the size of a Region.

If the FileSystemUtilizationChore is removed, the Master will age-off the size 
reports for regions which are idle but may contain space. This would result in 
a situation where the Master would stop enforcing a violation policy for a 
table over quota and not accepting new updates. As such, we cannot implement 
this improvement while also doing the region size report age-off.

My feeling is to avoid the optimization described in this ticket and see what 
some real-life usage of the feature brings. We have metrics which will help us 
understand, at scale, what the impact of this chore is. If scanning the Region 
size on disk is of large impact, we can re-consider.

> Re-think if the FileSystemUtilizationChore is still necessary
> -
>
> Key: HBASE-18134
> URL: https://issues.apache.org/jira/browse/HBASE-18134
> Project: HBase
>  Issue Type: Task
>Reporter: Josh Elser
>Assignee: Josh Elser
>
> On the heels of HBASE-18133, we need to put some thought into whether or not 
> there are cases in which the RegionServer should still report sizes directly 
> from HDFS.
> The cases I have in mind are primarily in the face of RS failure/restart. 
> Ideally, we could get rid of this chore completely.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18004) getRegionLocations needs to be called once in ScannerCallableWithReplicas#call()

2017-06-15 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-18004:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master and branch-2. Put up a branch-1 patch [~huaxiang] and I'll 
backport. FYI, have you seen ./dev-support/submit-patch.py? Ask [~appy] about 
it.

Thanks for the patch sir.

> getRegionLocations  needs to be called once in 
> ScannerCallableWithReplicas#call()
> -
>
> Key: HBASE-18004
> URL: https://issues.apache.org/jira/browse/HBASE-18004
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-18004-master-001.patch, 
> HBASE-18004-master-002.patch
>
>
> Look at this line,
> https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ScannerCallableWithReplicas.java#L145
> It calls getRegionLocations() to get the primary region's locations. Its 
> purpose is to figure out the table's region replication. Since the table's 
> region replication won't change until the table is disabled, it is safe to 
> cache this region replication.
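
The caching proposed here can be sketched as follows. This is an illustrative stand-in, not the actual ScannerCallableWithReplicas code; the lookup method and names are assumptions.

```java
// Sketch: look up the region replication once and reuse it for every call(),
// instead of resolving region locations on each invocation.
class ScannerReplicaSketch {
    private int cachedRegionReplication = -1;  // -1 => not yet looked up

    // Stand-in for the real client resolving the primary region's locations
    // (a meta-cache hit in the common case, an RPC on a miss).
    private int lookupRegionReplication() {
        return 3;
    }

    // Region replication cannot change while the table is enabled, so the
    // first lookup's answer can be cached for subsequent calls.
    int getRegionReplication() {
        if (cachedRegionReplication < 0) {
            cachedRegionReplication = lookupRegionReplication();
        }
        return cachedRegionReplication;
    }
}
```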



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18164) Much faster locality cost function and candidate generator

2017-06-15 Thread Kahlil Oppenheimer (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16051024#comment-16051024
 ] 

Kahlil Oppenheimer commented on HBASE-18164:


I'm just running a few follow-up tests to see if we can remove my explicit 
locality cache without hurting performance. I should have a final patch ready 
by tomorrow morning.

> Much faster locality cost function and candidate generator
> --
>
> Key: HBASE-18164
> URL: https://issues.apache.org/jira/browse/HBASE-18164
> Project: HBase
>  Issue Type: Improvement
>  Components: Balancer
>Reporter: Kahlil Oppenheimer
>Assignee: Kahlil Oppenheimer
>Priority: Critical
> Attachments: HBASE-18164-00.patch, HBASE-18164-01.patch, 
> HBASE-18164-02.patch
>
>
> We noticed that the stochastic load balancer was not scaling well with 
> cluster size. That is to say, on our smaller clusters (~17 tables, ~12 
> region servers, ~5k regions), the balancer considers ~100,000 cluster 
> configurations in 60s per balancer run, but only ~5,000 per 60s on our bigger 
> clusters (~82 tables, ~160 region servers, ~13k regions).
> Because of this, our bigger clusters are not able to converge on balance as 
> quickly for things like table skew, region load, etc. because the balancer 
> does not have enough time to "think".
> We have re-written the locality cost function to be incremental, meaning it 
> only recomputes cost based on the most recent region move proposed by the 
> balancer, rather than recomputing the cost across all regions/servers every 
> iteration.
> Further, we also cache the locality of every region on every server at the 
> beginning of the balancer's execution for both the LocalityBasedCostFunction 
> and the LocalityCandidateGenerator to reference. This way, they need not 
> collect all HDFS blocks of every region at each iteration of the balancer.
> The changes have been running in all 6 of our production clusters and all 4 
> QA clusters without issue. The speed improvements we noticed are massive. Our 
> big clusters now consider 20x more cluster configurations.
> One design decision I made is to consider locality cost as the difference 
> between the best locality that is possible given the current cluster state, 
> and the currently measured locality. The old locality computation would 
> measure the locality cost as the difference from the current locality and 
> 100% locality, but this new computation instead takes the difference between 
> the current locality for a given region and the best locality for that region 
> in the cluster.
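
The incremental recomputation described above can be sketched roughly as follows. The array-based bookkeeping and names are illustrative assumptions, not the actual StochasticLoadBalancer code; the point is that a proposed move updates the cost in O(1) instead of rescanning all regions.

```java
// Sketch of an incremental locality cost: per-region locality on every
// server is cached up front, cost is kept as a running sum of
// (best achievable locality - current locality) per region, and each
// proposed move adjusts only the moved region's contribution.
class IncrementalLocalityCost {
    final double[][] localityOf;  // [region][server], cached at balancer start
    final double[] bestLocality;  // best achievable locality per region
    double cost;                  // running sum of (best - current)

    IncrementalLocalityCost(double[][] localityOf) {
        this.localityOf = localityOf;
        this.bestLocality = new double[localityOf.length];
        for (int r = 0; r < localityOf.length; r++) {
            for (double l : localityOf[r]) {
                bestLocality[r] = Math.max(bestLocality[r], l);
            }
        }
    }

    // O(regions) once, at the start of a balancer run.
    void initialize(int[] regionToServer) {
        cost = 0;
        for (int r = 0; r < regionToServer.length; r++) {
            cost += bestLocality[r] - localityOf[r][regionToServer[r]];
        }
    }

    // O(1) per proposed move, instead of a full recompute per iteration.
    void regionMoved(int region, int fromServer, int toServer) {
        cost += localityOf[region][fromServer] - localityOf[region][toServer];
    }
}
```

Note the cost baseline here is the best locality achievable given current block placement, matching the design decision described in the ticket, rather than a fixed 100%.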



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HBASE-18164) Much faster locality cost function and candidate generator

2017-06-15 Thread Kahlil Oppenheimer (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16051024#comment-16051024
 ] 

Kahlil Oppenheimer edited comment on HBASE-18164 at 6/15/17 8:37 PM:
-

I'm just running a few follow-up tests to see if we can remove the new explicit 
locality cache without hurting performance. I should have a final patch ready 
by tomorrow morning.


was (Author: kahliloppenheimer):
I'm just running a few follow-up tests to see if we can remove my explicit 
locality cache without hurting performance. I should have a final patch ready 
by tomorrow morning.

> Much faster locality cost function and candidate generator
> --
>
> Key: HBASE-18164
> URL: https://issues.apache.org/jira/browse/HBASE-18164
> Project: HBase
>  Issue Type: Improvement
>  Components: Balancer
>Reporter: Kahlil Oppenheimer
>Assignee: Kahlil Oppenheimer
>Priority: Critical
> Attachments: HBASE-18164-00.patch, HBASE-18164-01.patch, 
> HBASE-18164-02.patch
>
>
> We noticed that the stochastic load balancer was not scaling well with 
> cluster size. That is to say, on our smaller clusters (~17 tables, ~12 
> region servers, ~5k regions), the balancer considers ~100,000 cluster 
> configurations in 60s per balancer run, but only ~5,000 per 60s on our bigger 
> clusters (~82 tables, ~160 region servers, ~13k regions).
> Because of this, our bigger clusters are not able to converge on balance as 
> quickly for things like table skew, region load, etc. because the balancer 
> does not have enough time to "think".
> We have re-written the locality cost function to be incremental, meaning it 
> only recomputes cost based on the most recent region move proposed by the 
> balancer, rather than recomputing the cost across all regions/servers every 
> iteration.
> Further, we also cache the locality of every region on every server at the 
> beginning of the balancer's execution for both the LocalityBasedCostFunction 
> and the LocalityCandidateGenerator to reference. This way, they need not 
> collect all HDFS blocks of every region at each iteration of the balancer.
> The changes have been running in all 6 of our production clusters and all 4 
> QA clusters without issue. The speed improvements we noticed are massive. Our 
> big clusters now consider 20x more cluster configurations.
> One design decision I made is to consider locality cost as the difference 
> between the best locality that is possible given the current cluster state, 
> and the currently measured locality. The old locality computation would 
> measure the locality cost as the difference from the current locality and 
> 100% locality, but this new computation instead takes the difference between 
> the current locality for a given region and the best locality for that region 
> in the cluster.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-17898) Update dependencies

2017-06-15 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-17898:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to master and branch-2. Thanks [~balazs.meszaros]

> Update dependencies
> ---
>
> Key: HBASE-17898
> URL: https://issues.apache.org/jira/browse/HBASE-17898
> Project: HBase
>  Issue Type: Task
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: Balazs Meszaros
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-17898-BM-0001.patch, HBASE-17898-BM-0002.patch, 
> HBASE-17898-BM-0003.patch, HBASE-17898-BM-0004.patch
>
>
> General issue to cover updating old, stale dependencies for hbase2 release. 
> Lets make subissues doing each.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18223) Track the effort to improve/bug fix read replica feature

2017-06-15 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16051008#comment-16051008
 ] 

huaxiang sun commented on HBASE-18223:
--

Thanks @stack, there are a few potential issues with hbck which are under 
investigation. Once their root cause is known, jiras will be created and linked 
here.

> Track the effort to improve/bug fix read replica feature
> 
>
> Key: HBASE-18223
> URL: https://issues.apache.org/jira/browse/HBASE-18223
> Project: HBase
>  Issue Type: Task
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>
> During hbasecon 2017, a group of people met and agreed to collaborate on the 
> effort to improve and bug-fix the read replica feature so users can enable 
> this feature in their clusters. This jira is created to track jiras known to 
> be related to the read replica feature.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17898) Update dependencies

2017-06-15 Thread Balazs Meszaros (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16051006#comment-16051006
 ] 

Balazs Meszaros commented on HBASE-17898:
-

It seems that these tests fail on other commits, too.

> Update dependencies
> ---
>
> Key: HBASE-17898
> URL: https://issues.apache.org/jira/browse/HBASE-17898
> Project: HBase
>  Issue Type: Task
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: Balazs Meszaros
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: HBASE-17898-BM-0001.patch, HBASE-17898-BM-0002.patch, 
> HBASE-17898-BM-0003.patch, HBASE-17898-BM-0004.patch
>
>
> General issue to cover updating old, stale dependencies for hbase2 release. 
> Lets make subissues doing each.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18004) getRegionLocations needs to be called once in ScannerCallableWithReplicas#call()

2017-06-15 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16051004#comment-16051004
 ] 

huaxiang sun commented on HBASE-18004:
--

Thanks Stack. I think it is good for master and branch-1.

> getRegionLocations  needs to be called once in 
> ScannerCallableWithReplicas#call()
> -
>
> Key: HBASE-18004
> URL: https://issues.apache.org/jira/browse/HBASE-18004
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
> Attachments: HBASE-18004-master-001.patch, 
> HBASE-18004-master-002.patch
>
>
> Look at this line,
> https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ScannerCallableWithReplicas.java#L145
> It calls getRegionLocations() to get the primary region's locations. Its 
> purpose is to figure out the table's region replication. Since the table's 
> region replication won't change until the table is disabled, it is safe to 
> cache this region replication.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18004) getRegionLocations needs to be called once in ScannerCallableWithReplicas#call()

2017-06-15 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16050994#comment-16050994
 ] 

stack commented on HBASE-18004:
---

Compile was fixed (my fault).

Let me commit this then [~huaxiang]

> getRegionLocations  needs to be called once in 
> ScannerCallableWithReplicas#call()
> -
>
> Key: HBASE-18004
> URL: https://issues.apache.org/jira/browse/HBASE-18004
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
> Attachments: HBASE-18004-master-001.patch, 
> HBASE-18004-master-002.patch
>
>
> Look at this line,
> https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ScannerCallableWithReplicas.java#L145
> It calls getRegionLocations() to get the primary region's locations. Its 
> purpose is to figure out the table's region replication. Since the table's 
> region replication won't change until the table is disabled, it is safe to 
> cache this region replication.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18004) getRegionLocations needs to be called once in ScannerCallableWithReplicas#call()

2017-06-15 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16050995#comment-16050995
 ] 

stack commented on HBASE-18004:
---

[~huaxiang] What versions you want it in?


> getRegionLocations  needs to be called once in 
> ScannerCallableWithReplicas#call()
> -
>
> Key: HBASE-18004
> URL: https://issues.apache.org/jira/browse/HBASE-18004
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
> Attachments: HBASE-18004-master-001.patch, 
> HBASE-18004-master-002.patch
>
>
> Look at this line,
> https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ScannerCallableWithReplicas.java#L145
> It calls getRegionLocations() to get the primary region's locations. Its 
> purpose is to figure out the table's region replication. Since the table's 
> region replication won't change until the table is disabled, it is safe to 
> cache this region replication.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18010) Connect CellChunkMap to be used for flattening in CompactingMemStore

2017-06-15 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16050982#comment-16050982
 ] 

stack commented on HBASE-18010:
---

bq. No big cells or upserted cells are going to be supported here.

When big cells, what happens? Where/how will big cells be addressed?

bq. ...we already have the correct Cell in our hands. Why do we do another logN 
traversal in this.delegatee.get()? Looks like a waste of time...?

Did this issue get filed? Link it here if it did?

Looking up on RB now






> Connect CellChunkMap to be used for flattening in CompactingMemStore
> 
>
> Key: HBASE-18010
> URL: https://issues.apache.org/jira/browse/HBASE-18010
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Attachments: HBASE-18010-V04.patch
>
>
> The CellChunkMap helps to create a new type of ImmutableSegment, where the 
> index (CellSet's delegatee) is going to be CellChunkMap. No big cells or 
> upserted cells are going to be supported here.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-16415) Replication in different namespace

2017-06-15 Thread Jan Kunigk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16050980#comment-16050980
 ] 

Jan Kunigk commented on HBASE-16415:


Yes, that makes sense, thanks for clarifying.
I would extend {code} HBaseInterClusterReplicationEndpoint {code}.

Any restriction / guidance on the naming? I would use {code} 
HBaseRedirectingInterClusterReplicationEndpoint {code} for the new subclass.

J

> Replication in different namespace
> --
>
> Key: HBASE-16415
> URL: https://issues.apache.org/jira/browse/HBASE-16415
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Reporter: Christian Guegi
>Assignee: Jan Kunigk
>
> It would be nice to replicate tables from one namespace to another namespace.
> Example:
> Master cluster, namespace=default, table=bar
> Slave cluster, namespace=dr, table=bar
> Replication happens in class ReplicationSink:
>   public void replicateEntries(List<WALEntry> entries, final CellScanner 
> cells, ...) {
> ...
> TableName table = 
> TableName.valueOf(entry.getKey().getTableName().toByteArray());
> ...
> addToHashMultiMap(rowMap, table, clusterIds, m);
> ...
> for (Entry<TableName, Map<List<UUID>, List<Row>>> entry : 
> rowMap.entrySet()) {
>   batch(entry.getKey(), entry.getValue().values());
> }
>}
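
The table-name redirection under discussion could be sketched as below. This is a hypothetical helper for illustration only: the class name, the "srcNs -> dstNs" mapping, and the "bare name means the default namespace" convention are assumptions, not the actual ReplicationEndpoint API.

```java
import java.util.Objects;

// Sketch: rewrite a fully-qualified table name ("namespace:table") from a
// source namespace on the master cluster to a target namespace on the
// slave cluster, e.g. "default:bar" -> "dr:bar".
class NamespaceRemapper {
    private final String sourceNs;
    private final String targetNs;

    NamespaceRemapper(String sourceNs, String targetNs) {
        this.sourceNs = Objects.requireNonNull(sourceNs);
        this.targetNs = Objects.requireNonNull(targetNs);
    }

    // A bare table name ("bar") is treated as living in the "default" namespace.
    String remap(String tableName) {
        int idx = tableName.indexOf(':');
        String ns = idx < 0 ? "default" : tableName.substring(0, idx);
        String table = idx < 0 ? tableName : tableName.substring(idx + 1);
        return ns.equals(sourceNs) ? targetNs + ":" + table : tableName;
    }
}
```

A subclass of HBaseInterClusterReplicationEndpoint, as proposed above, would apply such a mapping to each entry's table name before shipping the batch.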



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18004) getRegionLocations needs to be called once in ScannerCallableWithReplicas#call()

2017-06-15 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16050970#comment-16050970
 ] 

huaxiang sun commented on HBASE-18004:
--

Thanks Stack. I think it is minor, as in most cases getRegionLocations() 
is called against the local meta cache, :)

Also found that the latest master has a compile error; not sure if it has been 
taken care of.

{code}
diff --git 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/SplitTableRegionProcedure.java
 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/SplitTableRegionProcedure.java
index ff7e60f..219b67b 100644
--- 
a/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/SplitTableRegionProcedure.java
+++ 
b/hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/SplitTableRegionProcedure.java
@@ -536,13 +536,13 @@ public class SplitTableRegionProcedure
 final TableDescriptor htd = 
env.getMasterServices().getTableDescriptors().get(getTableName());
for (Map.Entry<String, Collection<StoreFileInfo>> e: files.entrySet()) {
   byte [] familyName = Bytes.toBytes(e.getKey());
-  final HColumnDescriptor hcd = htd.getFamily(familyName);
+  final ColumnFamilyDescriptor hcd = htd.getColumnFamily(familyName);
   final Collection<StoreFileInfo> storeFiles = e.getValue();
   if (storeFiles != null && storeFiles.size() > 0) {
 final CacheConfig cacheConf = new CacheConfig(conf, hcd);
 for (StoreFileInfo storeFileInfo: storeFiles) {
   StoreFileSplitter sfs =
-  new StoreFileSplitter(regionFs, family.getBytes(), new 
HStoreFile(mfs.getFileSystem(),
+  new StoreFileSplitter(regionFs, familyName, new 
HStoreFile(mfs.getFileSystem(),
   storeFileInfo, conf, cacheConf, hcd.getBloomFilterType(), 
true));
   futures.add(threadPool.submit(sfs));
 }
hsun-MBP:hbase hsun$ 

{code}

> getRegionLocations  needs to be called once in 
> ScannerCallableWithReplicas#call()
> -
>
> Key: HBASE-18004
> URL: https://issues.apache.org/jira/browse/HBASE-18004
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
> Attachments: HBASE-18004-master-001.patch, 
> HBASE-18004-master-002.patch
>
>
> Look at this line,
> https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ScannerCallableWithReplicas.java#L145
> It calls getRegionLocations() to get the primary region's locations. Its 
> purpose is to figure out the table's region replication. Since the table's 
> region replication won't change until the table is disabled, it is safe to 
> cache this region replication.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18220) Compaction scanners need not reopen storefile scanners while trying to switch over from pread to stream

2017-06-15 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16050967#comment-16050967
 ] 

stack commented on HBASE-18220:
---

[~ram_krish]

nit: The new param should be in ScanInfo rather than as a new parameter on the 
constructor?

Otherwise +1 Can fix above on commit.



> Compaction scanners need not reopen storefile scanners while trying to switch 
> over from pread to stream
> ---
>
> Key: HBASE-18220
> URL: https://issues.apache.org/jira/browse/HBASE-18220
> Project: HBase
>  Issue Type: Sub-task
>  Components: Compaction
>Affects Versions: 2.0.0, 3.0.0, 2.0.0-alpha-1
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0, 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18220.patch
>
>
> We try to switch over to a stream scanner if we have read more than a certain 
> number of bytes. In the case of compaction we already have stream-based 
> scanners, but on calling shipped() we close and reopen the scanners again, 
> which is unwanted. 
> [~Apache9]
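
The switch-over being fixed can be modeled roughly like this. The threshold value and all names are assumptions for illustration, not the actual StoreScanner implementation; the fix amounts to the compaction guard in onRead.

```java
// Sketch: user scans start with pread and reopen as stream scanners past a
// bytes-read threshold; compaction scanners are stream-based from the start,
// so reopening them on shipped() is wasted work.
class ScannerSwitchSketch {
    static final long SWITCH_THRESHOLD_BYTES = 4L * 1024 * 1024;  // assumed threshold

    final boolean isCompaction;
    boolean usingStream;
    long bytesRead;
    int reopenCount;  // counts close-and-reopen cycles of storefile scanners

    ScannerSwitchSketch(boolean isCompaction) {
        this.isCompaction = isCompaction;
        // Compactions open stream scanners up front.
        this.usingStream = isCompaction;
    }

    void onRead(long bytes) {
        bytesRead += bytes;
        // Only scans that actually started with pread should pay the
        // reopen cost; compaction scanners skip it entirely.
        if (!usingStream && !isCompaction && bytesRead > SWITCH_THRESHOLD_BYTES) {
            reopenCount++;
            usingStream = true;
        }
    }
}
```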



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18223) Track the effort to improve/bug fix read replica feature

2017-06-15 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16050964#comment-16050964
 ] 

stack commented on HBASE-18223:
---

This is great [~huaxiang]. Is there other stuff needed to make this feature 
more robust, boss?

> Track the effort to improve/bug fix read replica feature
> 
>
> Key: HBASE-18223
> URL: https://issues.apache.org/jira/browse/HBASE-18223
> Project: HBase
>  Issue Type: Task
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>
> During hbasecon 2017, a group of people met and agreed to collaborate on the 
> effort to improve and bug-fix the read replica feature so users can enable 
> this feature in their clusters. This jira is created to track jiras known to 
> be related to the read replica feature.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18004) getRegionLocations needs to be called once in ScannerCallableWithReplicas#call()

2017-06-15 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16050960#comment-16050960
 ] 

stack commented on HBASE-18004:
---

Patch looks good [~huaxiang]. This is just a minor optimization?

> getRegionLocations  needs to be called once in 
> ScannerCallableWithReplicas#call()
> -
>
> Key: HBASE-18004
> URL: https://issues.apache.org/jira/browse/HBASE-18004
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
> Attachments: HBASE-18004-master-001.patch, 
> HBASE-18004-master-002.patch
>
>
> Look at this line,
> https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ScannerCallableWithReplicas.java#L145
> It calls getRegionLocations() to get the primary region's locations. It's 
> usage is to figure out table's region replications. Since table's region 
> replication wont be changed until the table is disabled. It is safe to cache 
> this region replication.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-17752) Update reporting RPCs/Shell commands to break out space utilization by snapshot

2017-06-15 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-17752:
---
Attachment: HBASE-17752.003.patch

Reattaching 003 as it failed the run due to master itself being broken.

> Update reporting RPCs/Shell commands to break out space utilization by 
> snapshot
> ---
>
> Key: HBASE-17752
> URL: https://issues.apache.org/jira/browse/HBASE-17752
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Josh Elser
>Assignee: Josh Elser
> Attachments: HBASE-17752.001.patch, HBASE-17752.002.patch, 
> HBASE-17752.003.patch
>
>
> For administrators running HBase with space quotas, it is useful to provide a 
> breakdown of the utilization of a table. For example, it may be non-intuitive 
> that a table's utilization is primarily made up of snapshots. We should 
> provide a new command or modify existing commands such that an admin can see 
> the utilization for a table/ns:
> e.g.
> {noformat}
> table1:   17GB
>   resident:   10GB
>   snapshot_a: 5GB
>   snapshot_b: 2GB
> {noformat}
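
The arithmetic behind the breakdown above is simply resident size plus the size attributed to each snapshot. A minimal illustration, with names and numbers taken from the example (units elided):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: a table's reported utilization is its resident data plus the
// space attributed to each of its snapshots.
class SnapshotBreakdown {
    static long total(long resident, Map<String, Long> snapshotSizes) {
        return resident + snapshotSizes.values().stream()
            .mapToLong(Long::longValue)
            .sum();
    }
}
```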



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-17752) Update reporting RPCs/Shell commands to break out space utilization by snapshot

2017-06-15 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-17752:
---
Attachment: (was: HBASE-17752.003.patch)

> Update reporting RPCs/Shell commands to break out space utilization by 
> snapshot
> ---
>
> Key: HBASE-17752
> URL: https://issues.apache.org/jira/browse/HBASE-17752
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Josh Elser
>Assignee: Josh Elser
> Attachments: HBASE-17752.001.patch, HBASE-17752.002.patch, 
> HBASE-17752.003.patch
>
>
> For administrators running HBase with space quotas, it is useful to provide a 
> breakdown of the utilization of a table. For example, it may be non-intuitive 
> that a table's utilization is primarily made up of snapshots. We should 
> provide a new command or modify existing commands such that an admin can see 
> the utilization for a table/ns:
> e.g.
> {noformat}
> table1:   17GB
>   resident:   10GB
>   snapshot_a: 5GB
>   snapshot_b: 2GB
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18212) In Standalone mode with local filesystem HBase logs Warning message:Failed to invoke 'unbuffer' method in class class org.apache.hadoop.fs.FSDataInputStream

2017-06-15 Thread Umesh Agashe (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16050946#comment-16050946
 ] 

Umesh Agashe commented on HBASE-18212:
--

Does the log level depend on the filesystem being used? Warning for HDFS, but 
no log/trace for the local filesystem?

> In Standalone mode with local filesystem HBase logs Warning message:Failed to 
> invoke 'unbuffer' method in class class org.apache.hadoop.fs.FSDataInputStream
> 
>
> Key: HBASE-18212
> URL: https://issues.apache.org/jira/browse/HBASE-18212
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Umesh Agashe
>
> New users may get nervous after seeing following warning level log messages 
> (considering new users will most likely run HBase in Standalone mode first):
> {code}
> WARN  [MemStoreFlusher.1] io.FSDataInputStreamWrapper: Failed to invoke 
> 'unbuffer' method in class class org.apache.hadoop.fs.FSDataInputStream . So 
> there may be a TCP socket connection left open in CLOSE_WAIT state.
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-17752) Update reporting RPCs/Shell commands to break out space utilization by snapshot

2017-06-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16050921#comment-16050921
 ] 

Hadoop QA commented on HBASE-17752:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 39s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} rubocop {color} | {color:blue} 0m 0s 
{color} | {color:blue} rubocop was not available. {color} |
| {color:blue}0{color} | {color:blue} ruby-lint {color} | {color:blue} 0m 0s 
{color} | {color:blue} Ruby-lint was not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 1m 35s 
{color} | {color:red} root in master failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 24s 
{color} | {color:red} hbase-server in master failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
48s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
56s {color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 21s 
{color} | {color:red} hbase-server in master failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 15s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 24s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 23s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 23s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
5s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
34s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 1m 7s 
{color} | {color:red} The patch causes 20 errors with Hadoop v2.6.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 2m 13s 
{color} | {color:red} The patch causes 20 errors with Hadoop v2.6.2. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 3m 22s 
{color} | {color:red} The patch causes 20 errors with Hadoop v2.6.3. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 4m 28s 
{color} | {color:red} The patch causes 20 errors with Hadoop v2.6.4. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 5m 35s 
{color} | {color:red} The patch causes 20 errors with Hadoop v2.6.5. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 6m 40s 
{color} | {color:red} The patch causes 20 errors with Hadoop v2.7.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 7m 48s 
{color} | {color:red} The patch causes 20 errors with Hadoop v2.7.2. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 8m 55s 
{color} | {color:red} The patch causes 20 errors with Hadoop v2.7.3. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 10m 1s 
{color} | {color:red} The patch causes 20 errors with Hadoop v3.0.0-alpha3. 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 21s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} hbase-client generated 0 new + 1 unchanged - 1 fixed = 
1 total (was 2) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 

[jira] [Commented] (HBASE-18164) Much faster locality cost function and candidate generator

2017-06-15 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16050914#comment-16050914
 ] 

Sean Busbey commented on HBASE-18164:
-

I think this is ready to merge.

bq. Interesting, I hadn't realized that the HDFS blocks are cached in the 
RegionLocationFinder. I will benchmark the code tomorrow with/without the 
RegionLocationFinder to see if it was adding latency.

Any follow up needed here?

> Much faster locality cost function and candidate generator
> --
>
> Key: HBASE-18164
> URL: https://issues.apache.org/jira/browse/HBASE-18164
> Project: HBase
>  Issue Type: Improvement
>  Components: Balancer
>Reporter: Kahlil Oppenheimer
>Assignee: Kahlil Oppenheimer
>Priority: Critical
> Attachments: HBASE-18164-00.patch, HBASE-18164-01.patch, 
> HBASE-18164-02.patch
>
>
> We noticed that the stochastic load balancer was not scaling well with 
> cluster size. That is to say, on our smaller clusters (~17 tables, ~12 
> region servers, ~5k regions), the balancer considers ~100,000 cluster 
> configurations in 60s per balancer run, but only ~5,000 per 60s on our bigger 
> clusters (~82 tables, ~160 region servers, ~13k regions).
> Because of this, our bigger clusters are not able to converge on balance as 
> quickly for things like table skew, region load, etc. because the balancer 
> does not have enough time to "think".
> We have re-written the locality cost function to be incremental, meaning it 
> only recomputes cost based on the most recent region move proposed by the 
> balancer, rather than recomputing the cost across all regions/servers every 
> iteration.
> Further, we also cache the locality of every region on every server at the 
> beginning of the balancer's execution for both the LocalityBasedCostFunction 
> and the LocalityCandidateGenerator to reference. This way, they need not 
> collect all HDFS blocks of every region at each iteration of the balancer.
> The changes have been running in all 6 of our production clusters and all 4 
> QA clusters without issue. The speed improvements we noticed are massive. Our 
> big clusters now consider 20x more cluster configurations.
> One design decision I made is to consider locality cost as the difference 
> between the best locality that is possible given the current cluster state, 
> and the currently measured locality. The old locality computation would 
> measure the locality cost as the difference from the current locality and 
> 100% locality, but this new computation instead takes the difference between 
> the current locality for a given region and the best locality for that region 
> in the cluster.
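The incremental recomputation described above can be sketched in a few lines. This is an illustrative Python sketch of the idea only, not the actual Java StochasticLoadBalancer patch; the class and method names are invented. Cost is kept as a running sum of (best achievable locality - current locality) per region, updated in O(1) per proposed move instead of rescanning every region and server.

```python
# Illustrative sketch (invented names): incremental locality cost that only
# recomputes the contribution of the region touched by the latest move,
# using a precomputed locality[region][server] map cached up front.
class IncrementalLocalityCost:
    def __init__(self, locality):
        # locality: dict region -> dict server -> fraction of blocks local
        self.locality = locality
        # Best achievable locality for each region given current cluster state.
        self.best = {r: max(per_server.values())
                     for r, per_server in locality.items()}
        self.placement = {}   # region -> server it currently lives on
        self.total = 0.0      # sum over regions of (best - current) locality

    def place(self, region, server):
        """Assign (or move) a region, updating the total cost in O(1)."""
        cur = self.locality[region].get(server, 0.0)
        if region in self.placement:
            # Remove the region's old contribution before adding the new one.
            old = self.locality[region].get(self.placement[region], 0.0)
            self.total -= self.best[region] - old
        self.placement[region] = server
        self.total += self.best[region] - cur

    def cost(self):
        return self.total
```

Because a proposed move changes only the moved region's term, the balancer can evaluate far more candidate configurations per run, which is the speedup the comment describes.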



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HBASE-18223) Track the effort to improve/bug fix read replica feature

2017-06-15 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16050911#comment-16050911
 ] 

huaxiang sun edited comment on HBASE-18223 at 6/15/17 6:26 PM:
---

HBASE-18004 is an improvement to reduce calls to meta. It is not specific to 
read replicas, but it is related, so I am linking it here as well.


was (Author: huaxiang):
This is an improvement to reduce call to meta, it is not specific to read 
replica, however, it is related so link it here as well.

> Track the effort to improve/bug fix read replica feature
> 
>
> Key: HBASE-18223
> URL: https://issues.apache.org/jira/browse/HBASE-18223
> Project: HBase
>  Issue Type: Task
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>
> During the hbasecon 2017, a group of people met and agreed to collaborate on 
> the effort to improve and bug-fix the read replica feature so users can enable 
> it in their clusters. This jira is created to track jiras known to be related 
> to the read replica feature.





[jira] [Commented] (HBASE-18223) Track the effort to improve/bug fix read replica feature

2017-06-15 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16050911#comment-16050911
 ] 

huaxiang sun commented on HBASE-18223:
--

This is an improvement to reduce calls to meta. It is not specific to read 
replicas, but it is related, so I am linking it here as well.

> Track the effort to improve/bug fix read replica feature
> 
>
> Key: HBASE-18223
> URL: https://issues.apache.org/jira/browse/HBASE-18223
> Project: HBase
>  Issue Type: Task
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>
> During the hbasecon 2017, a group of people met and agreed to collaborate on 
> the effort to improve and bug-fix the read replica feature so users can enable 
> it in their clusters. This jira is created to track jiras known to be related 
> to the read replica feature.





[jira] [Commented] (HBASE-9393) Hbase does not closing a closed socket resulting in many CLOSE_WAIT

2017-06-15 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16050909#comment-16050909
 ] 

Sean Busbey commented on HBASE-9393:


I agree, good for backport. [~mantonov] any concerns for branch-1.3?

> Hbase does not closing a closed socket resulting in many CLOSE_WAIT 
> 
>
> Key: HBASE-9393
> URL: https://issues.apache.org/jira/browse/HBASE-9393
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.2, 0.98.0, 1.0.1.1, 1.1.2
> Environment: Centos 6.4 - 7 regionservers/datanodes, 8 TB per node, 
> 7279 regions
>Reporter: Avi Zrachya
>Assignee: Ashish Singhi
>Priority: Critical
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-9393-branch-1.patch, HBASE-9393.patch, 
> HBASE-9393.v10.patch, HBASE-9393.v11.patch, HBASE-9393.v12.patch, 
> HBASE-9393.v13.patch, HBASE-9393.v14.patch, HBASE-9393.v15.patch, 
> HBASE-9393.v15.patch, HBASE-9393.v16.patch, HBASE-9393.v16.patch, 
> HBASE-9393.v17.patch, HBASE-9393.v18.patch, HBASE-9393.v1.patch, 
> HBASE-9393.v2.patch, HBASE-9393.v3.patch, HBASE-9393.v4.patch, 
> HBASE-9393.v5.patch, HBASE-9393.v5.patch, HBASE-9393.v5.patch, 
> HBASE-9393.v6.patch, HBASE-9393.v6.patch, HBASE-9393.v6.patch, 
> HBASE-9393.v7.patch, HBASE-9393.v8.patch, HBASE-9393.v9.patch
>
>
> HBase does not close a dead connection with the datanode.
> This results in over 60K sockets in CLOSE_WAIT, and at some point HBase cannot 
> connect to the datanode because there are too many mapped sockets from one 
> host to another on the same port.
> The example below shows a low CLOSE_WAIT count because we had to restart 
> HBase to solve the problem; over time it will increase to 60-100K sockets 
> in CLOSE_WAIT.
> [root@hd2-region3 ~]# netstat -nap |grep CLOSE_WAIT |grep 21592 |wc -l
> 13156
> [root@hd2-region3 ~]# ps -ef |grep 21592
> root 17255 17219  0 12:26 pts/000:00:00 grep 21592
> hbase21592 1 17 Aug29 ?03:29:06 
> /usr/java/jdk1.6.0_26/bin/java -XX:OnOutOfMemoryError=kill -9 %p -Xmx8000m 
> -ea -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode 
> -Dhbase.log.dir=/var/log/hbase 
> -Dhbase.log.file=hbase-hbase-regionserver-hd2-region3.swnet.corp.log ...





[jira] [Commented] (HBASE-18004) getRegionLocations needs to be called once in ScannerCallableWithReplicas#call()

2017-06-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16050878#comment-16050878
 ] 

Hadoop QA commented on HBASE-18004:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 49s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 30s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 1m 54s 
{color} | {color:red} root in master failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 27s 
{color} | {color:red} hbase-server in master failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
18s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
39s {color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 35s 
{color} | {color:red} hbase-server in master failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 14s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 30s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 31s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 31s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 1m 49s 
{color} | {color:red} The patch causes 20 errors with Hadoop v2.6.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 3m 12s 
{color} | {color:red} The patch causes 20 errors with Hadoop v2.6.2. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 4m 36s 
{color} | {color:red} The patch causes 20 errors with Hadoop v2.6.3. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 6m 0s 
{color} | {color:red} The patch causes 20 errors with Hadoop v2.6.4. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 7m 26s 
{color} | {color:red} The patch causes 20 errors with Hadoop v2.6.5. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 9m 2s 
{color} | {color:red} The patch causes 20 errors with Hadoop v2.7.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 10m 42s 
{color} | {color:red} The patch causes 20 errors with Hadoop v2.7.2. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 12m 24s 
{color} | {color:red} The patch causes 20 errors with Hadoop v2.7.3. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 14m 2s 
{color} | {color:red} The patch causes 20 errors with Hadoop v3.0.0-alpha3. 
{color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 37s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} hbase-client generated 0 new + 1 unchanged - 1 fixed = 
1 total (was 2) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 

[jira] [Updated] (HBASE-17752) Update reporting RPCs/Shell commands to break out space utilization by snapshot

2017-06-15 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-17752:
---
Attachment: HBASE-17752.003.patch

.003 Fix that pesky compilation problem.

> Update reporting RPCs/Shell commands to break out space utilization by 
> snapshot
> ---
>
> Key: HBASE-17752
> URL: https://issues.apache.org/jira/browse/HBASE-17752
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Josh Elser
>Assignee: Josh Elser
> Attachments: HBASE-17752.001.patch, HBASE-17752.002.patch, 
> HBASE-17752.003.patch
>
>
> For administrators running HBase with space quotas, it is useful to provide a 
> breakdown of the utilization of a table. For example, it may be non-intuitive 
> that a table's utilization is primarily made up of snapshots. We should 
> provide a new command or modify existing commands such that an admin can see 
> the utilization for a table/ns:
> e.g.
> {noformat}
> table1:   17GB
>   resident:   10GB
>   snapshot_a: 5GB
>   snapshot_b: 2GB
> {noformat}
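A hypothetical sketch of producing the breakdown shape shown above. The helper name and the GB rounding are assumptions for illustration, not the patch's actual RPC/shell output code:

```python
# Hypothetical helper (not HBase's actual code): render a table's space
# utilization broken out by resident data and per-snapshot usage, matching
# the report shape in the issue description.
def utilization_report(table, resident_bytes, snapshot_bytes):
    gib = 2 ** 30
    total = resident_bytes + sum(snapshot_bytes.values())
    lines = [f"{table}: {total // gib}GB",
             f"  resident: {resident_bytes // gib}GB"]
    for name, size in sorted(snapshot_bytes.items()):
        lines.append(f"  {name}: {size // gib}GB")
    return "\n".join(lines)

print(utilization_report("table1", 10 * 2 ** 30,
                         {"snapshot_a": 5 * 2 ** 30, "snapshot_b": 2 * 2 ** 30}))
```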





[jira] [Commented] (HBASE-18223) Track the effort to improve/bug fix read replica feature

2017-06-15 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16050846#comment-16050846
 ] 

huaxiang sun commented on HBASE-18223:
--

I linked the list of jiras based on the discussion so far.

> Track the effort to improve/bug fix read replica feature
> 
>
> Key: HBASE-18223
> URL: https://issues.apache.org/jira/browse/HBASE-18223
> Project: HBase
>  Issue Type: Task
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>
> During the hbasecon 2017, a group of people met and agreed to collaborate on 
> the effort to improve and bug-fix the read replica feature so users can enable 
> it in their clusters. This jira is created to track jiras known to be related 
> to the read replica feature.





[jira] [Created] (HBASE-18223) Track the effort to improve/bug fix read replica feature

2017-06-15 Thread huaxiang sun (JIRA)
huaxiang sun created HBASE-18223:


 Summary: Track the effort to improve/bug fix read replica feature
 Key: HBASE-18223
 URL: https://issues.apache.org/jira/browse/HBASE-18223
 Project: HBase
  Issue Type: Task
  Components: Client
Affects Versions: 2.0.0
Reporter: huaxiang sun


During the hbasecon 2017, a group of people met and agreed to collaborate on the 
effort to improve and bug-fix the read replica feature so users can enable it 
in their clusters. This jira is created to track jiras known to be related to 
the read replica feature.





[jira] [Commented] (HBASE-18166) [AMv2] We are splitting already-split files

2017-06-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16050833#comment-16050833
 ] 

Hudson commented on HBASE-18166:


FAILURE: Integrated in Jenkins build HBase-2.0 #48 (See 
[https://builds.apache.org/job/HBase-2.0/48/])
HBASE-18166 [AMv2] We are splitting already-split files v2 Address (stack: rev 
c02a1421437b65c127ae1e985edbd507b0d1696b)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/assignment/SplitTableRegionProcedure.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/IncreasingToUpperBoundRegionSplitPolicy.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileInfo.java


> [AMv2] We are splitting already-split files
> ---
>
> Key: HBASE-18166
> URL: https://issues.apache.org/jira/browse/HBASE-18166
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0
>
> Attachments: HBASE-18166.master.001.patch, 
> HBASE-18166.master.002.patch
>
>
> Interesting issue. The below adds a lag cleaning up files after a compaction 
> in case of on-going Scanners (for read replicas/offheap).
> HBASE-14970 Backport HBASE-13082 and its sub-jira to branch-1 - recommit (Ram)
> What the lag means is that now that split is run from the HMaster in master 
> branch, when it goes to get a listing of the files to split, it can pick up 
> files that are for archiving but that have not been archived yet.  When it 
> does, it goes ahead and splits them... making references of references.
> It's a mess.
> I added asking the Region if it is splittable a while back. The Master calls 
> this from SplitTableRegionProcedure during preparation. If the RegionServer 
> asked for the split, it is sort of redundant work given the RS asks itself 
> whether any references remain; if any, it'll wait before asking for a split. 
> But if a user/client asks, then this isSplittable over RPC comes in handy.
> I was thinking that isSplittable could return a list of files 
> Or, easier, given we know a region is splittable by the time we go to split 
> the files, then I think master-side we can just skip any references found, 
> presuming they are ready-for-archive.
> Will be back with a patch. Want to test on a cluster first (the side-effect is 
> that regions are offline because the file at the end of the reference to a 
> reference is removed ... and so the open fails).
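The master-side skip described above can be sketched as a simple filter. This is an illustrative Python sketch, not the actual StoreFileInfo/SplitTableRegionProcedure change, and the "hfile.parentRegion" reference-file naming below is an assumption for illustration:

```python
# Illustrative sketch: when the master lists a region's store files to split,
# drop reference files (already-split halves awaiting archival) so it never
# creates references of references.
def is_reference(filename):
    # Assumed naming convention for illustration: a reference file looks
    # like "<hfile>.<parentRegion>", i.e. exactly one dot with text on
    # both sides; a plain hfile name has no dot.
    parts = filename.split(".")
    return len(parts) == 2 and all(parts)

def files_to_split(store_files):
    # Keep only plain hfiles; skip anything that is itself a reference,
    # presuming it is awaiting archival after the parent's earlier split.
    return [f for f in store_files if not is_reference(f)]
```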





[jira] [Commented] (HBASE-17678) FilterList with MUST_PASS_ONE may lead to redundant cells returned

2017-06-15 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16050826#comment-16050826
 ] 

Sean Busbey commented on HBASE-17678:
-

yeah, I'd agree this should go into the maintenance releases.

> FilterList with MUST_PASS_ONE may lead to redundant cells returned
> --
>
> Key: HBASE-17678
> URL: https://issues.apache.org/jira/browse/HBASE-17678
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 2.0.0, 1.3.0, 1.2.1
> Environment: RedHat 7.x
>Reporter: Jason Tokayer
>Assignee: Zheng Hu
> Attachments: HBASE-17678.addendum.patch, HBASE-17678.addendum.patch, 
> HBASE-17678.branch-1.1.v1.patch, HBASE-17678.branch-1.1.v2.patch, 
> HBASE-17678.branch-1.1.v2.patch, HBASE-17678.branch-1.v1.patch, 
> HBASE-17678.branch-1.v1.patch, HBASE-17678.branch-1.v2.patch, 
> HBASE-17678.branch-1.v2.patch, HBASE-17678.v1.patch, 
> HBASE-17678.v1.rough.patch, HBASE-17678.v2.patch, HBASE-17678.v3.patch, 
> HBASE-17678.v4.patch, HBASE-17678.v4.patch, HBASE-17678.v5.patch, 
> HBASE-17678.v6.patch, HBASE-17678.v7.patch, HBASE-17678.v7.patch, 
> TestColumnPaginationFilterDemo.java
>
>
> When combining ColumnPaginationFilter with a single-element filterList, 
> MUST_PASS_ONE and MUST_PASS_ALL give different results when there are 
> multiple cells with the same timestamp. This is unexpected since there is 
> only a single filter in the list, and I would expect that MUST_PASS_ALL and 
> MUST_PASS_ONE should only affect the behavior of the joined filter and not 
> the behavior of any one of the individual filters. If this is not a bug then 
> it would be nice if the documentation is updated to explain this nuanced 
> behavior.
> I know that there was a decision made in an earlier Hbase version to keep 
> multiple cells with the same timestamp. This is generally fine but presents 
> an issue when using the aforementioned filter combination.
> Steps to reproduce:
> In the shell create a table and insert some data:
> {code:none}
> create 'ns:tbl',{NAME => 'family',VERSIONS => 100}
> put 'ns:tbl','row','family:name','John',1
> put 'ns:tbl','row','family:name','Jane',1
> put 'ns:tbl','row','family:name','Gil',1
> put 'ns:tbl','row','family:name','Jane',1
> {code}
> Then, use a Scala client as:
> {code:none}
> import org.apache.hadoop.hbase.filter._
> import org.apache.hadoop.hbase.util.Bytes
> import org.apache.hadoop.hbase.client._
> import org.apache.hadoop.hbase.{CellUtil, HBaseConfiguration, TableName}
> import scala.collection.mutable._
> val config = HBaseConfiguration.create()
> config.set("hbase.zookeeper.quorum", "localhost")
> config.set("hbase.zookeeper.property.clientPort", "2181")
> val connection = ConnectionFactory.createConnection(config)
> val logicalOp = FilterList.Operator.MUST_PASS_ONE
> val limit = 1
> var resultsList = ListBuffer[String]()
> for (offset <- 0 to 20 by limit) {
>   val table = connection.getTable(TableName.valueOf("ns:tbl"))
>   val paginationFilter = new ColumnPaginationFilter(limit,offset)
>   val filterList: FilterList = new FilterList(logicalOp,paginationFilter)
>   println("@ filterList = "+filterList)
>   val results = table.get(new 
> Get(Bytes.toBytes("row")).setFilter(filterList))
>   val cells = results.rawCells()
>   if (cells != null) {
>   for (cell <- cells) {
> val value = new String(CellUtil.cloneValue(cell))
> val qualifier = new String(CellUtil.cloneQualifier(cell))
> val family = new String(CellUtil.cloneFamily(cell))
> val result = "OFFSET = "+offset+":"+family + "," + qualifier 
> + "," + value + "," + cell.getTimestamp()
> resultsList.append(result)
>   }
>   }
> }
> resultsList.foreach(println)
> {code}
> Here are the results for different limit and logicalOp settings:
> {code:none}
> Limit = 1 & logicalOp = MUST_PASS_ALL:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> Limit = 1 & logicalOp = MUST_PASS_ONE:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> OFFSET = 1:family,name,Gil,1
> OFFSET = 2:family,name,Jane,1
> OFFSET = 3:family,name,John,1
> Limit = 2 & logicalOp = MUST_PASS_ALL:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> Limit = 2 & logicalOp = MUST_PASS_ONE:
> scala> resultsList.foreach(println)
> OFFSET = 0:family,name,Jane,1
> OFFSET = 2:family,name,Jane,1
> {code}
> So, it seems that MUST_PASS_ALL gives the expected behavior, but 
> MUST_PASS_ONE does not. Furthermore, MUST_PASS_ONE seems to give only a 
> single 

[jira] [Resolved] (HBASE-18166) [AMv2] We are splitting already-split files

2017-06-15 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-18166.
---
Resolution: Fixed

Pushed branch-2 and master.

> [AMv2] We are splitting already-split files
> ---
>
> Key: HBASE-18166
> URL: https://issues.apache.org/jira/browse/HBASE-18166
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: stack
> Fix For: 2.0.0
>
> Attachments: HBASE-18166.master.001.patch, 
> HBASE-18166.master.002.patch
>
>
> Interesting issue. The below adds a lag cleaning up files after a compaction 
> in case of on-going Scanners (for read replicas/offheap).
> HBASE-14970 Backport HBASE-13082 and its sub-jira to branch-1 - recommit (Ram)
> What the lag means is that now that split is run from the HMaster in master 
> branch, when it goes to get a listing of the files to split, it can pick up 
> files that are for archiving but that have not been archived yet.  When it 
> does, it goes ahead and splits them... making references of references.
> It's a mess.
> I added asking the Region if it is splittable a while back. The Master calls 
> this from SplitTableRegionProcedure during preparation. If the RegionServer 
> asked for the split, it is sort of redundant work given the RS asks itself 
> whether any references remain; if any, it'll wait before asking for a split. 
> But if a user/client asks, then this isSplittable over RPC comes in handy.
> I was thinking that isSplittable could return a list of files 
> Or, easier, given we know a region is splittable by the time we go to split 
> the files, then I think master-side we can just skip any references found, 
> presuming they are ready-for-archive.
> Will be back with a patch. Want to test on a cluster first (the side-effect is 
> that regions are offline because the file at the end of the reference to a 
> reference is removed ... and so the open fails).





[jira] [Commented] (HBASE-18137) Replication gets stuck for empty WALs

2017-06-15 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16050813#comment-16050813
 ] 

Sean Busbey commented on HBASE-18137:
-

excellent update on the release note. thanks!

> Replication gets stuck for empty WALs
> -
>
> Key: HBASE-18137
> URL: https://issues.apache.org/jira/browse/HBASE-18137
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 1.3.1
>Reporter: Ashu Pachauri
>Assignee: Vincent Poon
>Priority: Critical
> Fix For: 2.0.0, 1.4.0, 1.3.2, 1.2.7
>
> Attachments: HBASE-18137.branch-1.3.v1.patch, 
> HBASE-18137.branch-1.3.v2.patch, HBASE-18137.branch-1.3.v3.patch, 
> HBASE-18137.branch-1.v1.patch, HBASE-18137.branch-1.v2.patch, 
> HBASE-18137.master.v1.patch
>
>
> Replication assumes that only the last WAL of a recovered queue can be empty. 
> But intermittent DFS issues may cause empty WALs to be created (without the 
> PWAL magic), and a WAL roll to happen without a regionserver crash. This 
> will cause recovered queues to have empty WALs in the middle, which causes 
> replication to get stuck:
> {code}
> TRACE regionserver.ReplicationSource: Opening log 
> WARN regionserver.ReplicationSource: - Got: 
> java.io.EOFException
>   at java.io.DataInputStream.readFully(DataInputStream.java:197)
>   at java.io.DataInputStream.readFully(DataInputStream.java:169)
>   at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1915)
>   at 
> org.apache.hadoop.io.SequenceFile$Reader.initialize(SequenceFile.java:1880)
>   at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1829)
>   at 
> org.apache.hadoop.io.SequenceFile$Reader.(SequenceFile.java:1843)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.(SequenceFileLogReader.java:70)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.reset(SequenceFileLogReader.java:168)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.initReader(SequenceFileLogReader.java:177)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.ReaderBase.init(ReaderBase.java:66)
>   at 
> org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:312)
>   at 
> org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:276)
>   at 
> org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:264)
>   at 
> org.apache.hadoop.hbase.wal.WALFactory.createReader(WALFactory.java:423)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationWALReaderManager.openReader(ReplicationWALReaderManager.java:70)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceWorkerThread.openReader(ReplicationSource.java:830)
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource$ReplicationSourceWorkerThread.run(ReplicationSource.java:572)
> {code}
> The WAL in question was completely empty but there were other WALs in the 
> recovered queue which were newer and non-empty.
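The failure mode above suggests the shape of the fix: tolerate empty WALs in the middle of a recovered queue instead of retrying the EOFException forever. A minimal Python sketch of that idea follows; the names are illustrative, not the actual ReplicationSource code:

```python
# Minimal sketch of the fix idea: a zero-length WAL that is *not* the last
# one in a recovered queue carries no edits and can be skipped, while the
# last WAL may legitimately be empty (it was being written when the RS died).
def next_readable_wal(queue, sizes):
    """Return the first WAL worth opening, skipping empty non-terminal WALs."""
    for i, wal in enumerate(queue):
        last = (i == len(queue) - 1)
        if sizes[wal] == 0 and not last:
            continue  # empty WAL in the middle of the queue: nothing to ship
        return wal
    return None
```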





[jira] [Comment Edited] (HBASE-18004) getRegionLocations needs to be called once in ScannerCallableWithReplicas#call()

2017-06-15 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16050805#comment-16050805
 ] 

huaxiang sun edited comment on HBASE-18004 at 6/15/17 5:21 PM:
---

Rebase based on the latest code and removed an unused import in HRegionServer 
to trigger the tests in Server module.


was (Author: huaxiang):
Rebase based on the latest code and removed an unused import in HRegionServer 
to trig the tests in Server module.

> getRegionLocations  needs to be called once in 
> ScannerCallableWithReplicas#call()
> -
>
> Key: HBASE-18004
> URL: https://issues.apache.org/jira/browse/HBASE-18004
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
> Attachments: HBASE-18004-master-001.patch, 
> HBASE-18004-master-002.patch
>
>
> Look at this line,
> https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ScannerCallableWithReplicas.java#L145
> It calls getRegionLocations() to get the primary region's locations. Its 
> purpose is to figure out the table's region replication. Since a table's region 
> replication won't change until the table is disabled, it is safe to cache 
> this region replication.
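The caching idea can be sketched as follows. This is an illustrative Python sketch under the assumption stated above (replication count is stable while the table is enabled); the names are invented and are not the ScannerCallableWithReplicas API:

```python
# Illustrative sketch (invented names): compute a table's region replication
# count once via the expensive locations lookup, then reuse the cached value
# instead of calling it on every scan.
class RegionReplicationCache:
    def __init__(self, fetch_locations):
        self._fetch = fetch_locations   # expensive call, e.g. a meta lookup
        self._cache = {}                # table -> replication count

    def replication(self, table):
        if table not in self._cache:
            # Number of replicas equals the number of locations for the
            # primary region; stable until the table is disabled.
            self._cache[table] = len(self._fetch(table))
        return self._cache[table]
```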





[jira] [Updated] (HBASE-18004) getRegionLocations needs to be called once in ScannerCallableWithReplicas#call()

2017-06-15 Thread huaxiang sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

huaxiang sun updated HBASE-18004:
-
Attachment: HBASE-18004-master-002.patch

Rebase based on the latest code and removed an unused import in HRegionServer 
to trigger the tests in Server module.

> getRegionLocations  needs to be called once in 
> ScannerCallableWithReplicas#call()
> -
>
> Key: HBASE-18004
> URL: https://issues.apache.org/jira/browse/HBASE-18004
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
>Priority: Minor
> Attachments: HBASE-18004-master-001.patch, 
> HBASE-18004-master-002.patch
>
>
> Look at this line,
> https://github.com/apache/hbase/blob/master/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ScannerCallableWithReplicas.java#L145
> It calls getRegionLocations() to get the primary region's locations. Its 
> purpose is to figure out the table's region replication. Since a table's region 
> replication won't change until the table is disabled, it is safe to cache 
> this region replication.





[jira] [Commented] (HBASE-18213) Add documentation about the new async client

2017-06-15 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16050794#comment-16050794
 ] 

ramkrishna.s.vasudevan commented on HBASE-18213:


I just read the doc and it explains very clearly how to use it. Great, +1.

> Add documentation about the new async client
> 
>
> Key: HBASE-18213
> URL: https://issues.apache.org/jira/browse/HBASE-18213
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client, documentation
>Affects Versions: 3.0.0, 2.0.0-alpha-1
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18213.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18213) Add documentation about the new async client

2017-06-15 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16050791#comment-16050791
 ] 

stack commented on HBASE-18213:
---

And yes, you are correct w/ your version target. Makes sense.

> Add documentation about the new async client
> 
>
> Key: HBASE-18213
> URL: https://issues.apache.org/jira/browse/HBASE-18213
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client, documentation
>Affects Versions: 3.0.0, 2.0.0-alpha-1
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18213.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18213) Add documentation about the new async client

2017-06-15 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16050767#comment-16050767
 ] 

stack commented on HBASE-18213:
---

+1 Excellent.

> Add documentation about the new async client
> 
>
> Key: HBASE-18213
> URL: https://issues.apache.org/jira/browse/HBASE-18213
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client, documentation
>Affects Versions: 3.0.0, 2.0.0-alpha-1
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18213.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-16415) Replication in different namespace

2017-06-15 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16050708#comment-16050708
 ] 

Guanghao Zhang commented on HBASE-16415:


I think you don't need to change the replication shipper thread. You only need 
to add a new ReplicationEndpoint and implement its replicate() method. You can 
embed the redirection information there. Thanks.
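That suggestion can be sketched as follows. `WalEntry` and `NamespaceMappingEndpoint` are simplified hypothetical stand-ins for HBase's `WAL.Entry` and `ReplicationEndpoint`; a real implementation would extend an HBase endpoint class (e.g. `HBaseInterClusterReplicationEndpoint`) and ship the rewritten entries to the peer, rather than returning them:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch: a custom endpoint whose replicate() rewrites the target namespace
// before shipping, so the shipper thread itself stays untouched.
public class NamespaceMappingEndpoint {
    public static final class WalEntry {
        public final String namespace;
        public final String table;
        public final byte[] edit;
        public WalEntry(String namespace, String table, byte[] edit) {
            this.namespace = namespace;
            this.table = table;
            this.edit = edit;
        }
    }

    private final Map<String, String> namespaceMap; // e.g. default -> dr

    public NamespaceMappingEndpoint(Map<String, String> namespaceMap) {
        this.namespaceMap = namespaceMap;
    }

    // The redirection lives entirely inside the endpoint: entries whose
    // namespace has a mapping are rewritten; all others pass through unchanged.
    public List<WalEntry> replicate(List<WalEntry> entries) {
        List<WalEntry> redirected = new ArrayList<>(entries.size());
        for (WalEntry e : entries) {
            String target = namespaceMap.getOrDefault(e.namespace, e.namespace);
            redirected.add(new WalEntry(target, e.table, e.edit));
        }
        return redirected;
    }
}
```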

> Replication in different namespace
> --
>
> Key: HBASE-16415
> URL: https://issues.apache.org/jira/browse/HBASE-16415
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Reporter: Christian Guegi
>Assignee: Jan Kunigk
>
> It would be nice to replicate tables from one namespace to another namespace.
> Example:
> Master cluster, namespace=default, table=bar
> Slave cluster, namespace=dr, table=bar
> Replication happens in class ReplicationSink:
>   public void replicateEntries(List<WALEntry> entries, final CellScanner 
> cells, ...){
> ...
> TableName table = 
> TableName.valueOf(entry.getKey().getTableName().toByteArray());
> ...
> addToHashMultiMap(rowMap, table, clusterIds, m);
> ...
> for (Entry<TableName, Map<List<UUID>, List<Row>>> entry : 
> rowMap.entrySet()) {
>   batch(entry.getKey(), entry.getValue().values());
> }
>}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18213) Add documentation about the new async client

2017-06-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16050692#comment-16050692
 ] 

Hadoop QA commented on HBASE-18213:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
37s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
28s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 14s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
24s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
29m 21s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha3. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 8s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 129m 11s 
{color} | {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
25s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 173m 28s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.coprocessor.TestRegionObserverInterface |
| Timed out junit tests | 
org.apache.hadoop.hbase.coprocessor.TestCoprocessorMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:757bf37 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12873115/HBASE-18213.patch |
| JIRA Issue | HBASE-18213 |
| Optional Tests |  asflicense  javac  javadoc  unit  |
| uname | Linux f746b957295a 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 8b36da1 |
| Default Java | 1.8.0_131 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7192/artifact/patchprocess/patch-unit-root.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/7192/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7192/testReport/ |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7192/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Add documentation about the new async client
> 
>
> Key: HBASE-18213
> URL: https://issues.apache.org/jira/browse/HBASE-18213
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client, documentation
>Affects Versions: 3.0.0, 2.0.0-alpha-1
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18213.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18220) Compaction scanners need not reopen storefile scanners while trying to switch over from pread to stream

2017-06-15 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16050473#comment-16050473
 ] 

Ted Yu commented on HBASE-18220:


lgtm

> Compaction scanners need not reopen storefile scanners while trying to switch 
> over from pread to stream
> ---
>
> Key: HBASE-18220
> URL: https://issues.apache.org/jira/browse/HBASE-18220
> Project: HBase
>  Issue Type: Sub-task
>  Components: Compaction
>Affects Versions: 2.0.0, 3.0.0, 2.0.0-alpha-1
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0, 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18220.patch
>
>
> We try to switch over to a stream scanner once we have read more than a certain 
> number of bytes. In the case of compaction we already have stream-based scanners 
> only, but on calling shipped() we again close and reopen the scanners, which is 
> unwanted.
> [~Apache9]
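The intended guard can be sketched with a toy class; `ReadTypeSwitch` and its counters are hypothetical and only model the switch condition, not the real StoreScanner logic:

```java
// Sketch: only a genuine PREAD -> STREAM switch should close and reopen the
// underlying store file scanners; compaction scanners already start in STREAM
// mode, so shipped() should be a no-op for them.
public class ReadTypeSwitch {
    public enum ReadType { PREAD, STREAM }

    private ReadType readType;
    private long bytesRead;
    private int reopens;
    private final long switchThreshold;

    public ReadTypeSwitch(ReadType initial, long switchThreshold) {
        this.readType = initial;
        this.switchThreshold = switchThreshold;
    }

    public void read(long bytes) {
        bytesRead += bytes;
    }

    // Called from shipped(): reopen only when still in pread mode and past the
    // threshold; a scanner that is already streaming is left untouched.
    public void maybeSwitchToStream() {
        if (readType == ReadType.PREAD && bytesRead > switchThreshold) {
            readType = ReadType.STREAM;
            reopens++; // real code would close and reopen the scanners here
        }
    }

    public int reopenCount() { return reopens; }
    public ReadType readType() { return readType; }
}
```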



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HBASE-16415) Replication in different namespace

2017-06-15 Thread Jan Kunigk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16050466#comment-16050466
 ] 

Jan Kunigk edited comment on HBASE-16415 at 6/15/17 1:23 PM:
-

Hi Guanghao, thanks for your feedback.
Yes, I agree it would be better to use a new ReplicationEndpoint.

The ReplicationEndpoint is passed into the run() method of ReplicationSource:
{code}
// get the WALEntryFilter from ReplicationEndpoint and add it to default 
filters
ArrayList<WALEntryFilter> filters = Lists.newArrayList(
  (WALEntryFilter)new SystemTableWALEntryFilter());
WALEntryFilter filterFromEndpoint = 
this.replicationEndpoint.getWALEntryfilter();
{code}

then, at the end of this method, an attempt to start a new shipper thread 
(i.e. ReplicationSourceWALReaderThread) is made:
```
  tryStartNewShipperThread(walGroupId, queue);
```
tryStartNewShipperThread() is invoking startNewWALReaderThread, which returns a 
ReplicationSourceWALReaderThread and also applies all filters to it:
```
   ChainWALEntryFilter readerFilter = new ChainWALEntryFilter(filters);

ReplicationSourceWALReaderThread walReader = new 
ReplicationSourceWALReaderThread(manager,
replicationQueueInfo, queue, startPosition, fs, conf, readerFilter, 
metrics);
```
I agree that embedding the redirection information into a new 
ReplicationEndpoint, just like we do with the filters, makes sense.
But the inspection of the individual WAL entries would still have to occur in 
the shipper threads themselves
(like
```
entry = filterEntry(entry);
entry = redirectEntry(entry);
```
)

Do you agree? Or am I missing something? 


was (Author: jan.kun...@gmail.com):
Hi Guanghao, thanks for your feedback.
Yes, I agree it would be better to use a new ReplicationEndpoint.

The ReplicationEndpoint is passed into the run() method of ReplicaitonSource:
```
// get the WALEntryFilter from ReplicationEndpoint and add it to default 
filters
ArrayList<WALEntryFilter> filters = Lists.newArrayList(
  (WALEntryFilter)new SystemTableWALEntryFilter());
WALEntryFilter filterFromEndpoint = 
this.replicationEndpoint.getWALEntryfilter();
```
then at the end of this method a new attempt for starting the shipper thread 
(i.e. ReplicationSourceWALReaderThread) is launched:
```
  tryStartNewShipperThread(walGroupId, queue);
```
tryStartNewShipperThread() is invoking startNewWALReaderThread, which returns a 
ReplicationSourceWALReaderThread and also applies all filters to it:
```
   ChainWALEntryFilter readerFilter = new ChainWALEntryFilter(filters);

ReplicationSourceWALReaderThread walReader = new 
ReplicationSourceWALReaderThread(manager,
replicationQueueInfo, queue, startPosition, fs, conf, readerFilter, 
metrics);
```
I agree, that embedding the redirections information into a new 
ReplicationEndpoint just like we do with the filters makes sense.
But, the inspection of the individual WAL entries would still have to occur in 
the Shipper Threads itself
(like
```
entry = filterEntry(entry);
entry = redirectEntry(entry);
```
)

Do you agree? Or am I missing something? 

> Replication in different namespace
> --
>
> Key: HBASE-16415
> URL: https://issues.apache.org/jira/browse/HBASE-16415
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Reporter: Christian Guegi
>Assignee: Jan Kunigk
>
> It would be nice to replicate tables from one namespace to another namespace.
> Example:
> Master cluster, namespace=default, table=bar
> Slave cluster, namespace=dr, table=bar
> Replication happens in class ReplicationSink:
>   public void replicateEntries(List<WALEntry> entries, final CellScanner 
> cells, ...){
> ...
> TableName table = 
> TableName.valueOf(entry.getKey().getTableName().toByteArray());
> ...
> addToHashMultiMap(rowMap, table, clusterIds, m);
> ...
> for (Entry<TableName, Map<List<UUID>, List<Row>>> entry : 
> rowMap.entrySet()) {
>   batch(entry.getKey(), entry.getValue().values());
> }
>}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HBASE-16415) Replication in different namespace

2017-06-15 Thread Jan Kunigk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16050466#comment-16050466
 ] 

Jan Kunigk edited comment on HBASE-16415 at 6/15/17 1:23 PM:
-

Hi Guanghao, thanks for your feedback.
Yes, I agree it would be better to use a new ReplicationEndpoint.

The ReplicationEndpoint is passed into the run() method of ReplicationSource:
{code}
// get the WALEntryFilter from ReplicationEndpoint and add it to default 
filters
ArrayList<WALEntryFilter> filters = Lists.newArrayList(
  (WALEntryFilter)new SystemTableWALEntryFilter());
WALEntryFilter filterFromEndpoint = 
this.replicationEndpoint.getWALEntryfilter();
{code}

then, at the end of this method, an attempt to start a new shipper thread 
(i.e. ReplicationSourceWALReaderThread) is made:
{code}
  tryStartNewShipperThread(walGroupId, queue);
{code}
tryStartNewShipperThread() is invoking startNewWALReaderThread, which returns a 
ReplicationSourceWALReaderThread and also applies all filters to it:
{code}
   ChainWALEntryFilter readerFilter = new ChainWALEntryFilter(filters);

ReplicationSourceWALReaderThread walReader = new 
ReplicationSourceWALReaderThread(manager,
replicationQueueInfo, queue, startPosition, fs, conf, readerFilter, 
metrics);
{code}
I agree that embedding the redirection information into a new 
ReplicationEndpoint, just like we do with the filters, makes sense.
But the inspection of the individual WAL entries would still have to occur in 
the shipper threads themselves
(like
{code}
entry = filterEntry(entry);
entry = redirectEntry(entry);
{code}
)

Do you agree? Or am I missing something? 


was (Author: jan.kun...@gmail.com):
Hi Guanghao, thanks for your feedback.
Yes, I agree it would be better to use a new ReplicationEndpoint.

The ReplicationEndpoint is passed into the run() method of ReplicaitonSource:
{code}
// get the WALEntryFilter from ReplicationEndpoint and add it to default 
filters
ArrayList<WALEntryFilter> filters = Lists.newArrayList(
  (WALEntryFilter)new SystemTableWALEntryFilter());
WALEntryFilter filterFromEndpoint = 
this.replicationEndpoint.getWALEntryfilter();
{code}

then at the end of this method a new attempt for starting the shipper thread 
(i.e. ReplicationSourceWALReaderThread) is launched:
```
  tryStartNewShipperThread(walGroupId, queue);
```
tryStartNewShipperThread() is invoking startNewWALReaderThread, which returns a 
ReplicationSourceWALReaderThread and also applies all filters to it:
```
   ChainWALEntryFilter readerFilter = new ChainWALEntryFilter(filters);

ReplicationSourceWALReaderThread walReader = new 
ReplicationSourceWALReaderThread(manager,
replicationQueueInfo, queue, startPosition, fs, conf, readerFilter, 
metrics);
```
I agree, that embedding the redirections information into a new 
ReplicationEndpoint just like we do with the filters makes sense.
But, the inspection of the individual WAL entries would still have to occur in 
the Shipper Threads itself
(like
```
entry = filterEntry(entry);
entry = redirectEntry(entry);
```
)

Do you agree? Or am I missing something? 

> Replication in different namespace
> --
>
> Key: HBASE-16415
> URL: https://issues.apache.org/jira/browse/HBASE-16415
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Reporter: Christian Guegi
>Assignee: Jan Kunigk
>
> It would be nice to replicate tables from one namespace to another namespace.
> Example:
> Master cluster, namespace=default, table=bar
> Slave cluster, namespace=dr, table=bar
> Replication happens in class ReplicationSink:
>   public void replicateEntries(List<WALEntry> entries, final CellScanner 
> cells, ...){
> ...
> TableName table = 
> TableName.valueOf(entry.getKey().getTableName().toByteArray());
> ...
> addToHashMultiMap(rowMap, table, clusterIds, m);
> ...
> for (Entry<TableName, Map<List<UUID>, List<Row>>> entry : 
> rowMap.entrySet()) {
>   batch(entry.getKey(), entry.getValue().values());
> }
>}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-16415) Replication in different namespace

2017-06-15 Thread Jan Kunigk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16050466#comment-16050466
 ] 

Jan Kunigk commented on HBASE-16415:


Hi Guanghao, thanks for your feedback.
Yes, I agree it would be better to use a new ReplicationEndpoint.

The ReplicationEndpoint is passed into the run() method of ReplicationSource:
```
// get the WALEntryFilter from ReplicationEndpoint and add it to default 
filters
ArrayList<WALEntryFilter> filters = Lists.newArrayList(
  (WALEntryFilter)new SystemTableWALEntryFilter());
WALEntryFilter filterFromEndpoint = 
this.replicationEndpoint.getWALEntryfilter();
```
then, at the end of this method, an attempt to start a new shipper thread 
(i.e. ReplicationSourceWALReaderThread) is made:
```
  tryStartNewShipperThread(walGroupId, queue);
```
tryStartNewShipperThread() is invoking startNewWALReaderThread, which returns a 
ReplicationSourceWALReaderThread and also applies all filters to it:
```
   ChainWALEntryFilter readerFilter = new ChainWALEntryFilter(filters);

ReplicationSourceWALReaderThread walReader = new 
ReplicationSourceWALReaderThread(manager,
replicationQueueInfo, queue, startPosition, fs, conf, readerFilter, 
metrics);
```
I agree that embedding the redirection information into a new 
ReplicationEndpoint, just like we do with the filters, makes sense.
But the inspection of the individual WAL entries would still have to occur in 
the shipper threads themselves
(like
```
entry = filterEntry(entry);
entry = redirectEntry(entry);
```
)

Do you agree? Or am I missing something? 
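The filter-then-redirect flow described here can be sketched with plain function objects. `EntryPipeline` and the string "entries" are hypothetical stand-ins for the real WALEntryFilter chain; the drop-on-null behavior mirrors HBase's ChainWALEntryFilter semantics:

```java
import java.util.List;
import java.util.function.UnaryOperator;

// Sketch: apply the filter chain first (any null drops the entry), then run
// the redirection step only on entries that survive the chain.
public class EntryPipeline {
    private final List<UnaryOperator<String>> filters;
    private final UnaryOperator<String> redirect;

    public EntryPipeline(List<UnaryOperator<String>> filters,
                         UnaryOperator<String> redirect) {
        this.filters = filters;
        this.redirect = redirect;
    }

    // Returns null when the entry is filtered out, otherwise the redirected entry.
    public String process(String entry) {
        for (UnaryOperator<String> f : filters) {
            entry = f.apply(entry);   // filterEntry(entry)
            if (entry == null) {
                return null;          // dropped by the chain
            }
        }
        return redirect.apply(entry); // redirectEntry(entry)
    }
}
```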

> Replication in different namespace
> --
>
> Key: HBASE-16415
> URL: https://issues.apache.org/jira/browse/HBASE-16415
> Project: HBase
>  Issue Type: New Feature
>  Components: Replication
>Reporter: Christian Guegi
>Assignee: Jan Kunigk
>
> It would be nice to replicate tables from one namespace to another namespace.
> Example:
> Master cluster, namespace=default, table=bar
> Slave cluster, namespace=dr, table=bar
> Replication happens in class ReplicationSink:
>   public void replicateEntries(List<WALEntry> entries, final CellScanner 
> cells, ...){
> ...
> TableName table = 
> TableName.valueOf(entry.getKey().getTableName().toByteArray());
> ...
> addToHashMultiMap(rowMap, table, clusterIds, m);
> ...
> for (Entry<TableName, Map<List<UUID>, List<Row>>> entry : 
> rowMap.entrySet()) {
>   batch(entry.getKey(), entry.getValue().values());
> }
>}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18220) Compaction scanners need not reopen storefile scanners while trying to switch over from pread to stream

2017-06-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16050460#comment-16050460
 ] 

Hadoop QA commented on HBASE-18220:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 27s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
55s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
50s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 5s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 46s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
31m 19s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha3. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 150m 20s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 196m 44s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hbase.security.access.TestCoprocessorWhitelistMasterObserver |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.03.0-ce Server=17.03.0-ce Image:yetus/hbase:757bf37 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12873076/HBASE-18220.patch |
| JIRA Issue | HBASE-18220 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux e0b59e129242 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 8b36da1 |
| Default Java | 1.8.0_131 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7191/artifact/patchprocess/patch-unit-hbase-server.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/7191/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7191/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/7191/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Compaction scanners need not reopen storefile scanners while trying to switch 
> over from pread to stream
> 

[jira] [Commented] (HBASE-18010) Connect CellChunkMap to be used for flattening in CompactingMemStore

2017-06-15 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16050455#comment-16050455
 ] 

Ted Yu commented on HBASE-18010:


Please add the 'hbase' group to the review request.

> Connect CellChunkMap to be used for flattening in CompactingMemStore
> 
>
> Key: HBASE-18010
> URL: https://issues.apache.org/jira/browse/HBASE-18010
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Attachments: HBASE-18010-V04.patch
>
>
> The CellChunkMap helps to create a new type of ImmutableSegment, where the 
> index (CellSet's delegatee) is going to be CellChunkMap. No big cells or 
> upserted cells are going to be supported here.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18213) Add documentation about the new async client

2017-06-15 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-18213:
--
Attachment: HBASE-18213.patch

Add documentation for the async client. [~zghaobac] I've also mentioned 
{{AsyncAdmin}} to warn that it is still under development and the API may 
change in the future. You can remove these words when {{AsyncAdmin}} is 
completely done and add your documentation there.

Thanks.

> Add documentation about the new async client
> 
>
> Key: HBASE-18213
> URL: https://issues.apache.org/jira/browse/HBASE-18213
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client, documentation
>Affects Versions: 3.0.0, 2.0.0-alpha-1
>Reporter: Duo Zhang
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18213.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18213) Add documentation about the new async client

2017-06-15 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-18213:
--
Assignee: Duo Zhang
  Status: Patch Available  (was: Open)

> Add documentation about the new async client
> 
>
> Key: HBASE-18213
> URL: https://issues.apache.org/jira/browse/HBASE-18213
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client, documentation
>Affects Versions: 3.0.0, 2.0.0-alpha-1
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18213.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18213) Add documentation about the new async client

2017-06-15 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-18213:
--
Affects Version/s: 2.0.0-alpha-1
Fix Version/s: 2.0.0-alpha-2

> Add documentation about the new async client
> 
>
> Key: HBASE-18213
> URL: https://issues.apache.org/jira/browse/HBASE-18213
> Project: HBase
>  Issue Type: Sub-task
>  Components: asyncclient, Client, documentation
>Affects Versions: 3.0.0, 2.0.0-alpha-1
>Reporter: Duo Zhang
> Fix For: 3.0.0, 2.0.0-alpha-2
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18200) Set hadoop check versions for branch-2 and branch-2.x in pre commit

2017-06-15 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-18200:
--
Fix Version/s: 2.0.0-alpha-2

> Set hadoop check versions for branch-2 and branch-2.x in pre commit
> ---
>
> Key: HBASE-18200
> URL: https://issues.apache.org/jira/browse/HBASE-18200
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: HBASE-18200.patch
>
>
> Now it will use the hadoop versions for branch-1.
> I do not know how to set the fix versions as the code will be committed to 
> master but the branch in trouble is branch-2...



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18209) Include httpclient / httpcore jars in build artifacts

2017-06-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16050394#comment-16050394
 ] 

Hudson commented on HBASE-18209:


FAILURE: Integrated in Jenkins build HBase-2.0 #46 (See 
[https://builds.apache.org/job/HBase-2.0/46/])
HBASE-18209 Include httpclient / httpcore jars in build artifacts (tedyu: rev 
299850ea70bbb86e2d4c8ef0b1cead2b67e079d8)
* (edit) hbase-server/pom.xml
* (edit) hbase-assembly/pom.xml


> Include httpclient / httpcore jars in build artifacts
> -
>
> Key: HBASE-18209
> URL: https://issues.apache.org/jira/browse/HBASE-18209
> Project: HBase
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 3.0.0, 2.0.0-alpha-2
>
> Attachments: 18209.v1.txt, 18209.v2.txt
>
>
> We need httpclient & httpcore jars to be present when rootdir is placed on 
> s3(a).
> Attempts to move to the fully shaded amazon-SDK JAR caused problems of its 
> own. (according to [~steve_l])
> Here are the versions we should use:
> 4.5.2
> 4.4.4
> Currently they are declared as test dependencies.
> This JIRA is to move to compile time dependency so that the corresponding 
> jars are bundled in lib directory.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

