[jira] [Updated] (HBASE-23337) Several modules missing in nexus for Apache HBase 2.2.2

2019-11-27 Thread Sean Busbey (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-23337:

Status: Patch Available  (was: Open)

Sure, here's my WIP. I haven't been able to test end-to-end yet because of 
trouble getting the gpg agent to work in the docker container, but things work 
as expected when I run manually.

patch.v0
  - uses the nexus-staging-maven-plugin when the asf-release profile is active. 
This replaces the default deploy plugin and takes care of properly closing or 
dropping the staged repository depending on build status. Since it requires the 
paid version of Nexus, it is tied to the ASF-specific release profile.
  - updates the release build to call deploy and parse the staged repo id out 
of the nexus-staging-maven-plugin output rather than calling the REST API 
ourselves.

I'm pretty sure there's some stuff for me to clean up in the release scripts 
still.

The pom changes will need to be on every branch.

> Several modules missing in nexus for Apache HBase 2.2.2
> ---
>
> Key: HBASE-23337
> URL: https://issues.apache.org/jira/browse/HBASE-23337
> Project: HBase
>  Issue Type: Bug
>  Components: build, community, scripts
>Affects Versions: 2.2.2
>Reporter: Chao
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HBASE-23337.0.patch
>
>
> The latest version of hbase-shaded-client is currently 2.2.1. It has been a 
> while since 2.2.2 release (2019/10/25). See: 
> [https://search.maven.org/search?q=hbase-shaded-client].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] chenxu14 commented on a change in pull request #868: HBASE-23296 Add CompositeBucketCache to support tiered BC

2019-11-27 Thread GitBox
chenxu14 commented on a change in pull request #868: HBASE-23296 Add 
CompositeBucketCache to support tiered BC
URL: https://github.com/apache/hbase/pull/868#discussion_r351621049
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CompositeBucketCache.java
 ##
 @@ -0,0 +1,40 @@
+/**
+ * Copyright The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.hadoop.hbase.io.hfile;
+
+import org.apache.hadoop.hbase.io.hfile.bucket.BucketCache;
+import org.apache.yetus.audience.InterfaceAudience;
+
+@InterfaceAudience.Private
+public class CompositeBucketCache extends CompositeBlockCache {
+  public static final String IOENGINE_L1 = "hbase.bucketcache.l1.ioengine";
+  public static final String IOENGINE_L2 = "hbase.bucketcache.l2.ioengine";
+  public static final String CACHESIZE_L1 = "hbase.bucketcache.l1.size";
+  public static final String CACHESIZE_L2 = "hbase.bucketcache.l2.size";
+  public static final String WRITER_THREADS_L1 = 
"hbase.bucketcache.l1.writer.threads";
+  public static final String WRITER_THREADS_L2 = 
"hbase.bucketcache.l2.writer.threads";
+  public static final String WRITER_QUEUE_LENGTH_L1 = 
"hbase.bucketcache.l1.writer.queuelength";
+  public static final String WRITER_QUEUE_LENGTH_L2 = 
"hbase.bucketcache.l2.writer.queuelength";
+  public static final String PERSISTENT_PATH_L1 = 
"hbase.bucketcache.l1.persistent.path";
+  public static final String PERSISTENT_PATH_L2 = 
"hbase.bucketcache.l2.persistent.path";
+
+  public CompositeBucketCache(BucketCache l1Cache, BucketCache l2Cache) {
+super(l1Cache, l2Cache);
 
 Review comment:
  Yes, maybe I misunderstood; I will do this later.
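For context, the two-level lookup idea under review can be sketched as follows. The class and promotion policy here are simplified stand-ins, not the actual HBase CompositeBucketCache implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a two-tier (L1/L2) block cache lookup. Real HBase caches
// have eviction, stats, and byte-buffer management; this only shows tiering.
class TieredCacheSketch {
    final Map<String, byte[]> l1 = new HashMap<>();
    final Map<String, byte[]> l2 = new HashMap<>();

    byte[] getBlock(String key) {
        byte[] block = l1.get(key);
        if (block != null) {
            return block;           // L1 hit
        }
        block = l2.get(key);
        if (block != null) {
            l1.put(key, block);     // promote hot blocks from L2 into L1
        }
        return block;               // null on a full miss
    }
}
```

An L2 hit promotes the block so subsequent reads are served from the faster tier.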


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (HBASE-23337) Several modules missing in nexus for Apache HBase 2.2.2

2019-11-27 Thread Sean Busbey (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-23337:

Attachment: HBASE-23337.0.patch

> Several modules missing in nexus for Apache HBase 2.2.2
> ---
>
> Key: HBASE-23337
> URL: https://issues.apache.org/jira/browse/HBASE-23337
> Project: HBase
>  Issue Type: Bug
>  Components: build, community, scripts
>Affects Versions: 2.2.2
>Reporter: Chao
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HBASE-23337.0.patch
>
>
> The latest version of hbase-shaded-client is currently 2.2.1. It has been a 
> while since 2.2.2 release (2019/10/25). See: 
> [https://search.maven.org/search?q=hbase-shaded-client].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-22749) Distributed MOB compactions

2019-11-27 Thread HBase QA (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16984187#comment-16984187
 ] 

HBase QA commented on HBASE-22749:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 18 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
32s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
23s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
50s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
 1s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  4m 
30s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
28s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
36s{color} | {color:red} hbase-server: The patch generated 78 new + 326 
unchanged - 47 fixed = 404 total (was 373) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
15s{color} | {color:red} hbase-it: The patch generated 2 new + 0 unchanged - 0 
fixed = 2 total (was 0) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 43 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
 1s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
17m 17s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.8.5 2.9.2 or 3.1.2. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  4m 
51s{color} | {color:red} hbase-server generated 2 new + 0 unchanged - 0 fixed = 
2 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}226m 11s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
4s{color} | {color:green} hbase-it in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}292m  3s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hbase-server |
|  |  Possible null pointer dereference of mobRefData in 
org.apache.hadoop.hbase.master.MobFileCleanerChore.cleanupObsoleteMobFiles(C

[jira] [Commented] (HBASE-22514) Move rsgroup feature into core of HBase

2019-11-27 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16984185#comment-16984185
 ] 

Hudson commented on HBASE-22514:


Results for branch HBASE-22514
[build #193 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/193/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/193//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/193//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-22514/193//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Move rsgroup feature into core of HBase
> ---
>
> Key: HBASE-22514
> URL: https://issues.apache.org/jira/browse/HBASE-22514
> Project: HBase
>  Issue Type: Umbrella
>  Components: Admin, Client, rsgroup
>Reporter: Yechao Chen
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HBASE-22514.master.001.patch, 
> image-2019-05-31-18-25-38-217.png
>
>
> The class RSGroupAdminClient is not public. We need to use the Java API 
> RSGroupAdminClient to manage region server groups, so RSGroupAdminClient 
> should be public.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] ddupg commented on a change in pull request #881: HBASE-23345 Table need to replication unless all of cfs are excluded

2019-11-27 Thread GitBox
ddupg commented on a change in pull request #881: HBASE-23345 Table need to 
replication unless all of cfs are excluded
URL: https://github.com/apache/hbase/pull/881#discussion_r351616784
 
 

 ##
 File path: 
hbase-client/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationPeerConfig.java
 ##
 @@ -48,4 +64,26 @@ public void testClassMethodsAreBuilderStyle() {
 
 BuilderStyleTest.assertClassesAreBuilderStyle(ReplicationPeerConfig.class);
   }
+
+  @Test
+  public void testNeedToReplication() {
 
 Review comment:
   ok


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] Apache9 commented on a change in pull request #832: HBASE-23298 Refactor LogRecoveredEditsOutputSink and BoundedLogWriter…

2019-11-27 Thread GitBox
Apache9 commented on a change in pull request #832: HBASE-23298 Refactor 
LogRecoveredEditsOutputSink and BoundedLogWriter…
URL: https://github.com/apache/hbase/pull/832#discussion_r351612877
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/wal/OutputSink.java
 ##
 @@ -41,39 +42,36 @@
  * ways of consuming recovered edits.
  */
 @InterfaceAudience.Private
-public abstract class OutputSink {
+abstract class OutputSink {
   private static final Logger LOG = LoggerFactory.getLogger(OutputSink.class);
 
-  protected WALSplitter.PipelineController controller;
+  private WALSplitter.PipelineController controller;
   protected EntryBuffers entryBuffers;
 
 Review comment:
   Can be final?
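The reviewer's suggestion can be illustrated with a minimal sketch; the field types are string stand-ins for the real PipelineController and EntryBuffers:

```java
// Fields assigned exactly once in the constructor can be declared final,
// so the compiler rejects any later reassignment.
class OutputSinkSketch {
    private final String controller;   // stands in for PipelineController
    private final String entryBuffers; // stands in for EntryBuffers

    OutputSinkSketch(String controller, String entryBuffers) {
        this.controller = controller;
        this.entryBuffers = entryBuffers;
    }

    String controller() { return controller; }
}
```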


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] Apache9 commented on a change in pull request #832: HBASE-23298 Refactor LogRecoveredEditsOutputSink and BoundedLogWriter…

2019-11-27 Thread GitBox
Apache9 commented on a change in pull request #832: HBASE-23298 Refactor 
LogRecoveredEditsOutputSink and BoundedLogWriter…
URL: https://github.com/apache/hbase/pull/832#discussion_r351612992
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/wal/OutputSink.java
 ##
 @@ -41,39 +42,36 @@
  * ways of consuming recovered edits.
  */
 @InterfaceAudience.Private
-public abstract class OutputSink {
+abstract class OutputSink {
   private static final Logger LOG = LoggerFactory.getLogger(OutputSink.class);
 
-  protected WALSplitter.PipelineController controller;
+  private WALSplitter.PipelineController controller;
   protected EntryBuffers entryBuffers;
 
-  protected ConcurrentHashMap writers = new 
ConcurrentHashMap<>();
-  protected final ConcurrentHashMap regionMaximumEditLogSeqNum =
-  new ConcurrentHashMap<>();
-
-  protected final List writerThreads = Lists.newArrayList();
-
-  /* Set of regions which we've decided should not output edits */
-  protected final Set blacklistedRegions =
-  Collections.synchronizedSet(new TreeSet<>(Bytes.BYTES_COMPARATOR));
-
-  protected boolean closeAndCleanCompleted = false;
-
-  protected boolean writersClosed = false;
+  private final List writerThreads = Lists.newArrayList();
 
   protected final int numThreads;
 
   protected CancelableProgressable reporter = null;
 
-  protected AtomicLong skippedEdits = new AtomicLong();
+  protected AtomicLong totalSkippedEdits = new AtomicLong();
 
 Review comment:
   Ditto.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] Apache9 commented on a change in pull request #832: HBASE-23298 Refactor LogRecoveredEditsOutputSink and BoundedLogWriter…

2019-11-27 Thread GitBox
Apache9 commented on a change in pull request #832: HBASE-23298 Refactor 
LogRecoveredEditsOutputSink and BoundedLogWriter…
URL: https://github.com/apache/hbase/pull/832#discussion_r351612163
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/wal/EntryBuffers.java
 ##
 @@ -138,21 +129,56 @@ void doneWriting(RegionEntryBuffer buffer) {
 }
   }
 
+  @VisibleForTesting
   synchronized boolean isRegionCurrentlyWriting(byte[] region) {
 return currentlyWriting.contains(region);
   }
 
-  public void waitUntilDrained() {
-synchronized (controller.dataAvailable) {
-  while (totalBuffered > 0) {
-try {
-  controller.dataAvailable.wait(2000);
-} catch (InterruptedException e) {
-  LOG.warn("Got interrupted while waiting for EntryBuffers is 
drained");
-  Thread.interrupted();
-  break;
-}
-  }
+  /**
+   * A buffer of some number of edits for a given region.
+   * This accumulates edits and also provides a memory optimization in order to
+   * share a single byte array instance for the table and region name.
+   * Also tracks memory usage of the accumulated edits.
+   */
+  static class RegionEntryBuffer implements HeapSize {
+private long heapInBuffer = 0;
+final List<WAL.Entry> entryBuffer;
+final TableName tableName;
+final byte[] encodedRegionName;
+
+RegionEntryBuffer(TableName tableName, byte[] region) {
+  this.tableName = tableName;
+  this.encodedRegionName = region;
+  this.entryBuffer = new ArrayList<>();
+}
+
+long appendEntry(WAL.Entry entry) {
+  internify(entry);
+  entryBuffer.add(entry);
+  long incrHeap = entry.getEdit().heapSize() +
+  ClassSize.align(2 * ClassSize.REFERENCE) + // WALKey pointers
+  0; // TODO linkedlist entry
 
 Review comment:
   + 0?
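The bookkeeping being questioned can be sketched like this. The 8-byte alignment and reference size are assumptions mirroring what HBase's ClassSize does on a typical 64-bit JVM:

```java
// Sketch of per-entry heap accounting for a region entry buffer.
class HeapSizeSketch {
    static final long REFERENCE = 8; // assumed reference size, 64-bit JVM

    // Round up to the JVM's 8-byte object alignment, like ClassSize.align.
    static long align(long size) {
        return (size + 7) & ~7L;
    }

    // Heap charged per appended entry: the edit's own heap size plus the
    // aligned overhead of the two references tracked for the WALKey.
    static long entryOverhead(long editHeapSize) {
        return editHeapSize + align(2 * REFERENCE);
    }
}
```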


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (HBASE-23298) Refactor LogRecoveredEditsOutputSink and BoundedLogWriterCreationOutputSink

2019-11-27 Thread Guanghao Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-23298:
---
Fix Version/s: 2.3.0
   3.0.0

> Refactor LogRecoveredEditsOutputSink and BoundedLogWriterCreationOutputSink
> ---
>
> Key: HBASE-23298
> URL: https://issues.apache.org/jira/browse/HBASE-23298
> Project: HBase
>  Issue Type: Improvement
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
>
> I do some refactor work in HBASE-23286. Move them to a new issue.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] infraio commented on a change in pull request #881: HBASE-23345 Table need to replication unless all of cfs are excluded

2019-11-27 Thread GitBox
infraio commented on a change in pull request #881: HBASE-23345 Table need to 
replication unless all of cfs are excluded
URL: https://github.com/apache/hbase/pull/881#discussion_r351610939
 
 

 ##
 File path: 
hbase-client/src/test/java/org/apache/hadoop/hbase/replication/TestReplicationPeerConfig.java
 ##
 @@ -48,4 +64,26 @@ public void testClassMethodsAreBuilderStyle() {
 
 BuilderStyleTest.assertClassesAreBuilderStyle(ReplicationPeerConfig.class);
   }
+
+  @Test
+  public void testNeedToReplication() {
 
 Review comment:
   testNeedToReplicate? Add more test cases?
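Assuming the semantics implied by the PR title (a table still needs replication unless all of its column families are excluded), a test-sized sketch of the check might look like this; the method and parameter names are hypothetical, not the actual ReplicationPeerConfig API:

```java
import java.util.Map;
import java.util.Set;

// Sketch of a "needToReplicate" style check against a peer's exclude list.
class ReplicationCheckSketch {
    static boolean needToReplicate(Set<String> tableFamilies,
                                   Map<String, Set<String>> excludeTableCFs,
                                   String table) {
        Set<String> excluded = excludeTableCFs.get(table);
        if (excluded == null) {
            return true;                        // nothing excluded for this table
        }
        // Replicate if at least one family is not excluded.
        return !excluded.containsAll(tableFamilies);
    }
}
```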


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] Apache9 commented on a change in pull request #832: HBASE-23298 Refactor LogRecoveredEditsOutputSink and BoundedLogWriter…

2019-11-27 Thread GitBox
Apache9 commented on a change in pull request #832: HBASE-23298 Refactor 
LogRecoveredEditsOutputSink and BoundedLogWriter…
URL: https://github.com/apache/hbase/pull/832#discussion_r351609318
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/wal/AbstractRecoveredEditsOutputSink.java
 ##
 @@ -0,0 +1,283 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.wal;
+
+import static 
org.apache.hadoop.hbase.wal.WALSplitUtil.getCompletedRecoveredEditsFilePath;
+import static org.apache.hadoop.hbase.wal.WALSplitUtil.getRegionSplitEditsPath;
+
+import java.io.EOFException;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.CellUtil;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.log.HBaseMarkers;
 
 Review comment:
  I think this is for logging, literally, not for the WAL. It is used in our 
slf4j logging system, isn't it?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-23340) hmaster /hbase/replication/rs session expired (hbase replication default value is true, we don't use it) causes logcleaner to fail to clean oldWALs, which results in oldWALs too large (more than 2TB)

2019-11-27 Thread jackylau (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16984162#comment-16984162
 ] 

jackylau commented on HBASE-23340:
--

Does anyone know a good way to resolve this?

> hmaster /hbase/replication/rs session expired (hbase replication default 
> value is true, we don't use it) causes logcleaner to fail to clean oldWALs, 
> which results in oldWALs too large (more than 2TB)
> -
>
> Key: HBASE-23340
> URL: https://issues.apache.org/jira/browse/HBASE-23340
> Project: HBase
>  Issue Type: Improvement
>  Components: master
>Reporter: jackylau
>Assignee: jackylau
>Priority: Major
> Fix For: master
>
> Attachments: Snipaste_2019-11-21_10-39-25.png, 
> Snipaste_2019-11-21_14-10-36.png
>
>
> An hmaster /hbase/replication/rs session expiry (the hbase replication 
> default value is true; we don't use the feature) prevents the logcleaner 
> from cleaning oldWALs, which results in oldWALs growing too large (more than 
> 2TB).
> !Snipaste_2019-11-21_10-39-25.png!
>  
> !Snipaste_2019-11-21_14-10-36.png!
>  
> We could solve it in one of the following ways:
> 1) Increase the session timeout (but I do not think this is a good idea, 
> because we do not know how long a timeout would be suitable).
> 2) Disable hbase replication. That is not a good idea either, since some of 
> our users rely on this feature.
> 3) Add a retry limit: for example, once the failure has already happened 
> three times, stop the ReplicationLogCleaner and SnapshotCleaner.
> Those are all my ideas; I do not know whether they are suitable. If one is, 
> could I submit a PR?
> Does anyone have a good idea?
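Option 3 above could be sketched as follows; the class name and threshold handling are hypothetical, not the actual HBase cleaner-chore code:

```java
// Count consecutive ZooKeeper session failures observed by the cleaner and
// stop the cleaner chores once a threshold is reached, instead of letting
// oldWALs accumulate silently.
class CleanerGuardSketch {
    private final int maxFailures;
    private int consecutiveFailures = 0;
    private boolean cleanersEnabled = true;

    CleanerGuardSketch(int maxFailures) {
        this.maxFailures = maxFailures;
    }

    void onCleanerRun(boolean zkSessionOk) {
        if (zkSessionOk) {
            consecutiveFailures = 0;          // a healthy run resets the count
            return;
        }
        consecutiveFailures++;
        if (consecutiveFailures >= maxFailures) {
            cleanersEnabled = false;          // stop log/snapshot cleaners
        }
    }

    boolean cleanersEnabled() { return cleanersEnabled; }
}
```

With a threshold of three, the guard trips only after three failures in a row, so transient session blips do not disable cleaning.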



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on issue #832: HBASE-23298 Refactor LogRecoveredEditsOutputSink and BoundedLogWriter…

2019-11-27 Thread GitBox
Apache-HBase commented on issue #832: HBASE-23298 Refactor 
LogRecoveredEditsOutputSink and BoundedLogWriter…
URL: https://github.com/apache/hbase/pull/832#issuecomment-559350782
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 34s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 31s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 56s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m 19s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 31s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  master passed  |
   | +0 :ok: |  spotbugs  |   4m 18s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m 16s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 57s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 57s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 57s |  the patch passed  |
   | -1 :x: |  checkstyle  |   1m 17s |  hbase-server: The patch generated 4 
new + 31 unchanged - 0 fixed = 35 total (was 31)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   4m 34s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  15m 42s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   4m 12s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 161m  1s |  hbase-server in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 35s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 218m 24s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-832/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/832 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 3c41368c90e9 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-832/out/precommit/personality/provided.sh
 |
   | git revision | master / 636fa2c6b0 |
   | Default Java | 1.8.0_181 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-832/8/artifact/out/diff-checkstyle-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-832/8/testReport/
 |
   | Max. process+thread count | 4454 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-832/8/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-23066) Allow cache on write during compactions when prefetching is enabled

2019-11-27 Thread ramkrishna.s.vasudevan (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16984157#comment-16984157
 ] 

ramkrishna.s.vasudevan commented on HBASE-23066:


[~jacob.leblanc]
Do you want to do [~javaman_chen]'s angle of adding a threshold based on some 
size-based config in this JIRA, or in another one?
I can do that in another JIRA if you are busy with other things.
[~javaman_chen], [~anoop.hbase] - FYI.


> Allow cache on write during compactions when prefetching is enabled
> ---
>
> Key: HBASE-23066
> URL: https://issues.apache.org/jira/browse/HBASE-23066
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction, regionserver
>Affects Versions: 1.4.10
>Reporter: Jacob LeBlanc
>Assignee: Jacob LeBlanc
>Priority: Minor
> Fix For: 2.3.0, 1.6.0
>
> Attachments: HBASE-23066.patch, performance_results.png, 
> prefetchCompactedBlocksOnWrite.patch
>
>
> In cases where users care a lot about read performance for tables that are 
> small enough to fit into a cache (or the cache is large enough), 
> prefetchOnOpen can be enabled to make the entire table available in cache 
> after the initial region opening is completed. Any new data can also be 
> guaranteed to be in cache with the cacheBlocksOnWrite setting.
> However, the missing piece is when all blocks are evicted after a compaction. 
> We found very poor performance after compactions for tables under heavy read 
> load and a slower backing filesystem (S3). After a compaction the prefetching 
> threads need to compete with threads servicing read requests and get 
> constantly blocked as a result. 
> This is a proposal to introduce a new cache configuration option that would 
> cache blocks on write during compaction for any column family that has 
> prefetch enabled. This would virtually guarantee all blocks are kept in cache 
> after the initial prefetch on open is completed allowing for guaranteed 
> steady read performance despite a slow backing file system.
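The proposed behavior can be sketched as a single predicate; the parameter names are illustrative, not the actual HBase configuration keys:

```java
// Decide whether a block written by flush or compaction should be cached.
class CacheOnWriteSketch {
    static boolean shouldCacheBlockOnWrite(boolean cacheBlocksOnWrite,
                                           boolean isCompaction,
                                           boolean prefetchOnOpen) {
        if (isCompaction) {
            // Proposed option: cache compaction output for any column family
            // that has prefetch-on-open enabled.
            return prefetchOnOpen;
        }
        return cacheBlocksOnWrite;  // flushes follow the existing setting
    }
}
```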



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23066) Allow cache on write during compactions when prefetching is enabled

2019-11-27 Thread chenxu (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16984145#comment-16984145
 ] 

chenxu commented on HBASE-23066:


bq. On a side note, (Not related to this issue) when we have cache on write ON 
as well as prefetch also On, do we do the caching part for the flushed files 
twice? When it is written, its already been added to cache. Later as part of 
HFile reader open, the prefetch threads will again do a read and add to cache!

I have thought about this recently; maybe we need a conf key to decide which 
operations (flush, compaction, regionOpen, bulkload) can trigger the prefetch. 
That way, if cacheOnWrite is enabled, we can exclude the flush and compaction 
operations from prefetching. I can do this in another JIRA.
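A rough sketch of what such a conf key could look like (every name here is hypothetical, invented for illustration; no such key exists in HBase today):

```java
import java.util.EnumSet;
import java.util.Locale;

// Hypothetical sketch of the suggestion above: one conf key listing which
// operations may trigger prefetch. Names are illustrative, not HBase API.
public class PrefetchTriggers {
  enum Op { FLUSH, COMPACTION, REGION_OPEN, BULKLOAD }

  // Parse a comma-separated value such as "regionOpen,bulkload".
  static EnumSet<Op> parse(String value) {
    EnumSet<Op> ops = EnumSet.noneOf(Op.class);
    for (String token : value.split(",")) {
      String t = token.trim().toUpperCase(Locale.ROOT);
      if (t.equals("REGIONOPEN")) {
        t = "REGION_OPEN";
      }
      if (!t.isEmpty()) {
        ops.add(Op.valueOf(t));
      }
    }
    return ops;
  }

  public static void main(String[] args) {
    // With cacheOnWrite enabled, flush and compaction are excluded here,
    // so prefetch-on-open does not re-read blocks that were cached on write.
    EnumSet<Op> triggers = parse("regionOpen,bulkload");
    System.out.println(triggers.contains(Op.COMPACTION));  // false
    System.out.println(triggers.contains(Op.REGION_OPEN)); // true
  }
}
```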

> Allow cache on write during compactions when prefetching is enabled
> ---
>
> Key: HBASE-23066
> URL: https://issues.apache.org/jira/browse/HBASE-23066
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction, regionserver
>Affects Versions: 1.4.10
>Reporter: Jacob LeBlanc
>Assignee: Jacob LeBlanc
>Priority: Minor
> Fix For: 2.3.0, 1.6.0
>
> Attachments: HBASE-23066.patch, performance_results.png, 
> prefetchCompactedBlocksOnWrite.patch
>
>





[jira] [Commented] (HBASE-23066) Allow cache on write during compactions when prefetching is enabled

2019-11-27 Thread ramkrishna.s.vasudevan (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16984143#comment-16984143
 ] 

ramkrishna.s.vasudevan commented on HBASE-23066:


bq.On a side note, (Not related to this issue) when we have cache on write ON 
as well as prefetch also On, do we do the caching part for the flushed files 
twice? When it is written, its already been added to cache. Later as part of 
HFile reader open, the prefetch threads will again do a read and add to cache!
I checked this part. It seems we just read the block and, if it comes from the 
cache, simply return it, because HFileReaderImpl#readBlock() returns early if 
the block is already cached.

bq.The comment from @chenxu seems valid. Should we see that angle also?
OK, we can look at that, but should it be part of this JIRA, or should we raise 
another JIRA to address it?
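The early-return behavior described above can be sketched as follows (a deliberately simplified illustration; this is not the real HFileReaderImpl code, and all names are made up):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the early-return check: if a block is already cached, readBlock
// returns it without a second filesystem read or a second cache insert.
public class ReadBlockSketch {
  static final Map<String, byte[]> cache = new HashMap<>();
  static int diskReads = 0;

  static byte[] readBlock(String key) {
    byte[] cached = cache.get(key);
    if (cached != null) {
      return cached;                  // already cached: return early
    }
    byte[] fromDisk = new byte[] {1}; // pretend we read from the filesystem
    diskReads++;
    cache.put(key, fromDisk);
    return fromDisk;
  }

  public static void main(String[] args) {
    readBlock("b0");                  // miss: reads "disk", caches the block
    readBlock("b0");                  // hit: served from cache
    System.out.println(diskReads);    // 1
  }
}
```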

> Allow cache on write during compactions when prefetching is enabled
> ---
>
> Key: HBASE-23066
> URL: https://issues.apache.org/jira/browse/HBASE-23066
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction, regionserver
>Affects Versions: 1.4.10
>Reporter: Jacob LeBlanc
>Assignee: Jacob LeBlanc
>Priority: Minor
> Fix For: 2.3.0, 1.6.0
>
> Attachments: HBASE-23066.patch, performance_results.png, 
> prefetchCompactedBlocksOnWrite.patch
>
>





[jira] [Commented] (HBASE-23066) Allow cache on write during compactions when prefetching is enabled

2019-11-27 Thread Anoop Sam John (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16984140#comment-16984140
 ] 

Anoop Sam John commented on HBASE-23066:


Right now, if prefetch is turned on, we do the data prefetch as part of the 
HFile open. Once a compaction is over and committed, the new file's reader 
gets opened and the prefetch job is handed to the prefetch thread pool. 
There are only 4 threads here by default, so this is not a very aggressive 
prefetch. It also avoids the need to do one extra HFile read for the caching 
by the prefetch thread.
With this new config, we write to the cache along with the HFile create 
itself: blocks are added to the cache as they are written to the HFile. So 
it is aggressive. Yes, it helps make the new file's data available from time 0. 
The concern is that this in a way demands 2x the cache size, because the 
compacting files' data might already be in the cache. While the new file is 
being written, those old files are still valid; the new one is not even 
committed by the RS. The size concern is big when it is a major compaction! 
The comment from @chenxu seems valid. Should we see that angle also?

On a side note (not related to this issue): when we have cache-on-write ON as 
well as prefetch ON, do we do the caching part for the flushed files twice? 
When a file is written, its blocks have already been added to the cache. Later, 
as part of the HFile reader open, the prefetch threads will again do a read and 
add them to the cache!
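The 2x sizing concern above can be made concrete with a back-of-envelope sketch (the numbers and names are illustrative, not measurements from HBase):

```java
// During a major compaction with cache-on-write enabled, blocks of the old
// (still-valid) store files can remain cached while the new file's blocks are
// also being inserted, so the worst-case cache demand is roughly the sum.
public class CacheDemand {
  static long worstCaseCachedBytes(long oldFilesBytes, long newFileBytes) {
    // Both generations are resident until the compaction commits and the
    // old files' blocks are evicted.
    return oldFilesBytes + newFileBytes;
  }

  public static void main(String[] args) {
    long oldFiles = 10L << 30; // 10 GiB of compacting store files
    long newFile = 10L << 30;  // a major compaction rewrites roughly all of it
    // 21474836480 bytes, i.e. about 2x the data size
    System.out.println(worstCaseCachedBytes(oldFiles, newFile));
  }
}
```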


> Allow cache on write during compactions when prefetching is enabled
> ---
>
> Key: HBASE-23066
> URL: https://issues.apache.org/jira/browse/HBASE-23066
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction, regionserver
>Affects Versions: 1.4.10
>Reporter: Jacob LeBlanc
>Assignee: Jacob LeBlanc
>Priority: Minor
> Fix For: 2.3.0, 1.6.0
>
> Attachments: HBASE-23066.patch, performance_results.png, 
> prefetchCompactedBlocksOnWrite.patch
>
>





[GitHub] [hbase] ramkrish86 commented on a change in pull request #868: HBASE-23296 Add CompositeBucketCache to support tiered BC

2019-11-27 Thread GitBox
ramkrish86 commented on a change in pull request #868: HBASE-23296 Add 
CompositeBucketCache to support tiered BC
URL: https://github.com/apache/hbase/pull/868#discussion_r351425671
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CompositeBucketCache.java
 ##
 @@ -0,0 +1,40 @@
+/**
+ * Copyright The Apache Software Foundation
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.hadoop.hbase.io.hfile;
+
+import org.apache.hadoop.hbase.io.hfile.bucket.BucketCache;
+import org.apache.yetus.audience.InterfaceAudience;
+
+@InterfaceAudience.Private
+public class CompositeBucketCache extends CompositeBlockCache {
+  public static final String IOENGINE_L1 = "hbase.bucketcache.l1.ioengine";
+  public static final String IOENGINE_L2 = "hbase.bucketcache.l2.ioengine";
+  public static final String CACHESIZE_L1 = "hbase.bucketcache.l1.size";
+  public static final String CACHESIZE_L2 = "hbase.bucketcache.l2.size";
+  public static final String WRITER_THREADS_L1 = 
"hbase.bucketcache.l1.writer.threads";
+  public static final String WRITER_THREADS_L2 = 
"hbase.bucketcache.l2.writer.threads";
+  public static final String WRITER_QUEUE_LENGTH_L1 = 
"hbase.bucketcache.l1.writer.queuelength";
+  public static final String WRITER_QUEUE_LENGTH_L2 = 
"hbase.bucketcache.l2.writer.queuelength";
+  public static final String PERSISTENT_PATH_L1 = 
"hbase.bucketcache.l1.persistent.path";
+  public static final String PERSISTENT_PATH_L2 = 
"hbase.bucketcache.l2.persistent.path";
+
+  public CompositeBucketCache(BucketCache l1Cache, BucketCache l2Cache) {
+super(l1Cache, l2Cache);
 
 Review comment:
   When the L1/L2 bucket cache emits its statistics, you may have to log the 
cache level along with the cache info as well?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] chenxu14 commented on a change in pull request #868: HBASE-23296 Add CompositeBucketCache to support tiered BC

2019-11-27 Thread GitBox
chenxu14 commented on a change in pull request #868: HBASE-23296 Add 
CompositeBucketCache to support tiered BC
URL: https://github.com/apache/hbase/pull/868#discussion_r351576686
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheFactory.java
 ##
 @@ -200,15 +209,39 @@ private static BlockCache 
createExternalBlockcache(Configuration c) {
 
   }
 
-  private static BucketCache createBucketCache(Configuration c) {
-// Check for L2.  ioengine name must be non-null.
-String bucketCacheIOEngineName = c.get(BUCKET_CACHE_IOENGINE_KEY, null);
+  private static BucketCache createBucketCache(Configuration c, CacheLevel 
level) {
+// Check for ioengine name must be non-null.
+String bucketCacheIOEngineName;
+int writerThreads;
+int writerQueueLen;
+String persistentPath;
+switch(level) {
+  case L1:
+bucketCacheIOEngineName = c.get(CompositeBucketCache.IOENGINE_L1, 
null);
+writerThreads = c.getInt(CompositeBucketCache.WRITER_THREADS_L1,
+DEFAULT_BUCKET_CACHE_WRITER_THREADS);
+writerQueueLen = c.getInt(CompositeBucketCache.WRITER_QUEUE_LENGTH_L1,
+DEFAULT_BUCKET_CACHE_WRITER_QUEUE);
+persistentPath = c.get(CompositeBucketCache.PERSISTENT_PATH_L1);
+break;
+  case L2:
+  default:
 
 Review comment:
   Add some log like this:
   LOG.info("Creating BucketCache for {}, ioengine : {}, cacheSize {}.", level, 
bucketCacheIOEngineName, bucketCacheSize);




[jira] [Resolved] (HBASE-23293) [REPLICATION] make ship edits timeout configurable

2019-11-27 Thread Guangxu Cheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guangxu Cheng resolved HBASE-23293.
---
Fix Version/s: 2.2.3
   2.3.0
   3.0.0
   Resolution: Fixed

> [REPLICATION] make ship edits timeout configurable
> --
>
> Key: HBASE-23293
> URL: https://issues.apache.org/jira/browse/HBASE-23293
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Reporter: chenxu
>Assignee: chenxu
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
>
> ReplicationSourceShipper#shipEdits may take a while if bulkload replication is 
> enabled, since we need to copy HFiles from the source cluster, so I think the 
> timeout value should be made configurable.
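For illustration, such a setting could look like the following hbase-site.xml fragment. The key name and value shown here are assumptions for the sake of the example; check the committed patch for the actual property name and default.

```xml
<!-- Hypothetical key name for the configurable ship-edits timeout discussed
     above; not guaranteed to match the name used in the committed patch. -->
<property>
  <name>replication.source.shipedits.timeout</name>
  <!-- 5 minutes in ms, e.g. to allow for bulkload HFile copies -->
  <value>300000</value>
</property>
```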





[jira] [Commented] (HBASE-23293) [REPLICATION] make ship edits timeout configurable

2019-11-27 Thread Guangxu Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16984112#comment-16984112
 ] 

Guangxu Cheng commented on HBASE-23293:
---

Pushed to branch-2 and branch-2.2. Thanks [~javaman_chen] for your contribution.

> [REPLICATION] make ship edits timeout configurable
> --
>
> Key: HBASE-23293
> URL: https://issues.apache.org/jira/browse/HBASE-23293
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Reporter: chenxu
>Assignee: chenxu
>Priority: Minor
>





[GitHub] [hbase] guangxuCheng merged pull request #882: HBASE-23293 [REPLICATION] make ship edits timeout configurable

2019-11-27 Thread GitBox
guangxuCheng merged pull request #882: HBASE-23293 [REPLICATION] make ship 
edits timeout configurable
URL: https://github.com/apache/hbase/pull/882
 
 
   




[GitHub] [hbase] chenxu14 commented on a change in pull request #868: HBASE-23296 Add CompositeBucketCache to support tiered BC

2019-11-27 Thread GitBox
chenxu14 commented on a change in pull request #868: HBASE-23296 Add 
CompositeBucketCache to support tiered BC
URL: https://github.com/apache/hbase/pull/868#discussion_r351573623
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheFactory.java
 ##
 @@ -110,29 +113,35 @@ public static BlockCache createBlockCache(Configuration 
conf) {
   + "we will remove the deprecated config.", 
DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY,
 BLOCKCACHE_BLOCKSIZE_KEY);
 }
-FirstLevelBlockCache l1Cache = createFirstLevelCache(conf);
+BlockCache l1Cache = createFirstLevelCache(conf);
 if (l1Cache == null) {
   return null;
 }
-boolean useExternal = conf.getBoolean(EXTERNAL_BLOCKCACHE_KEY, 
EXTERNAL_BLOCKCACHE_DEFAULT);
-if (useExternal) {
-  BlockCache l2CacheInstance = createExternalBlockcache(conf);
-  return l2CacheInstance == null ?
-  l1Cache :
-  new InclusiveCombinedBlockCache(l1Cache, l2CacheInstance);
+if (conf.getBoolean(EXTERNAL_BLOCKCACHE_KEY, EXTERNAL_BLOCKCACHE_DEFAULT)) 
{
+  BlockCache l2Cache = createExternalBlockcache(conf);
+  return l2Cache == null ? l1Cache : new InclusiveCombinedBlockCache(
+  (FirstLevelBlockCache)l1Cache, l2Cache);
 } else {
   // otherwise use the bucket cache.
-  BucketCache bucketCache = createBucketCache(conf);
-  if (!conf.getBoolean("hbase.bucketcache.combinedcache.enabled", true)) {
-// Non combined mode is off from 2.0
-LOG.warn(
-"From HBase 2.0 onwards only combined mode of LRU cache and bucket 
cache is available");
+  BucketCache l2Cache = createBucketCache(conf, CacheLevel.L2);
+  if (conf.getBoolean(BUCKET_CACHE_COMPOSITE_KEY, false)) {
+return l2Cache == null ? l1Cache : new 
CompositeBucketCache((BucketCache)l1Cache, l2Cache);
+  } else {
+if (!conf.getBoolean("hbase.bucketcache.combinedcache.enabled", true)) 
{
+  // Non combined mode is off from 2.0
+  LOG.warn("From HBase 2.0 onwards only combined mode of LRU cache and 
bucket"
+  + " cache is available");
+}
+return l2Cache == null ? l1Cache : new CombinedBlockCache(
+(FirstLevelBlockCache)l1Cache, l2Cache);
   }
-  return bucketCache == null ? l1Cache : new CombinedBlockCache(l1Cache, 
bucketCache);
 }
   }
 
-  private static FirstLevelBlockCache createFirstLevelCache(final 
Configuration c) {
+  private static BlockCache createFirstLevelCache(final Configuration c) {
+if (c.getBoolean(BUCKET_CACHE_COMPOSITE_KEY, false)) {
+  return createBucketCache(c, CacheLevel.L1);
 
 Review comment:
   > The bucket cache's hash map was considered to take significant space and 
was optimized by some JIRA by @anoopsjohn
   
   That's great! Is there a JIRA to track it? Looking forward to it.




[jira] [Commented] (HBASE-23337) Several modules missing in nexus for Apache HBase 2.2.2

2019-11-27 Thread Duo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16984085#comment-16984085
 ] 

Duo Zhang commented on HBASE-23337:
---

[~busbey] Can you post the in-progress patch here? I could try finishing it, or 
at least make it work for an RM. Clicking close manually on 
https://repository.apache.org/ is not a big deal; it can be improved later.

> Several modules missing in nexus for Apache HBase 2.2.2
> ---
>
> Key: HBASE-23337
> URL: https://issues.apache.org/jira/browse/HBASE-23337
> Project: HBase
>  Issue Type: Bug
>  Components: build, community, scripts
>Affects Versions: 2.2.2
>Reporter: Chao
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 3.0.0
>
>
> The latest version of hbase-shaded-client is currently 2.2.1. It has been a 
> while since 2.2.2 release (2019/10/25). See: 
> [https://search.maven.org/search?q=hbase-shaded-client].





[jira] [Updated] (HBASE-22749) Distributed MOB compactions

2019-11-27 Thread Vladimir Rodionov (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-22749:
--
Status: Patch Available  (was: Open)

> Distributed MOB compactions 
> 
>
> Key: HBASE-22749
> URL: https://issues.apache.org/jira/browse/HBASE-22749
> Project: HBase
>  Issue Type: New Feature
>  Components: mob
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Major
> Attachments: HBASE-22749-branch-2.2-v4.patch, 
> HBASE-22749-master-v1.patch, HBASE-22749-master-v2.patch, 
> HBASE-22749-master-v3.patch, HBASE-22749-master-v4.patch, 
> HBase-MOB-2.0-v3.0.pdf
>
>
> There are several drawbacks in the original MOB 1.0 (Moderate Object 
> Storage) implementation which can limit the adoption of the MOB feature:  
> # MOB compactions are executed in the Master as a chore, which limits 
> scalability because all I/O goes through a single HBase Master server. 
> # The Yarn/MapReduce framework is required to run MOB compactions in a scalable 
> way, but this won't work in a stand-alone HBase cluster.
> # Two separate compactors for MOB and for regular store files and their 
> interactions can result in data loss (see HBASE-22075)
> The design goals for MOB 2.0 were to provide a 100% MOB 1.0-compatible 
> implementation which is free of the above drawbacks and can be used as a 
> drop-in replacement in existing MOB deployments. These are the design goals 
> of MOB 2.0:
> # Make MOB compactions scalable without relying on the Yarn/MapReduce framework
> # Provide a unified compactor for both MOB and regular store files
> # Make it more robust, especially w.r.t. data losses. 
> # Simplify and reduce the overall MOB code.
> # Provide a 100% compatible implementation with MOB 1.0.
> # Require no data migration between MOB 1.0 and MOB 2.0 - just a 
> software upgrade.





[jira] [Updated] (HBASE-22749) Distributed MOB compactions

2019-11-27 Thread Vladimir Rodionov (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-22749:
--
Attachment: HBASE-22749-master-v4.patch

> Distributed MOB compactions 
> 
>
> Key: HBASE-22749
> URL: https://issues.apache.org/jira/browse/HBASE-22749
> Project: HBase
>  Issue Type: New Feature
>  Components: mob
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Major
> Attachments: HBASE-22749-branch-2.2-v4.patch, 
> HBASE-22749-master-v1.patch, HBASE-22749-master-v2.patch, 
> HBASE-22749-master-v3.patch, HBASE-22749-master-v4.patch, 
> HBase-MOB-2.0-v3.0.pdf
>
>





[GitHub] [hbase] Apache-HBase commented on issue #884: HBASE-23347 Allowable custom authentication methods for RPCs

2019-11-27 Thread GitBox
Apache-HBase commented on issue #884: HBASE-23347 Allowable custom 
authentication methods for RPCs
URL: https://github.com/apache/hbase/pull/884#issuecomment-559312004
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 57s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  markdownlint was not available.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
6 new or modified test files.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 35s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   5m 16s |  master passed  |
   | +1 :green_heart: |  compile  |   3m  1s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   2m 32s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 36s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   4m  4s |  master passed  |
   | +0 :ok: |  spotbugs  |   3m 45s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |  19m 27s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 17s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   5m 46s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 24s |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 24s |  the patch passed  |
   | -1 :x: |  checkstyle  |   2m 33s |  root: The patch generated 71 new + 49 
unchanged - 12 fixed = 120 total (was 61)  |
   | -1 :x: |  whitespace  |   0m  0s |  The patch has 9 line(s) that end in 
whitespace. Use git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply  |
   | -1 :x: |  shadedjars  |   0m 13s |  patch has 7 errors when building our 
shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  15m 55s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | -1 :x: |  javadoc  |   0m 24s |  hbase-client generated 4 new + 2 
unchanged - 0 fixed = 6 total (was 2)  |
   | -1 :x: |  javadoc  |   2m 49s |  root generated 4 new + 3 unchanged - 0 
fixed = 7 total (was 3)  |
   | -1 :x: |  findbugs  |   4m 20s |  hbase-server generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0)  |
   | -1 :x: |  findbugs  |  13m 32s |  root generated 1 new + 0 unchanged - 0 
fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  10m 11s |  root in the patch failed.  |
   | -1 :x: |  asflicense  |   0m 58s |  The patch generated 4 ASF License 
warnings.  |
   |  |   | 112m 20s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hbase-server |
   |  |  Should 
org.apache.hadoop.hbase.security.provider.DigestSaslServerAuthenticationProvider$SaslDigestCallbackHandler
 be a _static_ inner class?  At 
DigestSaslServerAuthenticationProvider.java:inner class?  At 
DigestSaslServerAuthenticationProvider.java:[lines 66-119] |
   | FindBugs | module:root |
   |  |  Should 
org.apache.hadoop.hbase.security.provider.DigestSaslServerAuthenticationProvider$SaslDigestCallbackHandler
 be a _static_ inner class?  At 
DigestSaslServerAuthenticationProvider.java:inner class?  At 
DigestSaslServerAuthenticationProvider.java:[lines 66-119] |
   | Failed junit tests | 
hadoop.hbase.security.provider.TestSaslClientAuthenticationProviders |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-884/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/884 |
   | Optional Tests | dupname asflicense markdownlint javac javadoc unit 
spotbugs findbugs shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux a4005708683f 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-884/out/precommit/personality/provided.sh
 |
   | git revision | master / 636fa2c6b0 |
   | Default Java | 1.8.0_181 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-884/1/artifact/out/diff-checkstyle-root.txt
 |
   | whitespace | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-884/1/artifact/out/whitespace-eol.txt
 |
   | shadedjars | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-884/1/artifact/out/patch-shadedjars.txt
 |
   | javadoc | 
https://builds.apac

[jira] [Updated] (HBASE-22749) Distributed MOB compactions

2019-11-27 Thread Vladimir Rodionov (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-22749:
--
Status: Open  (was: Patch Available)

> Distributed MOB compactions 
> 
>
> Key: HBASE-22749
> URL: https://issues.apache.org/jira/browse/HBASE-22749
> Project: HBase
>  Issue Type: New Feature
>  Components: mob
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Major
> Attachments: HBASE-22749-branch-2.2-v4.patch, 
> HBASE-22749-master-v1.patch, HBASE-22749-master-v2.patch, 
> HBASE-22749-master-v3.patch, HBASE-22749-master-v4.patch, 
> HBase-MOB-2.0-v3.0.pdf
>
>





[GitHub] [hbase] infraio commented on a change in pull request #832: HBASE-23298 Refactor LogRecoveredEditsOutputSink and BoundedLogWriter…

2019-11-27 Thread GitBox
infraio commented on a change in pull request #832: HBASE-23298 Refactor 
LogRecoveredEditsOutputSink and BoundedLogWriter…
URL: https://github.com/apache/hbase/pull/832#discussion_r351564158
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/wal/AbstractRecoveredEditsOutputSink.java
 ##
 @@ -0,0 +1,283 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.wal;
+
+import static 
org.apache.hadoop.hbase.wal.WALSplitUtil.getCompletedRecoveredEditsFilePath;
+import static org.apache.hadoop.hbase.wal.WALSplitUtil.getRegionSplitEditsPath;
+
+import java.io.EOFException;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.CellUtil;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.log.HBaseMarkers;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.FSUtils;
+import org.apache.hadoop.ipc.RemoteException;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hbase.thirdparty.org.apache.commons.collections4.MapUtils;
+
+@InterfaceAudience.Private
+public abstract class AbstractRecoveredEditsOutputSink extends OutputSink {
+  private static final Logger LOG =
+      LoggerFactory.getLogger(RecoveredEditsOutputSink.class);
+  private WALSplitter walSplitter;
+  private FileSystem walFS;
+  private Configuration conf;
+  private final ConcurrentMap<String, Long> regionMaximumEditLogSeqNum =
+      new ConcurrentHashMap<>();
+
+  public AbstractRecoveredEditsOutputSink(WALSplitter walSplitter,
+      WALSplitter.PipelineController controller, EntryBuffers entryBuffers, int numWriters) {
+    super(controller, entryBuffers, numWriters);
+    this.walSplitter = walSplitter;
+    this.walFS = walSplitter.walFS;
+    this.conf = walSplitter.conf;
+  }
+
+  /**
+   * @return a writer that wraps a {@link WALProvider.Writer} and its Path. Caller should close.
+   */
+  protected RecoveredEditsWriter createWriter(TableName tableName, byte[] region, long seqId)
+      throws IOException {
+    // If we already decided that this region doesn't get any output
+    // we don't need to check again.
+    if (blacklistedRegions.contains(region)) {
 Review comment:
   Let me remove blacklistedRegions stuff.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] infraio commented on a change in pull request #832: HBASE-23298 Refactor LogRecoveredEditsOutputSink and BoundedLogWriter…

2019-11-27 Thread GitBox
infraio commented on a change in pull request #832: HBASE-23298 Refactor 
LogRecoveredEditsOutputSink and BoundedLogWriter…
URL: https://github.com/apache/hbase/pull/832#discussion_r351563972
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/wal/AbstractRecoveredEditsOutputSink.java
 ##
 @@ -0,0 +1,283 @@
+@InterfaceAudience.Private
+public abstract class AbstractRecoveredEditsOutputSink extends OutputSink {
+  private static final Logger LOG = 
LoggerFactory.getLogger(RecoveredEditsOutputSink.class);
+  private WALSplitter walSplitter;
+  private FileSystem walFS;
+  private Configuration conf;
+  private final ConcurrentMap regionMaximumEditLogSeqNum = new 
ConcurrentHashMap<>();
+
+  public AbstractRecoveredEditsOutputSink(WALSplitter walSplitter,
+  WALSplitter.PipelineController controller, EntryBuffers entryBuffers, 
int numWriters) {
+super(controller, entryBuffers, numWriters);
+this.walSplitter = walSplitter;
+this.walFS = walSplitter.walFS;
+this.conf = walSplitter.conf;
+  }
+
+  /**
+   * @return a writer that wraps a {@link WALProvider.Writer} and its Path. 
Caller should close.
+   */
+  protected RecoveredEditsWriter createWriter(TableName tableName, byte[] 
region, long seqId)
+  throws IOException {
+// If we already decided that this region doesn't get any output
+// we don't need to check again.
+if (blacklistedRegions.contains(region)) {
+  return null;
+}
+String tmpDirName = walSplitter.conf
+.get(HConstants.TEMPORARY_FS_DIRECTORY_KEY, 
HConstants.DEFAULT_TEMPORARY_HDFS_DIRECTORY);
+Path regionEditsPath = getRegionSplitEditsPath(tableName, region, seqId,
+walSplitter.getFileBeingSplit().getPath().getName(), tmpDirName, conf);
+if (regionEditsPath == null) {
+  blacklistedRegions.add(region);
+  return null;
+}
+FileSystem walFs = FSUtils.getWALFileSystem(conf);
+if (walFs.exists(regionEditsPath)) {
+  LOG.warn("Found old edits file. It could be the " +
+  "result of a previous failed split attempt. Deleting " + 
regionEditsPath + ", length=" +
+  walFs.getFileStatus(regionEditsPath).getLen());
+  if (!walFs.delete(regionEditsPath, false)) {
+LOG.warn("Failed delete of old {}", regionEditsPath);
+  }
+}
+WALProvider.Writer w = walSplitter.createWriter(regionEditsPath);
+LOG.info("Creating recovered edits writer path={}", regionEditsPath);
+return new RecoveredEditsWriter(regionEditsPath, w, seqId);
+  }
+
+  protected Path closeWriter(String encodedRegionName, RecoveredEditsWriter wap,
+      List<IOException> thrown) throws IOException {
+    try {
+      wap.writer.close();
+    } catch (IOException ioe) {
+      LOG.error("Could not close recovered edits at {}", wap.path, ioe);
+      thrown.add(ioe);
+      return null;
+    }
+    LOG.info("Closed recovered edits writer path={} (wrote {} edits, skipped {} edits in {} ms",
+        wap.path, wap.editsWritten, wap.editsSkipped, wap.nanosSpent / 1000 / 1000);
+  

[GitHub] [hbase] infraio commented on a change in pull request #832: HBASE-23298 Refactor LogRecoveredEditsOutputSink and BoundedLogWriter…

2019-11-27 Thread GitBox
infraio commented on a change in pull request #832: HBASE-23298 Refactor 
LogRecoveredEditsOutputSink and BoundedLogWriter…
URL: https://github.com/apache/hbase/pull/832#discussion_r351563266
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/wal/AbstractRecoveredEditsOutputSink.java
 ##
 @@ -0,0 +1,283 @@
+@InterfaceAudience.Private
+public abstract class AbstractRecoveredEditsOutputSink extends OutputSink {
+  private static final Logger LOG = 
LoggerFactory.getLogger(RecoveredEditsOutputSink.class);
+  private WALSplitter walSplitter;
+  private FileSystem walFS;
+  private Configuration conf;
+  private final ConcurrentMap regionMaximumEditLogSeqNum = new 
ConcurrentHashMap<>();
+
+  public AbstractRecoveredEditsOutputSink(WALSplitter walSplitter,
+  WALSplitter.PipelineController controller, EntryBuffers entryBuffers, 
int numWriters) {
+super(controller, entryBuffers, numWriters);
+this.walSplitter = walSplitter;
+this.walFS = walSplitter.walFS;
+this.conf = walSplitter.conf;
+  }
+
+  /**
+   * @return a writer that wraps a {@link WALProvider.Writer} and its Path. 
Caller should close.
+   */
+  protected RecoveredEditsWriter createWriter(TableName tableName, byte[] 
region, long seqId)
+  throws IOException {
+// If we already decided that this region doesn't get any output
+// we don't need to check again.
+if (blacklistedRegions.contains(region)) {
+  return null;
+}
+String tmpDirName = walSplitter.conf
+.get(HConstants.TEMPORARY_FS_DIRECTORY_KEY, 
HConstants.DEFAULT_TEMPORARY_HDFS_DIRECTORY);
+Path regionEditsPath = getRegionSplitEditsPath(tableName, region, seqId,
+walSplitter.getFileBeingSplit().getPath().getName(), tmpDirName, conf);
+if (regionEditsPath == null) {
+  blacklistedRegions.add(region);
+  return null;
+}
 
 Review comment:
   I thought it should belong here. It is a part of createRecoveredEditsWriter.




[GitHub] [hbase] infraio commented on a change in pull request #832: HBASE-23298 Refactor LogRecoveredEditsOutputSink and BoundedLogWriter…

2019-11-27 Thread GitBox
infraio commented on a change in pull request #832: HBASE-23298 Refactor 
LogRecoveredEditsOutputSink and BoundedLogWriter…
URL: https://github.com/apache/hbase/pull/832#discussion_r351562923
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/wal/AbstractRecoveredEditsOutputSink.java
 ##
 @@ -0,0 +1,283 @@
+@InterfaceAudience.Private
+public abstract class AbstractRecoveredEditsOutputSink extends OutputSink {
+  private static final Logger LOG = 
LoggerFactory.getLogger(RecoveredEditsOutputSink.class);
+  private WALSplitter walSplitter;
+  private FileSystem walFS;
+  private Configuration conf;
+  private final ConcurrentMap regionMaximumEditLogSeqNum = new 
ConcurrentHashMap<>();
+
+  public AbstractRecoveredEditsOutputSink(WALSplitter walSplitter,
+  WALSplitter.PipelineController controller, EntryBuffers entryBuffers, 
int numWriters) {
+super(controller, entryBuffers, numWriters);
+this.walSplitter = walSplitter;
+this.walFS = walSplitter.walFS;
+this.conf = walSplitter.conf;
+  }
+
+  /**
+   * @return a writer that wraps a {@link WALProvider.Writer} and its Path. 
Caller should close.
+   */
+  protected RecoveredEditsWriter createWriter(TableName tableName, byte[] 
region, long seqId)
+  throws IOException {
+// If we already decided that this region doesn't get any output
+// we don't need to check again.
+if (blacklistedRegions.contains(region)) {
 
 Review comment:
   Checked the code. regionEditsPath cannot be null now, so there will be no 
regions that can be added to blacklistedRegions...




[GitHub] [hbase] infraio commented on a change in pull request #832: HBASE-23298 Refactor LogRecoveredEditsOutputSink and BoundedLogWriter…

2019-11-27 Thread GitBox
infraio commented on a change in pull request #832: HBASE-23298 Refactor 
LogRecoveredEditsOutputSink and BoundedLogWriter…
URL: https://github.com/apache/hbase/pull/832#discussion_r351561400
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/wal/AbstractRecoveredEditsOutputSink.java
 ##
 @@ -0,0 +1,283 @@
+package org.apache.hadoop.hbase.wal;
+
+import static 
org.apache.hadoop.hbase.wal.WALSplitUtil.getCompletedRecoveredEditsFilePath;
+import static org.apache.hadoop.hbase.wal.WALSplitUtil.getRegionSplitEditsPath;
+
+import java.io.EOFException;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.CellUtil;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.log.HBaseMarkers;
 
 Review comment:
   HBaseMarkers.FATAL was used in many places. Maybe a new issue?




[GitHub] [hbase] infraio commented on a change in pull request #832: HBASE-23298 Refactor LogRecoveredEditsOutputSink and BoundedLogWriter…

2019-11-27 Thread GitBox
infraio commented on a change in pull request #832: HBASE-23298 Refactor 
LogRecoveredEditsOutputSink and BoundedLogWriter…
URL: https://github.com/apache/hbase/pull/832#discussion_r351561014
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/wal/AbstractRecoveredEditsOutputSink.java
 ##
 @@ -0,0 +1,283 @@
+package org.apache.hadoop.hbase.wal;
+
+import static 
org.apache.hadoop.hbase.wal.WALSplitUtil.getCompletedRecoveredEditsFilePath;
+import static org.apache.hadoop.hbase.wal.WALSplitUtil.getRegionSplitEditsPath;
+
+import java.io.EOFException;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentMap;
 
 Review comment:
   The Map was used in other methods. See line 251.




[jira] [Commented] (HBASE-22749) Distributed MOB compactions

2019-11-27 Thread HBase QA (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16984041#comment-16984041
 ] 

HBase QA commented on HBASE-22749:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  9s{color} 
| {color:red} HBASE-22749 does not apply to master. Rebase required? Wrong 
Branch? See 
https://yetus.apache.org/documentation/in-progress/precommit-patchnames for 
help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HBASE-22749 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12987009/HBASE-22749-master-v3.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/1045/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |


This message was automatically generated.



> Distributed MOB compactions 
> 
>
> Key: HBASE-22749
> URL: https://issues.apache.org/jira/browse/HBASE-22749
> Project: HBase
>  Issue Type: New Feature
>  Components: mob
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Major
> Attachments: HBASE-22749-branch-2.2-v4.patch, 
> HBASE-22749-master-v1.patch, HBASE-22749-master-v2.patch, 
> HBASE-22749-master-v3.patch, HBase-MOB-2.0-v3.0.pdf
>
>
> There are several drawbacks in the original MOB 1.0 (Moderate Object 
> Storage) implementation, which can limit the adoption of the MOB feature:
> # MOB compactions are executed in a Master as a chore, which limits 
> scalability because all I/O goes through a single HBase Master server.
> # The Yarn/Mapreduce framework is required to run MOB compactions in a scalable 
> way, but this won’t work in a stand-alone HBase cluster.
> # Two separate compactors for MOB and for regular store files and their 
> interactions can result in data loss (see HBASE-22075)
> The design goals for MOB 2.0 were to provide a 100% MOB 1.0-compatible 
> implementation, which is free of the above drawbacks and can be used as a 
> drop-in replacement in existing MOB deployments. So, these are the design 
> goals of MOB 2.0:
> # Make MOB compactions scalable without relying on the Yarn/Mapreduce framework
> # Provide a unified compactor for both MOB and regular store files
> # Make it more robust, especially w.r.t. data losses.
> # Simplify and reduce the overall MOB code.
> # Provide a 100% compatible implementation with MOB 1.0.
> # No migration of data should be required between MOB 1.0 and MOB 2.0 - just 
> a software upgrade.





[jira] [Commented] (HBASE-23313) [hbck2] setRegionState should update Master in-memory state too

2019-11-27 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16984039#comment-16984039
 ] 

Hudson commented on HBASE-23313:


Results for branch branch-2
[build #2367 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2367/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2367//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2367//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2367//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> [hbck2] setRegionState should update Master in-memory state too
> ---
>
> Key: HBASE-23313
> URL: https://issues.apache.org/jira/browse/HBASE-23313
> Project: HBase
>  Issue Type: Bug
>  Components: hbck2
>Affects Versions: 2.1.7, 2.2.2
>Reporter: Michael Stack
>Assignee: Wellington Chevreuil
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
>
> setRegionState changes the hbase:meta table info:state column. It does not 
> alter the Master's in-memory state. This means you have to kill the Master and 
> have another assume the Active Master role for a state-change to be noticed. 
> Better if setRegionState just went via the Master and updated both the Master 
> and hbase:meta.





[jira] [Updated] (HBASE-22749) Distributed MOB compactions

2019-11-27 Thread Vladimir Rodionov (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-22749:
--
Status: Patch Available  (was: Open)

> Distributed MOB compactions 
> 
>
> Key: HBASE-22749
> URL: https://issues.apache.org/jira/browse/HBASE-22749
> Project: HBase
>  Issue Type: New Feature
>  Components: mob
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Major
> Attachments: HBASE-22749-branch-2.2-v4.patch, 
> HBASE-22749-master-v1.patch, HBASE-22749-master-v2.patch, 
> HBASE-22749-master-v3.patch, HBase-MOB-2.0-v3.0.pdf
>
>





[jira] [Updated] (HBASE-22749) Distributed MOB compactions

2019-11-27 Thread Vladimir Rodionov (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-22749:
--
Attachment: HBASE-22749-master-v3.patch

> Distributed MOB compactions 
> 
>
> Key: HBASE-22749
> URL: https://issues.apache.org/jira/browse/HBASE-22749
> Project: HBase
>  Issue Type: New Feature
>  Components: mob
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Major
> Attachments: HBASE-22749-branch-2.2-v4.patch, 
> HBASE-22749-master-v1.patch, HBASE-22749-master-v2.patch, 
> HBASE-22749-master-v3.patch, HBase-MOB-2.0-v3.0.pdf
>
>





[jira] [Updated] (HBASE-22749) Distributed MOB compactions

2019-11-27 Thread Vladimir Rodionov (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-22749:
--
Status: Open  (was: Patch Available)

> Distributed MOB compactions 
> 
>
> Key: HBASE-22749
> URL: https://issues.apache.org/jira/browse/HBASE-22749
> Project: HBase
>  Issue Type: New Feature
>  Components: mob
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Major
> Attachments: HBASE-22749-branch-2.2-v4.patch, 
> HBASE-22749-master-v1.patch, HBASE-22749-master-v2.patch, 
> HBase-MOB-2.0-v3.0.pdf
>
>





[jira] [Commented] (HBASE-23347) Pluggable RPC authentication

2019-11-27 Thread Josh Elser (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16984038#comment-16984038
 ] 

Josh Elser commented on HBASE-23347:


Copying from the pull-request:

{quote}
This is a big one. Sorry ;)

Start here with a one-page writeup: 
[https://github.com/joshelser/hbase/blob/23347-pluggable-authentication/PluggableRpcAuthentication.md]

Next, look at the client side interfaces: 
[https://github.com/joshelser/hbase/tree/23347-pluggable-authentication/hbase-client/src/main/java/org/apache/hadoop/hbase/security/provider]

Then, the server side interfaces: 
[https://github.com/joshelser/hbase/tree/23347-pluggable-authentication/hbase-server/src/main/java/org/apache/hadoop/hbase/security/provider]

Finally, an end-to-end example of how you can use this: 
[https://github.com/joshelser/hbase/blob/23347-pluggable-authentication/hbase-server/src/test/java/org/apache/hadoop/hbase/security/provider/TestCustomSaslAuthenticationProvider.java]

This is very much a first draft. This is "new art" for HBase and I expect some 
discussion on how we want to do this safely. I'm fully expecting lots of 
revisions.

Relevant tests should be passing, but this is also partially for me to let the 
QA-bot run. Everything I did was in an attempt to retain backwards 
compatibility with older clients, as well as RPC-impl compatibility. This 
should all be implementation details inside of HBase.
{quote}

> Pluggable RPC authentication
> 
>
> Key: HBASE-23347
> URL: https://issues.apache.org/jira/browse/HBASE-23347
> Project: HBase
>  Issue Type: Improvement
>  Components: rpc, security
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 3.0.0
>
>
> Today in HBase, we rely on SASL to implement Kerberos and delegation token 
> authentication. The RPC client and server logic is very tightly coupled to 
> our three authentication mechanisms (the two previously mentioned plus simple 
> auth'n) for no good reason (other than "that's how it was built", as best I 
> can tell).
> SASL's function is to decouple the "application" from how a request is being 
> authenticated, which means that, to support a variety of other authentication 
> approaches, we just need to be a little more flexible in letting developers 
> create their own authentication mechanisms for HBase.
> This is less about the "average joe" user writing their own authentication 
> plugin (eek), and more about allowing us HBase developers to start iterating 
> and see what is possible.
> I'll attach a full write-up on what I have today as to how I think we can add 
> these abstractions, as well as an initial implementation of this idea, with a 
> unit test that shows an end-to-end authentication solution against HBase.
> cc/ [~wchevreuil] as he's been working with me behind the scenes, giving lots 
> of great feedback and support.





[GitHub] [hbase] joshelser opened a new pull request #884: HBASE-23347 Allowable custom authentication methods for RPCs

2019-11-27 Thread GitBox
joshelser opened a new pull request #884: HBASE-23347 Allowable custom 
authentication methods for RPCs
URL: https://github.com/apache/hbase/pull/884
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (HBASE-23347) Pluggable RPC authentication

2019-11-27 Thread Josh Elser (Jira)
Josh Elser created HBASE-23347:
--

 Summary: Pluggable RPC authentication
 Key: HBASE-23347
 URL: https://issues.apache.org/jira/browse/HBASE-23347
 Project: HBase
  Issue Type: Improvement
  Components: rpc, security
Reporter: Josh Elser
Assignee: Josh Elser
 Fix For: 3.0.0


Today in HBase, we rely on SASL to implement Kerberos and delegation token 
authentication. The RPC client and server logic is very tightly coupled to our 
three authentication mechanisms (the two previously mentioned plus simple 
auth'n) for no good reason (other than "that's how it was built", as best I 
can tell).

SASL's function is to decouple the "application" from how a request is being 
authenticated, which means that, to support a variety of other authentication 
approaches, we just need to be a little more flexible in letting developers 
create their own authentication mechanisms for HBase.

This is less about the "average joe" user writing their own authentication 
plugin (eek), and more about allowing us HBase developers to start iterating 
and see what is possible.

I'll attach a full write-up on what I have today as to how I think we can add 
these abstractions, as well as an initial implementation of this idea, with a 
unit test that shows an end-to-end authentication solution against HBase.

cc/ [~wchevreuil] as he's been working with me behind the scenes, giving lots 
of great feedback and support.
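To see the decoupling SASL provides, it helps to look at the JDK API directly: the caller names a mechanism and supplies callbacks, and the negotiation details stay behind the `SaslClient` interface. Below is a minimal, hedged sketch using the JDK's built-in PLAIN mechanism; the class name and the "hbase"/"server.example.com" protocol and server strings are illustrative, not HBase's actual wiring.

```java
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.sasl.Sasl;
import javax.security.sasl.SaslClient;
import java.nio.charset.StandardCharsets;

public class SaslPlainDemo {

    // Build the initial PLAIN response ("\0user\0password", RFC 4616) without
    // the caller ever touching the wire format: the mechanism does the work.
    static byte[] initialResponse(String user, String password) throws Exception {
        CallbackHandler handler = (Callback[] callbacks) -> {
            for (Callback cb : callbacks) {
                if (cb instanceof NameCallback) {
                    ((NameCallback) cb).setName(user);
                } else if (cb instanceof PasswordCallback) {
                    ((PasswordCallback) cb).setPassword(password.toCharArray());
                }
            }
        };
        SaslClient client = Sasl.createSaslClient(
            new String[] {"PLAIN"},  // swap in another mechanism; caller code is unchanged
            null, "hbase", "server.example.com", null, handler);
        return client.hasInitialResponse()
            ? client.evaluateChallenge(new byte[0]) : new byte[0];
    }

    public static void main(String[] args) throws Exception {
        byte[] resp = initialResponse("alice", "secret");
        // NUL separators made visible for printing
        System.out.println(new String(resp, StandardCharsets.UTF_8).replace('\0', '|'));
    }
}
```

Swapping `"PLAIN"` for another registered mechanism is the whole point: the RPC layer only needs the `evaluateChallenge` loop, which is what the proposed provider interfaces abstract over.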





[GitHub] [hbase] bharathv commented on a change in pull request #872: HBASE-23333 Provide call context around timeouts and other failure scenarios

2019-11-27 Thread GitBox
bharathv commented on a change in pull request #872: HBASE-23333 Provide call 
context around timeouts and other failure scenarios
URL: https://github.com/apache/hbase/pull/872#discussion_r351426984
 
 

 ##
 File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/BlockingRpcConnection.java
 ##
 @@ -194,7 +194,7 @@ public void run() {
 // exception here means the call has not been added to the 
pendingCalls yet, so we need
 // to fail it by our own.
 if (LOG.isDebugEnabled()) {
-  LOG.debug("call write error for call #" + call.id, e);
+  LOG.debug("call write error for " + call, e);
 
 Review comment:
   nit: use parameterized logging and skip the if check?
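For context, SLF4J-style parameterized logging defers message formatting and the argument's `toString()` until the level is known to be enabled, which is why the explicit `isDebugEnabled()` guard can go. A toy stand-in logger (not the real SLF4J API) that illustrates the semantics:

```java
public class ParamLoggingDemo {

    // Toy logger mimicking the relevant SLF4J behavior: the level check
    // happens inside debug(), and the argument is only stringified then.
    static class MiniLogger {
        boolean debugEnabled;
        String lastMessage;

        void debug(String format, Object arg, Throwable t) {
            if (!debugEnabled) {
                return;  // arg.toString() never runs when the level is off
            }
            lastMessage = format.replace("{}", String.valueOf(arg));
        }
    }

    public static void main(String[] args) {
        MiniLogger log = new MiniLogger();
        Object call = new Object() {
            @Override public String toString() { return "callId=42"; }
        };
        Exception cause = new Exception("connection reset");

        // Old style: if (LOG.isDebugEnabled()) LOG.debug("... " + call, e);
        // Parameterized style: no guard needed, formatting is deferred.
        log.debug("call write error for {}", call, cause);  // dropped: level off
        log.debugEnabled = true;
        log.debug("call write error for {}", call, cause);
        System.out.println(log.lastMessage);  // prints call write error for callId=42
    }
}
```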




[GitHub] [hbase] bharathv commented on a change in pull request #872: HBASE-23333 Provide call context around timeouts and other failure scenarios

2019-11-27 Thread GitBox
bharathv commented on a change in pull request #872: HBASE-23333 Provide call 
context around timeouts and other failure scenarios
URL: https://github.com/apache/hbase/pull/872#discussion_r351427841
 
 

 ##
 File path: hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/Call.java
 ##
 @@ -78,8 +78,13 @@ protected Call(int id, final Descriptors.MethodDescriptor 
md, Message param,
 
   @Override
   public String toString() {
-return "callId: " + this.id + " methodName: " + this.md.getName() + " 
param {"
-+ (this.param != null ? ProtobufUtil.getShortTextFormat(this.param) : 
"") + "}";
+return new ToStringBuilder(this, ToStringStyle.SHORT_PREFIX_STYLE)
 
 Review comment:
   nice..
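For readers unfamiliar with it, commons-lang3's `ToStringBuilder` with `SHORT_PREFIX_STYLE` renders output shaped like `SimpleClassName[field=value,field=value]`. A dependency-free mimic of that output shape (illustrative only; this is not the commons-lang implementation):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ShortPrefixStyleDemo {

    // Roughly what ToStringBuilder(this, ToStringStyle.SHORT_PREFIX_STYLE)
    // produces: "ClassName[k1=v1,k2=v2]".
    static String shortPrefix(String className, Map<String, Object> fields) {
        StringBuilder sb = new StringBuilder(className).append('[');
        boolean first = true;
        for (Map.Entry<String, Object> e : fields.entrySet()) {
            if (!first) {
                sb.append(',');
            }
            sb.append(e.getKey()).append('=').append(e.getValue());
            first = false;
        }
        return sb.append(']').toString();
    }

    public static void main(String[] args) {
        Map<String, Object> fields = new LinkedHashMap<>();  // insertion order preserved
        fields.put("id", 7);
        fields.put("methodName", "Get");
        System.out.println(shortPrefix("Call", fields));  // prints Call[id=7,methodName=Get]
    }
}
```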




[GitHub] [hbase] bharathv commented on a change in pull request #872: HBASE-23333 Provide call context around timeouts and other failure scenarios

2019-11-27 Thread GitBox
bharathv commented on a change in pull request #872: HBASE-23333 Provide call 
context around timeouts and other failure scenarios
URL: https://github.com/apache/hbase/pull/872#discussion_r351427070
 
 

 ##
 File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/BlockingRpcConnection.java
 ##
 @@ -628,7 +628,7 @@ private void writeRequest(Call call) throws IOException {
 call.callStats.setRequestSizeBytes(write(this.out, requestHeader, 
call.param, cellBlock));
   } catch (Throwable t) {
 if(LOG.isTraceEnabled()) {
-  LOG.trace("Error while writing call, call_id:" + call.id, t);
+  LOG.trace("Error while writing " + call, t);
 
 Review comment:
   same.




[GitHub] [hbase] bharathv commented on a change in pull request #880: HBASE-18382 add transport type info into Thrift UI

2019-11-27 Thread GitBox
bharathv commented on a change in pull request #880: HBASE-18382 add transport 
type info into Thrift UI
URL: https://github.com/apache/hbase/pull/880#discussion_r351421796
 
 

 ##
 File path: hbase-thrift/src/main/resources/hbase-webapps/thrift/thrift.jsp
 ##
 @@ -31,9 +31,12 @@ long startcode = conf.getLong("startcode", 
System.currentTimeMillis());
 String listenPort = conf.get("hbase.regionserver.thrift.port", "9090");
 String serverInfo = listenPort + "," + String.valueOf(startcode);
 ImplType implType = ImplType.getServerImpl(conf);
-String framed = implType.isAlwaysFramed()
-? "true" : conf.get("hbase.regionserver.thrift.framed", "false");
-String compact = conf.get("hbase.regionserver.thrift.compact", "false");
+String framed = (implType.isAlwaysFramed()
+? "true" : conf.get("hbase.regionserver.thrift.framed", "false"))
+.equals("true") ? "Framed": "Standard";
+String compact = conf.get("hbase.regionserver.thrift.compact", "false")
 
 Review comment:
   Also switch to conf.getBoolean() and avoid equals..?
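The suggestion works because Hadoop's `Configuration.getBoolean(key, default)` trims the value, matches "true"/"false" case-insensitively, and otherwise returns the default, so the string `.equals("true")` chains collapse to a boolean test. A small `Properties`-backed mimic of that behavior (the mimic, not Hadoop's class, is what runs here):

```java
import java.util.Properties;

public class GetBooleanDemo {

    // Minimal imitation of Hadoop Configuration.getBoolean semantics.
    static boolean getBoolean(Properties conf, String key, boolean defaultValue) {
        String v = conf.getProperty(key);
        if (v == null) {
            return defaultValue;
        }
        v = v.trim();
        if ("true".equalsIgnoreCase(v)) {
            return true;
        }
        if ("false".equalsIgnoreCase(v)) {
            return false;
        }
        return defaultValue;  // unparseable values fall back to the default
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        conf.setProperty("hbase.regionserver.thrift.compact", "TRUE");
        // The jsp then only maps the boolean to a display label.
        String protocol = getBoolean(conf, "hbase.regionserver.thrift.compact", false)
            ? "Compact" : "Binary";
        System.out.println(protocol);  // prints Compact
    }
}
```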




[GitHub] [hbase] bharathv commented on a change in pull request #880: HBASE-18382 add transport type info into Thrift UI

2019-11-27 Thread GitBox
bharathv commented on a change in pull request #880: HBASE-18382 add transport 
type info into Thrift UI
URL: https://github.com/apache/hbase/pull/880#discussion_r351422319
 
 

 ##
 File path: hbase-thrift/src/main/resources/hbase-webapps/thrift/thrift.jsp
 ##
 @@ -112,14 +115,19 @@ String compact = 
conf.get("hbase.regionserver.thrift.compact", "false");
 Thrift RPC engine implementation type chosen by this Thrift 
server
 
 
-Compact Protocol
+Protocol
 <%= compact %>
-Thrift RPC engine uses compact protocol
+Thrift RPC engine protocol type
 
 
-Framed Transport
+Transport
 <%= framed %>
-Thrift RPC engine uses framed transport
+Thrift RPC engine transport type
+
+
+Quality Of Protection
+<%= qop %>
+QOP settings for SASL 
 
 Review comment:
   nit: extra space after SASL.




[GitHub] [hbase] bharathv commented on a change in pull request #880: HBASE-18382 add transport type info into Thrift UI

2019-11-27 Thread GitBox
bharathv commented on a change in pull request #880: HBASE-18382 add transport 
type info into Thrift UI
URL: https://github.com/apache/hbase/pull/880#discussion_r351414777
 
 

 ##
 File path: hbase-thrift/src/main/resources/hbase-webapps/thrift/thrift.jsp
 ##
 @@ -31,9 +31,12 @@ long startcode = conf.getLong("startcode", 
System.currentTimeMillis());
 String listenPort = conf.get("hbase.regionserver.thrift.port", "9090");
 String serverInfo = listenPort + "," + String.valueOf(startcode);
 ImplType implType = ImplType.getServerImpl(conf);
-String framed = implType.isAlwaysFramed()
-? "true" : conf.get("hbase.regionserver.thrift.framed", "false");
-String compact = conf.get("hbase.regionserver.thrift.compact", "false");
+String framed = (implType.isAlwaysFramed()
+? "true" : conf.get("hbase.regionserver.thrift.framed", "false"))
+.equals("true") ? "Framed": "Standard";
+String compact = conf.get("hbase.regionserver.thrift.compact", "false")
 
 Review comment:
   nit: String protocol =...




[GitHub] [hbase] bharathv commented on a change in pull request #880: HBASE-18382 add transport type info into Thrift UI

2019-11-27 Thread GitBox
bharathv commented on a change in pull request #880: HBASE-18382 add transport 
type info into Thrift UI
URL: https://github.com/apache/hbase/pull/880#discussion_r351414082
 
 

 ##
 File path: hbase-thrift/src/main/resources/hbase-webapps/thrift/thrift.jsp
 ##
 @@ -31,9 +31,12 @@ long startcode = conf.getLong("startcode", 
System.currentTimeMillis());
 String listenPort = conf.get("hbase.regionserver.thrift.port", "9090");
 String serverInfo = listenPort + "," + String.valueOf(startcode);
 ImplType implType = ImplType.getServerImpl(conf);
-String framed = implType.isAlwaysFramed()
-? "true" : conf.get("hbase.regionserver.thrift.framed", "false");
-String compact = conf.get("hbase.regionserver.thrift.compact", "false");
+String framed = (implType.isAlwaysFramed()
+? "true" : conf.get("hbase.regionserver.thrift.framed", "false"))
 
 Review comment:
   "true" -> "Framed"?
   
   nit: I think the logic has become nested and is unreadable for a ternary 
operator. Mind switching to an if-else?
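The nested ternary the reviewer flags could be flattened along these lines. This is a sketch only: it uses a plain boolean in place of `implType.isAlwaysFramed()` and a `Map` in place of the Hadoop `Configuration`, and the method name is illustrative.

```java
import java.util.HashMap;
import java.util.Map;

public class FramedLabelDemo {

    // Readable replacement for the nested ternary: decide the boolean first,
    // then map it to a display label in a plain if-else.
    static String transportLabel(boolean alwaysFramed, Map<String, String> conf) {
        boolean framed = alwaysFramed
            || Boolean.parseBoolean(conf.getOrDefault("hbase.regionserver.thrift.framed", "false"));
        if (framed) {
            return "Framed";
        } else {
            return "Standard";
        }
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        System.out.println(transportLabel(true, conf));   // prints Framed
        conf.put("hbase.regionserver.thrift.framed", "true");
        System.out.println(transportLabel(false, conf));  // prints Framed
        conf.put("hbase.regionserver.thrift.framed", "false");
        System.out.println(transportLabel(false, conf));  // prints Standard
    }
}
```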




[GitHub] [hbase] ramkrish86 commented on a change in pull request #868: HBASE-23296 Add CompositeBucketCache to support tiered BC

2019-11-27 Thread GitBox
ramkrish86 commented on a change in pull request #868: HBASE-23296 Add 
CompositeBucketCache to support tiered BC
URL: https://github.com/apache/hbase/pull/868#discussion_r351415304
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheFactory.java
 ##
 @@ -200,15 +209,39 @@ private static BlockCache 
createExternalBlockcache(Configuration c) {
 
   }
 
-  private static BucketCache createBucketCache(Configuration c) {
-// Check for L2.  ioengine name must be non-null.
-String bucketCacheIOEngineName = c.get(BUCKET_CACHE_IOENGINE_KEY, null);
+  private static BucketCache createBucketCache(Configuration c, CacheLevel 
level) {
+// Check for ioengine name must be non-null.
+String bucketCacheIOEngineName;
+int writerThreads;
+int writerQueueLen;
+String persistentPath;
+switch(level) {
+  case L1:
+bucketCacheIOEngineName = c.get(CompositeBucketCache.IOENGINE_L1, 
null);
+writerThreads = c.getInt(CompositeBucketCache.WRITER_THREADS_L1,
+DEFAULT_BUCKET_CACHE_WRITER_THREADS);
+writerQueueLen = c.getInt(CompositeBucketCache.WRITER_QUEUE_LENGTH_L1,
+DEFAULT_BUCKET_CACHE_WRITER_QUEUE);
+persistentPath = c.get(CompositeBucketCache.PERSISTENT_PATH_L1);
+break;
+  case L2:
+  default:
 
 Review comment:
   We need to LOG that we are creating both L1 and L2 caches as bucket cache.




[GitHub] [hbase] ramkrish86 commented on a change in pull request #868: HBASE-23296 Add CompositeBucketCache to support tiered BC

2019-11-27 Thread GitBox
ramkrish86 commented on a change in pull request #868: HBASE-23296 Add 
CompositeBucketCache to support tiered BC
URL: https://github.com/apache/hbase/pull/868#discussion_r351414048
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CombinedBlockCache.java
 ##
 @@ -38,345 +32,15 @@
  * Metrics are the combined size and hits and misses of both caches.
  */
 @InterfaceAudience.Private
-public class CombinedBlockCache implements ResizableBlockCache, HeapSize {
-  protected final FirstLevelBlockCache l1Cache;
-  protected final BlockCache l2Cache;
-  protected final CombinedCacheStats combinedCacheStats;
+public class CombinedBlockCache extends CompositeBlockCache implements 
ResizableBlockCache {
 
 Review comment:
   So compositeBlockCache is the base now. CombinedBlockCache is a type of it. 
good.
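The L1-over-L2 relationship being discussed boils down to a read path that consults the small first-level tier and falls through to the larger tier on a miss. A toy, single-threaded sketch of that lookup follows; it is not the actual `CompositeBlockCache` code (real block caches also handle eviction, stats, and concurrency), and the promote-on-hit policy is one illustrative choice.

```java
import java.util.HashMap;
import java.util.Map;

public class TieredCacheDemo {

    // Two-tier cache: get() checks L1, then L2, promoting L2 hits into L1.
    static class TieredCache<K, V> {
        final Map<K, V> l1 = new HashMap<>();
        final Map<K, V> l2 = new HashMap<>();

        V get(K key) {
            V v = l1.get(key);
            if (v != null) {
                return v;            // L1 hit
            }
            v = l2.get(key);
            if (v != null) {
                l1.put(key, v);      // promote on L2 hit
            }
            return v;                // null means a miss in both tiers
        }
    }

    public static void main(String[] args) {
        TieredCache<String, String> cache = new TieredCache<>();
        cache.l2.put("block-1", "data");
        System.out.println(cache.get("block-1"));             // prints data
        System.out.println(cache.l1.containsKey("block-1"));  // prints true
    }
}
```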




[GitHub] [hbase] bharathv commented on a change in pull request #883: HBASE-22920 github pr testing job should use dev-support script

2019-11-27 Thread GitBox
bharathv commented on a change in pull request #883: HBASE-22920 github pr 
testing job should use dev-support script
URL: https://github.com/apache/hbase/pull/883#discussion_r351410828
 
 

 ##
 File path: dev-support/Jenkinsfile_GitHub
 ##
 @@ -85,17 +85,7 @@ pipeline {
 mvn --offline --version  || true
 echo "getting machine specs, find in 
${BUILD_URL}/artifact/patchprocess/machine/"
 
 Review comment:
   This path is incorrect, mind removing this line? It is logged inside the 
script [1]. Sample output [2]
   
   [1] 
https://github.com/apache/hbase/blob/636fa2c6b02d67427c2c33f71b8e9977e064debb/dev-support/gather_machine_environment.sh#L41
   [2] 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-883/1/console




[GitHub] [hbase-connectors] iadamcsik-cldr commented on issue #47: HBASE-23295 HBaseContext should use most recent delegation token

2019-11-27 Thread GitBox
iadamcsik-cldr commented on issue #47: HBASE-23295 HBaseContext should use most 
recent delegation token
URL: https://github.com/apache/hbase-connectors/pull/47#issuecomment-559144692
 
 
   @joshelser @meszibalu Could one of you take a look at this change?




[GitHub] [hbase] Apache-HBase commented on issue #832: HBASE-23298 Refactor LogRecoveredEditsOutputSink and BoundedLogWriter…

2019-11-27 Thread GitBox
Apache-HBase commented on issue #832: HBASE-23298 Refactor 
LogRecoveredEditsOutputSink and BoundedLogWriter…
URL: https://github.com/apache/hbase/pull/832#issuecomment-559125739
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 54s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 42s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 58s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m 22s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 36s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  master passed  |
   | +0 :ok: |  spotbugs  |   4m 16s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m 13s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m  4s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 56s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 56s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m 18s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  1s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   4m 38s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  15m 51s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   4m 17s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 163m 47s |  hbase-server in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 34s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 222m 58s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-832/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/832 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 318d19ca052c 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-832/out/precommit/personality/provided.sh
 |
   | git revision | master / 636fa2c6b0 |
   | Default Java | 1.8.0_181 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-832/7/testReport/
 |
   | Max. process+thread count | 4725 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-832/7/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hbase] ramkrish86 commented on a change in pull request #868: HBASE-23296 Add CompositeBucketCache to support tiered BC

2019-11-27 Thread GitBox
ramkrish86 commented on a change in pull request #868: HBASE-23296 Add 
CompositeBucketCache to support tiered BC
URL: https://github.com/apache/hbase/pull/868#discussion_r351330635
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCacheFactory.java
 ##
 @@ -110,29 +113,35 @@ public static BlockCache createBlockCache(Configuration 
conf) {
   + "we will remove the deprecated config.", 
DEPRECATED_BLOCKCACHE_BLOCKSIZE_KEY,
 BLOCKCACHE_BLOCKSIZE_KEY);
 }
-FirstLevelBlockCache l1Cache = createFirstLevelCache(conf);
+BlockCache l1Cache = createFirstLevelCache(conf);
 if (l1Cache == null) {
   return null;
 }
-boolean useExternal = conf.getBoolean(EXTERNAL_BLOCKCACHE_KEY, 
EXTERNAL_BLOCKCACHE_DEFAULT);
-if (useExternal) {
-  BlockCache l2CacheInstance = createExternalBlockcache(conf);
-  return l2CacheInstance == null ?
-  l1Cache :
-  new InclusiveCombinedBlockCache(l1Cache, l2CacheInstance);
+if (conf.getBoolean(EXTERNAL_BLOCKCACHE_KEY, EXTERNAL_BLOCKCACHE_DEFAULT)) 
{
+  BlockCache l2Cache = createExternalBlockcache(conf);
+  return l2Cache == null ? l1Cache : new InclusiveCombinedBlockCache(
+  (FirstLevelBlockCache)l1Cache, l2Cache);
 } else {
   // otherwise use the bucket cache.
-  BucketCache bucketCache = createBucketCache(conf);
-  if (!conf.getBoolean("hbase.bucketcache.combinedcache.enabled", true)) {
-// Non combined mode is off from 2.0
-LOG.warn(
-"From HBase 2.0 onwards only combined mode of LRU cache and bucket 
cache is available");
+  BucketCache l2Cache = createBucketCache(conf, CacheLevel.L2);
+  if (conf.getBoolean(BUCKET_CACHE_COMPOSITE_KEY, false)) {
+return l2Cache == null ? l1Cache : new 
CompositeBucketCache((BucketCache)l1Cache, l2Cache);
+  } else {
+if (!conf.getBoolean("hbase.bucketcache.combinedcache.enabled", true)) 
{
+  // Non combined mode is off from 2.0
+  LOG.warn("From HBase 2.0 onwards only combined mode of LRU cache and 
bucket"
+  + " cache is available");
+}
+return l2Cache == null ? l1Cache : new CombinedBlockCache(
+(FirstLevelBlockCache)l1Cache, l2Cache);
   }
-  return bucketCache == null ? l1Cache : new CombinedBlockCache(l1Cache, 
bucketCache);
 }
   }
 
-  private static FirstLevelBlockCache createFirstLevelCache(final 
Configuration c) {
+  private static BlockCache createFirstLevelCache(final Configuration c) {
+if (c.getBoolean(BUCKET_CACHE_COMPOSITE_KEY, false)) {
+  return createBucketCache(c, CacheLevel.L1);
 
 Review comment:
   Yes, I see it now. The bucket cache's hash map was considered to take 
significant space and was optimized by a JIRA from @anoopsjohn. So now, with 
this tiered cache, it may have some more impact. But all that is for later, 
just saying. Will take a closer look at the patch. 




[jira] [Commented] (HBASE-23336) [CLI] Incorrect row(s) count "clear_deadservers"

2019-11-27 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16983568#comment-16983568
 ] 

Hudson commented on HBASE-23336:


Results for branch master
[build #1549 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1549/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1549//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1549//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1549//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> [CLI] Incorrect row(s) count  "clear_deadservers"
> -
>
> Key: HBASE-23336
> URL: https://issues.apache.org/jira/browse/HBASE-23336
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 3.0.0, 2.1.0, 2.2.0
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.2.3, 2.1.9
>
>
> [HBASE-15849|https://issues.apache.org/jira/browse/HBASE-15849] simplified 
> the format of the command's total runtime, but the clear_deadservers caller 
> was not modified, so it prints the current timestamp instead of the number 
> of rows. 
>  
> {code:java}
> hbase(main):015:0>  clear_deadservers 
> 'kpalanisamy-apache302.openstacklocal,16020'
> SERVERNAME
> kpalanisamy-apache302.openstacklocal,16020,16020
> 1574585488 row(s)
> Took 0.0145 seconds
> {code}
>  
>  





[jira] [Commented] (HBASE-20395) Displaying thrift server type on the thrift page

2019-11-27 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-20395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16983569#comment-16983569
 ] 

Hudson commented on HBASE-20395:


Results for branch master
[build #1549 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1549/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1549//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1549//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1549//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Displaying thrift server type on the thrift page
> 
>
> Key: HBASE-20395
> URL: https://issues.apache.org/jira/browse/HBASE-20395
> Project: HBase
>  Issue Type: Improvement
>  Components: Thrift
>Reporter: Guangxu Cheng
>Assignee: Guangxu Cheng
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-20395.master.001.patch, 
> HBASE-20395.master.002.patch, HBASE-20395.master.003.patch, 
> HBASE-20395.master.004.patch, HBASE-20395.master.005.patch, 
> HBASE-20395.master.005.patch, result.png
>
>
> HBase supports two types of thrift server: thrift and thrift2.
> But after starting the thrift server successfully, we cannot get the thrift 
> server type conveniently. 
> So, displaying the thrift server type on the thrift page may provide some 
> convenience for users.





[jira] [Commented] (HBASE-23117) Bad enum in hbase:meta info:state column can fail loadMeta and stop startup

2019-11-27 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16983571#comment-16983571
 ] 

Hudson commented on HBASE-23117:


Results for branch master
[build #1549 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1549/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1549//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1549//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1549//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Bad enum in hbase:meta info:state column can fail loadMeta and stop startup
> ---
>
> Key: HBASE-23117
> URL: https://issues.apache.org/jira/browse/HBASE-23117
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.2
>Reporter: Michael Stack
>Assignee: Sandeep Pal
>Priority: Minor
> Fix For: 3.0.0, 2.3.0, 2.2.3
>
>
> A bad value in the info:state field in meta made it so the cluster couldn't 
> start up; loadMeta would not succeed. On a bad state, we should note it, 
> compensate, and move on.
> The bad entry was an own goal that happened while trying to fix other issues 
> in a pre-hbck2 cluster.
> Here was the exception:
> {code}
> java.lang.IllegalArgumentException: No enum constant 
> org.apache.hadoop.hbase.master.RegionState.State.1
>   at java.lang.Enum.valueOf(Enum.java:238)
>   at 
> org.apache.hadoop.hbase.master.RegionState$State.valueOf(RegionState.java:37)
>   at 
> org.apache.hadoop.hbase.master.assignment.RegionStateStore.getRegionState(RegionStateStore.java:338)
>   at 
> org.apache.hadoop.hbase.master.assignment.RegionStateStore.visitMetaEntry(RegionStateStore.java:116)
>   at 
> org.apache.hadoop.hbase.master.assignment.RegionStateStore.access$100(RegionStateStore.java:59)
>   at 
> org.apache.hadoop.hbase.master.assignment.RegionStateStore$1.visit(RegionStateStore.java:87)
>   at 
> org.apache.hadoop.hbase.MetaTableAccessor.scanMeta(MetaTableAccessor.java:769)
>   at 
> org.apache.hadoop.hbase.MetaTableAccessor.scanMeta(MetaTableAccessor.java:734)
>   at 
> org.apache.hadoop.hbase.MetaTableAccessor.scanMeta(MetaTableAccessor.java:690)
>   at 
> org.apache.hadoop.hbase.MetaTableAccessor.fullScanRegions(MetaTableAccessor.java:220)
>   at 
> org.apache.hadoop.hbase.master.assignment.RegionStateStore.visitMeta(RegionStateStore.java:77)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.loadMeta(AssignmentManager.java:1248)
>   at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.joinCluster(AssignmentManager.java:1209)
>   at 
> org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:998)
>   at 
> org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2260)
>   at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:583)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
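The failure above comes from `Enum.valueOf` throwing `IllegalArgumentException` on an unrecognized value, which aborts the whole meta scan. A minimal sketch of the "note it, compensate, and move on" approach, with purely illustrative names (this is not the HBase API), could look like:

```java
import java.util.Optional;

public class SafeEnumParse {
    // Illustrative stand-in for RegionState.State; not the real enum.
    enum State { OPEN, CLOSED, OFFLINE }

    // Returns empty instead of throwing on an unknown value, so callers can
    // log the bad cell, compensate, and continue loading meta.
    static Optional<State> parseState(String raw) {
        try {
            return Optional.of(State.valueOf(raw));
        } catch (IllegalArgumentException e) {
            return Optional.empty();
        }
    }

    public static void main(String[] args) {
        System.out.println(parseState("OPEN")); // Optional[OPEN]
        System.out.println(parseState("1"));    // Optional.empty
    }
}
```

Callers would then treat an empty result as "state unknown" rather than letting the exception propagate out of the meta visitor.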



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23313) [hbck2] setRegionState should update Master in-memory state too

2019-11-27 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16983572#comment-16983572
 ] 

Hudson commented on HBASE-23313:


Results for branch master
[build #1549 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1549/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1549//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1549//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1549//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> [hbck2] setRegionState should update Master in-memory state too
> ---
>
> Key: HBASE-23313
> URL: https://issues.apache.org/jira/browse/HBASE-23313
> Project: HBase
>  Issue Type: Bug
>  Components: hbck2
>Affects Versions: 2.1.7, 2.2.2
>Reporter: Michael Stack
>Assignee: Wellington Chevreuil
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
>
> setRegionState changes the hbase:meta table info:state column. It does not 
> alter the Master's in-memory state. This means you have to kill the Master and 
> have another assume the Active Master role for a state-change to be noticed. 
> It would be better if setRegionState just went via the Master and updated both 
> the Master and hbase:meta.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-23293) [REPLICATION] make ship edits timeout configurable

2019-11-27 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-23293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16983567#comment-16983567
 ] 

Hudson commented on HBASE-23293:


Results for branch master
[build #1549 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1549/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1549//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1549//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1549//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> [REPLICATION] make ship edits timeout configurable
> --
>
> Key: HBASE-23293
> URL: https://issues.apache.org/jira/browse/HBASE-23293
> Project: HBase
>  Issue Type: Improvement
>  Components: Replication
>Reporter: chenxu
>Assignee: chenxu
>Priority: Minor
>
> ReplicationSourceShipper#shipEdits may take a while if bulkload replication is 
> enabled, since we have to copy HFiles from the source cluster, so I think the 
> timeout value should be made configurable.
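The pattern described above, replacing a hard-coded timeout with a configurable one that falls back to a default, can be sketched as follows. The key name and default are hypothetical, not the ones the patch actually introduces:

```java
import java.util.HashMap;
import java.util.Map;

public class ShipEditsTimeout {
    // Illustrative configuration key; not the real HBase property name.
    static final String KEY = "replication.source.shipedits.timeout";
    static final long DEFAULT_MS = 60_000L;

    // Read the timeout from configuration, falling back to the default
    // when the key is unset (mirroring Hadoop's Configuration.getLong style).
    static long getTimeout(Map<String, String> conf) {
        String v = conf.get(KEY);
        return v == null ? DEFAULT_MS : Long.parseLong(v);
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        System.out.println(getTimeout(conf)); // 60000 (default)
        conf.put(KEY, "300000");
        System.out.println(getTimeout(conf)); // 300000
    }
}
```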



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (HBASE-23313) [hbck2] setRegionState should update Master in-memory state too

2019-11-27 Thread Wellington Chevreuil (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil resolved HBASE-23313.
--
Resolution: Fixed

Pushed to master and branch-2 branches. There were a few RPC changes, only new 
methods and attributes added.

> [hbck2] setRegionState should update Master in-memory state too
> ---
>
> Key: HBASE-23313
> URL: https://issues.apache.org/jira/browse/HBASE-23313
> Project: HBase
>  Issue Type: Bug
>  Components: hbck2
>Affects Versions: 2.1.7, 2.2.2
>Reporter: Michael Stack
>Assignee: Wellington Chevreuil
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
>
> setRegionState changes the hbase:meta table info:state column. It does not 
> alter the Master's in-memory state. This means you have to kill the Master and 
> have another assume the Active Master role for a state-change to be noticed. 
> It would be better if setRegionState just went via the Master and updated both 
> the Master and hbase:meta.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-23313) [hbck2] setRegionState should update Master in-memory state too

2019-11-27 Thread Wellington Chevreuil (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil updated HBASE-23313:
-
Affects Version/s: 2.1.7
   2.2.2

> [hbck2] setRegionState should update Master in-memory state too
> ---
>
> Key: HBASE-23313
> URL: https://issues.apache.org/jira/browse/HBASE-23313
> Project: HBase
>  Issue Type: Bug
>  Components: hbck2
>Affects Versions: 2.1.7, 2.2.2
>Reporter: Michael Stack
>Assignee: Wellington Chevreuil
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
>
> setRegionState changes the hbase:meta table info:state column. It does not 
> alter the Master's in-memory state. This means you have to kill the Master and 
> have another assume the Active Master role for a state-change to be noticed. 
> It would be better if setRegionState just went via the Master and updated both 
> the Master and hbase:meta.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] wchevreuil commented on a change in pull request #856: HBASE-21776: Avoid duplicate calls to setStoragePolicy which shows up in debug Logging

2019-11-27 Thread GitBox
wchevreuil commented on a change in pull request #856: HBASE-21776: Avoid 
duplicate calls to setStoragePolicy which shows up in debug Logging
URL: https://github.com/apache/hbase/pull/856#discussion_r351271834
 
 

 ##
 File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionFileSystem.java
 ##
 @@ -189,8 +189,14 @@ Path createStoreDir(final String familyName) throws 
IOException {
* @param familyName The name of column family.
* @param policyName The name of the storage policy
*/
-  public void setStoragePolicy(String familyName, String policyName) {
-FSUtils.setStoragePolicy(this.fs, getStoreDir(familyName), policyName);
+  public void setStoragePolicy(String familyName, String policyName) throws 
IOException {
+if (this.fs instanceof HFileSystem) {
 
 Review comment:
   > it's not just the duplicate traces but to avoid the duplicate useless 
function calls.
   
   Well, it's a tradeoff indeed. I would still favour isolating this code from 
specific implementation logic over saving the single additional call. Here we 
are coupling the code a bit more tightly, with one more place to be changed in 
the event of a change to _HFileSystem_, for example. If we agree to get this 
merged though, can we at least make sure we have tests for the two _if_ 
conditions?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Assigned] (HBASE-23345) Table need to replication unless all of cfs are excluded

2019-11-27 Thread Sun Xin (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-23345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sun Xin reassigned HBASE-23345:
---

Assignee: Sun Xin

> Table need to replication unless all of cfs are excluded
> 
>
> Key: HBASE-23345
> URL: https://issues.apache.org/jira/browse/HBASE-23345
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Reporter: Sun Xin
>Assignee: Sun Xin
>Priority: Major
>
> ReplicationPeerConfig.needToReplicate returns false when 
> replicateAllUserTables is true and excludeTableCFsMap excludes only part of a 
> table's column families.
> It should instead judge by whether all of the table's cfs are excluded.
> {code:java}
> public boolean needToReplicate(TableName table) {
>   if (replicateAllUserTables) {
> if (excludeNamespaces != null && 
> excludeNamespaces.contains(table.getNamespaceAsString())) {
>   return false;
> }
> if (excludeTableCFsMap != null && excludeTableCFsMap.containsKey(table)) {
>   return false;
> }
> return true;
>   } else {
> if (namespaces != null && 
> namespaces.contains(table.getNamespaceAsString())) {
>   return true;
> }
> if (tableCFsMap != null && tableCFsMap.containsKey(table)) {
>   return true;
> }
> return false;
>   }
> }
> {code}
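A minimal sketch of the fix suggested above: with replicateAllUserTables set, a table should still be replicated unless every one of its column families is excluded. This is an illustrative standalone version, not the actual ReplicationPeerConfig patch; a null cf list for a table is taken here to mean the whole table is excluded:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class NeedToReplicate {
    // tableCfs: the column families the table actually has.
    // excludeTableCFsMap: table name -> excluded cfs (null list = whole table).
    static boolean needToReplicate(Set<String> tableCfs,
                                   Map<String, List<String>> excludeTableCFsMap,
                                   String table) {
        if (excludeTableCFsMap == null || !excludeTableCFsMap.containsKey(table)) {
            return true;
        }
        List<String> excludedCfs = excludeTableCFsMap.get(table);
        if (excludedCfs == null) {
            return false; // whole table excluded
        }
        // Replicate as long as at least one cf is not excluded.
        return !excludedCfs.containsAll(tableCfs);
    }

    public static void main(String[] args) {
        Map<String, List<String>> exclude = new HashMap<>();
        exclude.put("t1", Arrays.asList("cf1"));
        Set<String> cfs = new HashSet<>(Arrays.asList("cf1", "cf2"));
        System.out.println(needToReplicate(cfs, exclude, "t1")); // true: cf2 still replicates
        exclude.put("t1", Arrays.asList("cf1", "cf2"));
        System.out.println(needToReplicate(cfs, exclude, "t1")); // false: all cfs excluded
    }
}
```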



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on issue #882: HBASE-23293 [REPLICATION] make ship edits timeout configurable

2019-11-27 Thread GitBox
Apache-HBase commented on issue #882: HBASE-23293 [REPLICATION] make ship edits 
timeout configurable
URL: https://github.com/apache/hbase/pull/882#issuecomment-559010821
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 49s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
3 new or modified test files.  |
   ||| _ branch-2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 13s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   5m 48s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   1m 38s |  branch-2 passed  |
   | +1 :green_heart: |  checkstyle  |   2m 11s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   4m 36s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 10s |  branch-2 passed  |
   | +0 :ok: |  spotbugs  |   3m 36s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m 49s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 14s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   5m 19s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 34s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 34s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   2m  5s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   4m 36s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  16m 48s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   5m 24s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 50s |  hbase-common in the patch passed.  
|
   | +1 :green_heart: |  unit  |   0m 38s |  hbase-replication in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 234m 39s |  hbase-server in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   1m 10s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 306m  6s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-882/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/882 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux ad72226f79c6 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-882/out/precommit/personality/provided.sh
 |
   | git revision | branch-2 / ab09e74055 |
   | Default Java | 1.8.0_181 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-882/1/testReport/
 |
   | Max. process+thread count | 4366 (vs. ulimit of 1) |
   | modules | C: hbase-common hbase-replication hbase-server U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-882/1/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] wchevreuil commented on issue #749: HBASE-23205 Correctly update the position of WALs currently being replicated

2019-11-27 Thread GitBox
wchevreuil commented on issue #749: HBASE-23205 Correctly update the position 
of WALs currently being replicated
URL: https://github.com/apache/hbase/pull/749#issuecomment-559008333
 
 
   Hi @JeongDaeKim , apologies for the delay. I think the solution is good, but 
since this is changing considerably how we track the log reading position, I'm 
just taking a conservative approach. I would like to do a bit of testing. 
Please give me until the end of this week to approve it or suggest changes. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] Apache-HBase commented on issue #832: HBASE-23298 Refactor LogRecoveredEditsOutputSink and BoundedLogWriter…

2019-11-27 Thread GitBox
Apache-HBase commented on issue #832: HBASE-23298 Refactor 
LogRecoveredEditsOutputSink and BoundedLogWriter…
URL: https://github.com/apache/hbase/pull/832#issuecomment-558994778
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 34s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 27s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 55s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m 18s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   4m 39s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  master passed  |
   | +0 :ok: |  spotbugs  |   4m  5s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m  3s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 52s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 57s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 57s |  the patch passed  |
   | -1 :x: |  checkstyle  |   1m 16s |  hbase-server: The patch generated 3 
new + 31 unchanged - 0 fixed = 34 total (was 31)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   4m 35s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |  15m 37s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2 or 3.1.2.  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   4m  8s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 158m 31s |  hbase-server in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 34s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 215m 10s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-832/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/832 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux f709678caab3 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-832/out/precommit/personality/provided.sh
 |
   | git revision | master / 0d7a6b9725 |
   | Default Java | 1.8.0_181 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-832/6/artifact/out/diff-checkstyle-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-832/6/testReport/
 |
   | Max. process+thread count | 4527 (vs. ulimit of 1) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-832/6/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) findbugs=3.1.11 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-18382) [Thrift] Add transport type info to info server

2019-11-27 Thread Beata Sudi (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-18382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16983297#comment-16983297
 ] 

Beata Sudi commented on HBASE-18382:


Hi Lars, 

I've spent some time with this task, and I already have a pull request on 
github [see here|https://github.com/apache/hbase/pull/880]. Can you have a 
look at it? Thanks!

> [Thrift] Add transport type info to info server 
> 
>
> Key: HBASE-18382
> URL: https://issues.apache.org/jira/browse/HBASE-18382
> Project: HBase
>  Issue Type: Improvement
>  Components: Thrift
>Reporter: Lars George
>Priority: Minor
>  Labels: beginner
>
> It would be really helpful to know if the Thrift server was started using the 
> HTTP or binary transport. Any additional info, like QOP settings for SASL 
> etc. would be great too. Right now the UI is very limited and shows 
> {{true/false}} for, for example, {{Compact Transport}}. I'd suggest changing 
> this to show something more useful, like this:
> {noformat}
> Thrift Impl Type: non-blocking
> Protocol: Binary
> Transport: Framed
> QOP: Authentication & Confidential
> {noformat}
> or
> {noformat}
> Protocol: Binary + HTTP
> Transport: Standard
> QOP: none
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on issue #883: HBASE-22920 github pr testing job should use dev-support script

2019-11-27 Thread GitBox
Apache-HBase commented on issue #883: HBASE-22920 github pr testing job should 
use dev-support script
URL: https://github.com/apache/hbase/pull/883#issuecomment-558993177
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  5s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 36s |  Maven dependency ordering for branch  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 11s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  There were no new shellcheck 
issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   ||| _ Other Tests _ |
   | +0 :ok: |  asflicense  |   0m  0s |  ASF License check generated no 
output?  |
   |  |   |   2m 51s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.5 Server=19.03.5 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-883/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/883 |
   | Optional Tests | dupname asflicense shellcheck shelldocs |
   | uname | Linux 4cd77387aa06 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/HBase-PreCommit-GitHub-PR_PR-883/out/precommit/personality/provided.sh
 |
   | git revision | master / 636fa2c6b0 |
   | Max. process+thread count | 52 (vs. ulimit of 1) |
   | modules | C:  U:  |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-883/1/console |
   | versions | git=2.11.0 maven=2018-06-17T18:33:14Z) shellcheck=0.7.0 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] bea0113 opened a new pull request #883: HBASE-22920 github pr testing job should use dev-support script

2019-11-27 Thread GitBox
bea0113 opened a new pull request #883: HBASE-22920 github pr testing job 
should use dev-support script
URL: https://github.com/apache/hbase/pull/883
 
 
   The PR tester Jenkinsfile_GitHub has its own set of commands for gathering 
information about the build environment it runs in. Instead, it should rely on 
the dev-support/gather_machine_environment.sh script that the nightly job uses.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] wchevreuil merged pull request #864: HBASE-23313 [hbck2] setRegionState should update Master in-memory sta…

2019-11-27 Thread GitBox
wchevreuil merged pull request #864: HBASE-23313 [hbck2] setRegionState should 
update Master in-memory sta…
URL: https://github.com/apache/hbase/pull/864
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services