[GitHub] [hbase] Apache-HBase commented on pull request #1978: HBASE-24638 Edit doc on (offheap) memory management

2020-06-25 Thread GitBox


Apache-HBase commented on pull request #1978:
URL: https://github.com/apache/hbase/pull/1978#issuecomment-650010126


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   1m 18s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m 41s |  master passed  |
   | +1 :green_heart: |  compile  |   3m  4s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   6m 34s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 15s |  root in master failed.  |
   | -0 :warning: |  javadoc  |   0m 17s |  hbase-common in master failed.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 11s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m 29s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m  5s |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m  5s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m 25s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 16s |  hbase-common in the patch failed.  |
   | -0 :warning: |  javadoc  |   0m 14s |  root in the patch failed.  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   1m 18s |  root in the patch failed.  |
   |  |   |  33m 48s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.9 Server=19.03.9 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1978/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1978 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux bbc62b9c84cb 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / c0461207ee |
   | Default Java | 2020-01-14 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1978/1/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-root.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1978/1/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-common.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1978/1/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-common.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1978/1/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-root.txt
 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1978/1/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-root.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1978/1/testReport/
 |
   | Max. process+thread count | 216 (vs. ulimit of 12500) |
   | modules | C: hbase-common . U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1978/1/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #1970: HBASE-24625 AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length.

2020-06-25 Thread GitBox


Apache-HBase commented on pull request #1970:
URL: https://github.com/apache/hbase/pull/1970#issuecomment-650003425


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   0m 30s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 32s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 58s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 24s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m 49s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 17s |  hbase-asyncfs in master failed.  |
   | -0 :warning: |  javadoc  |   0m 42s |  hbase-server in master failed.  |
   | -0 :warning: |  patch  |   7m 11s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 16s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m  7s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 23s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 23s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m 45s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 17s |  hbase-asyncfs in the patch failed.  
|
   | -0 :warning: |  javadoc  |   0m 39s |  hbase-server in the patch failed.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 25s |  hbase-asyncfs in the patch passed. 
 |
   | +1 :green_heart: |  unit  | 128m 28s |  hbase-server in the patch passed.  
|
   |  |   | 157m 57s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.12 Server=19.03.12 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1970/3/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1970 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux b55e15b83043 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 84e246f9b1 |
   | Default Java | 2020-01-14 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1970/3/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-asyncfs.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1970/3/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1970/3/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-asyncfs.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1970/3/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1970/3/testReport/
 |
   | Max. process+thread count | 3870 (vs. ulimit of 12500) |
   | modules | C: hbase-asyncfs hbase-server U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1970/3/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] saintstack opened a new pull request #1978: HBASE-24638 Edit doc on (offheap) memory management

2020-06-25 Thread GitBox


saintstack opened a new pull request #1978:
URL: https://github.com/apache/hbase/pull/1978


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (HBASE-24638) Edit doc on (offheap) memory management

2020-06-25 Thread Michael Stack (Jira)
Michael Stack created HBASE-24638:
-

 Summary: Edit doc on (offheap) memory management
 Key: HBASE-24638
 URL: https://issues.apache.org/jira/browse/HBASE-24638
 Project: HBase
  Issue Type: Sub-task
  Components: documentation
Reporter: Michael Stack


Gave it a read-over to try to figure out the current state of memory management in 
hbase-2.3.0. Updated it to better reflect the current state.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase-native-client] bharathv commented on a change in pull request #6: HBASE-23105: Download lib double conversion, fizz, update folly

2020-06-25 Thread GitBox


bharathv commented on a change in pull request #6:
URL: https://github.com/apache/hbase-native-client/pull/6#discussion_r445974129



##
File path: CMakeLists.txt
##
@@ -117,23 +124,22 @@ endif()
 
 
include_directories("${JAVA_HBASE_DIR}/hbase-common/target/generated-sources/native/")
 
-## Validate that we have C++ 14 support
+## Validate that we have C++ 17 support

Review comment:
   Is this for std::optional?

##
File path: CMakeLists.txt
##
@@ -154,6 +160,9 @@ endif (OPENSSL_FOUND)
 
 
 if (DOWNLOAD_DEPENDENCIES)
+  download_doubleconversion(${CMAKE_CURRENT_SOURCE_DIR} ${CMAKE_CURRENT_BINARY_DIR})
+  download_fizz(${CMAKE_CURRENT_SOURCE_DIR} ${CMAKE_CURRENT_BINARY_DIR})

Review comment:
   This needs corresponding changes in Dockerfile?

##
File path: cmake/DownloadCyrusSasl.cmake
##
@@ -0,0 +1,37 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+## Download Cyrus SASL
+## SOURCE_DIR is typically the cmake source directory
+## BINARY_DIR is the build directory, typically 'build'
+
+
+function(download_cyrus_sasl SOURCE_DIR BUILD_DIR)
+  ExternalProject_Add(
+  cyrussasl
+  URL "https://github.com/cyrusimap/cyrus-sasl/releases/download/cyrus-sasl-2.1.27/cyrus-sasl-2.1.27.tar.gz"
+  PREFIX "${BUILD_DIR}/dependencies"
+  SOURCE_DIR "${BUILD_DIR}/dependencies/cyrussasl-src"
+  BINARY_DIR ${BUILD_DIR}/dependencies/cyrussasl-src/
+  CONFIGURE_COMMAND ./configure --enable-static --with-pic --prefix=${BUILD_DIR}/dependencies/cyrussasl-install
+    "CFLAGS=-fPIC"
+    "CXXFLAGS=${CMAKE_CXX_FLAGS} -fPIC"
+  )
+  add_library(sasl2 STATIC IMPORTED)

Review comment:
   Move this to CMakeLists like other dependencies?

##
File path: cmake/DownloadWangle.cmake
##
@@ -28,16 +29,21 @@ function(download_wangle SOURCE_DIR BUILD_DIR)
   else()
 set(PATCH_FOLLY "")
   endif() 
-   
+
   ExternalProject_Add(
- facebook-wangle-proj
- URL "https://github.com/facebook/wangle/archive/v2017.09.04.00.tar.gz"
- PREFIX "${BUILD_DIR}/dependencies"
- DOWNLOAD_DIR ${WANGLE_DOWNLOAD_DIR}
- SOURCE_DIR ${WANGLE_SOURCE_DIR}
- PATCH_COMMAND ${PATCH_FOLLY}
- INSTALL_DIR ${WANGLE_INSTALL_DIR}
- CONFIGURE_COMMAND ${CMAKE_COMMAND} -DBUILD_EXAMPLES=OFF -DCMAKE_CROSSCOMPILING=ON -DBUILD_TESTS=OFF -DFOLLY_ROOT_DIR=${FOLLY_ROOT_DIR} -DCMAKE_INSTALL_PREFIX:PATH=${WANGLE_INSTALL_DIR} "${WANGLE_SOURCE_DIR}/wangle" # Tell CMake to use subdirectory as source.
+ facebook-wangle-proj
+   GIT_REPOSITORY "https://github.com/facebook/wangle.git"
+   GIT_TAG "v2020.05.18.00"
+   SOURCE_DIR "${BUILD_DIR}/dependencies/facebook-wangle-proj-src"

Review comment:
   Same comment as above: you've undone the PREFIX path, so I just want to be 
sure that the build directory is clean.

##
File path: cmake/DownloadDoubleConversion.cmake
##
@@ -0,0 +1,34 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+## Download the double-conversion library. 
+## SOURCE_DIR is typically the cmake source directory
+function(download_doubleconversion SOURCE_DIR BUILD_DIR)

Review comment:
   nit: 2-4 indentation (sorry for nit-picking :-))

##
File path: cmake/patches/zookeeper.3.4.14.buf
##
@@ -0,0 +1,4 @@
+3480c3480
+< static char buf[128];

Review comment:
   unused?

##
File path: cmake/ProtobufGen.cmake
##
@@ -36,6 +36,7 @@ function(ge

[jira] [Commented] (HBASE-24382) Flush partial stores of region filtered by seqId when archive wal due to too many wals

2020-06-25 Thread Zheng Wang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17146025#comment-17146025
 ] 

Zheng Wang commented on HBASE-24382:


Thanks a lot.

> Flush partial stores of region filtered by seqId when archive wal due to too 
> many wals
> --
>
> Key: HBASE-24382
> URL: https://issues.apache.org/jira/browse/HBASE-24382
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Reporter: Zheng Wang
>Assignee: Zheng Wang
>Priority: Major
> Fix For: 3.0.0-alpha-1
>
>
> When the logRoller archives the oldest WAL because there are too many WALs and a 
> region should be flushed, we flush all of its stores, but that is not necessary; 
> maybe we can use each store's unflushedSeqId to filter them.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] bsglz commented on pull request #1737: HBASE-24382 Flush partial stores of region filtered by seqId when arc…

2020-06-25 Thread GitBox


bsglz commented on pull request #1737:
URL: https://github.com/apache/hbase/pull/1737#issuecomment-649984842


   Thanks for all the comments.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-24623) SIGSEGV v ~StubRoutines::jbyte_disjoint_arraycopy

2020-06-25 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17146019#comment-17146019
 ] 

Michael Stack commented on HBASE-24623:
---

No offheap BC, no offheap write path, all defaults. I do notice though that 
hbase-2.3.0 is the first release with ByteBufferAllocator vs ByteBufferPool; the 
former does refcounting where the latter did not.
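
To make that hypothesis concrete, here is an entirely hypothetical Java sketch of the 
reference-counting discipline such an allocator relies on; it is not HBase's allocator code, 
just the shape of the hazard: once the final release returns a pooled buffer, any later copy 
from it walks freed or reused memory, which is the kind of fault that surfaces as a segfault 
inside a native arraycopy stub.

{code:java}
import java.util.concurrent.atomic.AtomicInteger;

// Entirely hypothetical: a pooled buffer is only safe to copy from while refCnt > 0.
final class RefCountedBuffer {
  private final byte[] backing;                          // stands in for a pooled offheap chunk
  private final AtomicInteger refCnt = new AtomicInteger(1);

  RefCountedBuffer(int size) {
    this.backing = new byte[size];
  }

  RefCountedBuffer retain() {
    if (refCnt.getAndIncrement() <= 0) {
      throw new IllegalStateException("retain() after the buffer was already freed");
    }
    return this;
  }

  void release() {
    if (refCnt.decrementAndGet() == 0) {
      // A real allocator returns the chunk to the pool here; any later read is use-after-free.
    }
  }

  void copyTo(byte[] dst, int dstOff, int srcOff, int len) {
    if (refCnt.get() <= 0) {
      throw new IllegalStateException("read after release"); // the guard a refcount bug defeats
    }
    System.arraycopy(backing, srcOff, dst, dstOff, len);
  }
}
{code}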

> SIGSEGV v  ~StubRoutines::jbyte_disjoint_arraycopy
> --
>
> Key: HBASE-24623
> URL: https://issues.apache.org/jira/browse/HBASE-24623
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Michael Stack
>Priority: Major
>
> In testing, 1% of a decent cluster went down with this seg fault in the vm:
> {code}
> # A fatal error has been detected by the Java Runtime Environment:
> #
> #  SIGSEGV (0xb) at pc=0x7f6659052410, pid=37208, tid=0x7f3c89453700
> #
> # JRE version: OpenJDK Runtime Environment (8.0_232-b09) (build 1.8.0_232-b09)
> # Java VM: OpenJDK 64-Bit Server VM (25.232-b09 mixed mode linux-amd64 )
> # Problematic frame:
> # v  ~StubRoutines::jbyte_disjoint_arraycopy
> {code}
> Looking in the hs_err log, the crash happens in the same area. Here are a few 
> of the stack traces:
> {code}
> Stack: [0x7f3c89353000,0x7f3c89454000],  sp=0x7f3c89452110,  free 
> space=1020k
> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native 
> code)
> v  ~StubRoutines::jbyte_disjoint_arraycopy
> J 17674 C2 
> org.apache.hadoop.hbase.util.ByteBufferUtils.copyFromBufferToArray([BLjava/nio/ByteBuffer;III)V
>  (69 bytes) @ 0x7f665af000d1 [0x7f665aefffe0+0xf1]
> J 17732 C1 
> org.apache.hadoop.hbase.CellUtil.copyQualifierTo(Lorg/apache/hadoop/hbase/Cell;[BI)I
>  (59 bytes) @ 0x7f665bc440dc [0x7f665bc43b80+0x55c]
> j  
> org.apache.hadoop.hbase.CellUtil.cloneQualifier(Lorg/apache/hadoop/hbase/Cell;)[B+12
> J 22278 C2 org.apache.hadoop.hbase.ByteBufferKeyValue.getQualifierArray()[B 
> (5 bytes) @ 0x7f6659bd4784 [0x7f6659bd4760+0x24]
> j  
> org.apache.hadoop.hbase.CellUtil.getCellKeyAsString(Lorg/apache/hadoop/hbase/Cell;Ljava/util/function/Function;)Ljava/lang/String;+97
> j  
> org.apache.hadoop.hbase.CellUtil.getCellKeyAsString(Lorg/apache/hadoop/hbase/Cell;)Ljava/lang/String;+6
> j  
> org.apache.hadoop.hbase.CellUtil.toString(Lorg/apache/hadoop/hbase/Cell;Z)Ljava/lang/String;+16
> j  org.apache.hadoop.hbase.ByteBufferKeyValue.toString()Ljava/lang/String;+2
> j  
> org.apache.hadoop.hbase.client.Mutation.add(Lorg/apache/hadoop/hbase/Cell;)Lorg/apache/hadoop/hbase/client/Mutation;+28
> J 22605 C2 
> org.apache.hadoop.hbase.client.Put.add(Lorg/apache/hadoop/hbase/Cell;)Lorg/apache/hadoop/hbase/client/Put;
>  (8 bytes) @ 0x7f665a982a04 [0x7f665a9829e0+0x24]
> J 22112 C2 
> org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil.toPut(Lorg/apache/hadoop/hbase/shaded/protobuf/generated/ClientProtos$MutationProto;Lorg/apache/hadoop/hbase/CellScanner;)Lorg/apache/hadoop/hbase/client/Put;
>  (910 bytes) @ 0x7f665c706700 [0x7f665c706000+0x700]
> J 24084 C2 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(Lorg/apache/hadoop/hbase/shaded/protobuf/generated/ClientProtos$RegionActionResult$Builder;Lorg/apache/hadoop/hbase/regionserver/HRegion;Lorg/apache/hadoop/hbase/quotas/OperationQuota;Ljava/util/List;Lorg/apache/hadoop/hbase/CellScanner;Lorg/apache/hadoop/hbase/quotas/ActivePolicyEnforcement;Z)V
>  (646 bytes) @ 0x7f665cc21100 [0x7f665cc20c80+0x480]
> J 14696 C2 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(Lorg/apache/hadoop/hbase/regionserver/HRegion;Lorg/apache/hadoop/hbase/quotas/OperationQuota;Lorg/apache/hadoop/hbase/shaded/protobuf/generated/ClientProtos$RegionAction;Lorg/apache/hadoop/hbase/CellScanner;Lorg/apache/hadoop/hbase/shaded/protobuf/generated/ClientProtos$RegionActionResult$Builder;Ljava/util/List;JLorg/apache/hadoop/hbase/regionserver/RSRpcServices$RegionScannersCloseCallBack;Lorg/apache/hadoop/hbase/ipc/RpcCallContext;Lorg/apache/hadoop/hbase/quotas/ActivePolicyEnforcement;)Ljava/util/List;
>  (901 bytes) @ 0x7f665b722148 [0x7f665b7218e0+0x868]
> {code}
> Here's another:
> {code}
> Stack: [0x7edd015e2000,0x7edd016e3000],  sp=0x7edd016e11b0,  free 
> space=1020k
> Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native 
> code)
> v  ~StubRoutines::jbyte_disjoint_arraycopy
> J 18255 C2 
> org.apache.hadoop.hbase.util.ByteBufferUtils.copyFromBufferToArray([BLjava/nio/ByteBuffer;III)V
>  (69 bytes) @ 0x7f06d2593551 [0x7f06d2593460+0xf1]
> j  
> org.apache.hadoop.hbase.PrivateCellUtil.copyTagsTo(Lorg/apache/hadoop/hbase/Cell;[BI)I+31
> j  
> org.apache.hadoop.hbase.CellUtil.cloneTags(Lorg/apache/hadoop/hbase/Cell;)[B+12
> j  org.apache.ha

[jira] [Resolved] (HBASE-24382) Flush partial stores of region filtered by seqId when archive wal due to too many wals

2020-06-25 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack resolved HBASE-24382.
---
Fix Version/s: 3.0.0-alpha-1
 Hadoop Flags: Reviewed
   Resolution: Fixed

Thanks for the persistence [~filtertip]. Applied to master. I'd be game for 
backporting to branch-2 but too late for branch-2.3 I'd say. Thanks.

> Flush partial stores of region filtered by seqId when archive wal due to too 
> many wals
> --
>
> Key: HBASE-24382
> URL: https://issues.apache.org/jira/browse/HBASE-24382
> Project: HBase
>  Issue Type: Improvement
>  Components: wal
>Reporter: Zheng Wang
>Assignee: Zheng Wang
>Priority: Major
> Fix For: 3.0.0-alpha-1
>
>
> When the logRoller archives the oldest WAL because there are too many WALs and a 
> region should be flushed, we flush all of its stores, but that is not necessary; 
> maybe we can use each store's unflushedSeqId to filter them.
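
For illustration, a minimal Java sketch of the idea in the description; it is not the actual 
HBASE-24382 patch, and the names (unflushedSeqIdByFamily, requiredArchiveSeqId) are 
hypothetical. The real change is wired through HRegion's flushcache.

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

final class PartialFlushSelector {
  // Given each store's earliest unflushed sequence id, select only the stores that still
  // pin edits at or below the sequence id required to archive the oldest WAL.
  static List<byte[]> selectStoresToFlush(Map<byte[], Long> unflushedSeqIdByFamily,
      long requiredArchiveSeqId) {
    List<byte[]> families = new ArrayList<>();
    for (Map.Entry<byte[], Long> e : unflushedSeqIdByFamily.entrySet()) {
      // Only a store whose oldest unflushed edit is no newer than the WAL being archived
      // is actually holding that WAL; it is the one that needs a flush.
      if (e.getValue() <= requiredArchiveSeqId) {
        families.add(e.getKey());
      }
    }
    return families;
  }
}
{code}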



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] saintstack merged pull request #1737: HBASE-24382 Flush partial stores of region filtered by seqId when arc…

2020-06-25 Thread GitBox


saintstack merged pull request #1737:
URL: https://github.com/apache/hbase/pull/1737


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] saintstack commented on a change in pull request #1737: HBASE-24382 Flush partial stores of region filtered by seqId when arc…

2020-06-25 Thread GitBox


saintstack commented on a change in pull request #1737:
URL: https://github.com/apache/hbase/pull/1737#discussion_r445967514



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
##
@@ -2386,6 +2386,16 @@ public FlushResult flush(boolean force) throws IOException {
 boolean isCompactionNeeded();
   }
 
+  public FlushResultImpl flushcache(boolean flushAllStores, boolean writeFlushRequestWalMarker,
+    FlushLifeCycleTracker tracker) throws IOException {
+    List<byte[]> families = null;
+    if (flushAllStores) {
+      families = new ArrayList<>();
+      families.addAll(this.getTableDescriptor().getColumnFamilyNames());
+    }
+    return this.flushcache(families, writeFlushRequestWalMarker, tracker);
+  }

Review comment:
   Good





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-24603) Zookeeper sync() call is async

2020-06-25 Thread HBase QA (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17146009#comment-17146009
 ] 

HBase QA commented on HBASE-24603:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m  
9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} branch-1 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
25s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
20s{color} | {color:green} branch-1 passed with JDK v1.8.0_252 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
32s{color} | {color:green} branch-1 passed with JDK v1.7.0_262 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
58s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
17s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} branch-1 passed with JDK v1.8.0_252 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} branch-1 passed with JDK v1.7.0_262 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
51s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
35s{color} | {color:green} branch-1 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed with JDK v1.8.0_252 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed with JDK v1.7.0_262 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} The patch passed checkstyle in hbase-common {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} hbase-client: The patch generated 0 new + 77 
unchanged - 2 fixed = 77 total (was 79) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
44s{color} | {color:green} The patch passed checkstyle in hbase-server {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
 5s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
4m 53s{color} | {color:green} Patch does not cause any errors with Hadoop 2.8.5 
2.9.2. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed with JDK v1.8.0_252 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed with JDK v1.7.0_262 {color} |
| {color:g

[GitHub] [hbase] Apache-HBase commented on pull request #1976: HBASE-24603: Make Zookeeper sync() call synchronous (#1945)

2020-06-25 Thread GitBox


Apache-HBase commented on pull request #1976:
URL: https://github.com/apache/hbase/pull/1976#issuecomment-649965322


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |  12m  9s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ branch-1 Compile Tests _ |
   | +0 :ok: |  mvndep  |   2m 29s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   8m 25s |  branch-1 passed  |
   | +1 :green_heart: |  compile  |   1m 20s |  branch-1 passed with JDK 
v1.8.0_252  |
   | +1 :green_heart: |  compile  |   1m 32s |  branch-1 passed with JDK 
v1.7.0_262  |
   | +1 :green_heart: |  checkstyle  |   2m 58s |  branch-1 passed  |
   | +1 :green_heart: |  shadedjars  |   3m 17s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 17s |  branch-1 passed with JDK 
v1.8.0_252  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  branch-1 passed with JDK 
v1.7.0_262  |
   | +0 :ok: |  spotbugs  |   2m 51s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 35s |  branch-1 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 16s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  4s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 18s |  the patch passed with JDK 
v1.8.0_252  |
   | +1 :green_heart: |  javac  |   1m 18s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 32s |  the patch passed with JDK 
v1.7.0_262  |
   | +1 :green_heart: |  javac  |   1m 32s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  The patch passed checkstyle 
in hbase-common  |
   | +1 :green_heart: |  checkstyle  |   0m 38s |  hbase-client: The patch 
generated 0 new + 77 unchanged - 2 fixed = 77 total (was 79)  |
   | +1 :green_heart: |  checkstyle  |   1m 44s |  The patch passed checkstyle 
in hbase-server  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   3m  5s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |   4m 53s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2.  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  the patch passed with JDK 
v1.8.0_252  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  the patch passed with JDK 
v1.7.0_262  |
   | +1 :green_heart: |  findbugs  |   5m 59s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 37s |  hbase-common in the patch passed.  
|
   | +1 :green_heart: |  unit  |   2m 37s |  hbase-client in the patch passed.  
|
   | +1 :green_heart: |  unit  | 146m 10s |  hbase-server in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   1m  6s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 222m  5s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.12 Server=19.03.12 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1976/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1976 |
   | JIRA Issue | HBASE-24603 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 6ee779511641 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/Base-PreCommit-GitHub-PR_PR-1976/out/precommit/personality/provided.sh
 |
   | git revision | branch-1 / 54c38c8 |
   | Default Java | 1.7.0_262 |
   | Multi-JDK versions | /usr/lib/jvm/zulu-8-amd64:1.8.0_252 
/usr/lib/jvm/zulu-7-amd64:1.7.0_262 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1976/2/testReport/
 |
   | Max. process+thread count | 5089 (vs. ulimit of 1) |
   | modules | C: hbase-common hbase-client hbase-server U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1976/2/console |
   | versions | git=1.9.1 maven=3.0.5 findbugs=3.0.1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache G

[GitHub] [hbase] Apache-HBase commented on pull request #1970: HBASE-24625 AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length.

2020-06-25 Thread GitBox


Apache-HBase commented on pull request #1970:
URL: https://github.com/apache/hbase/pull/1970#issuecomment-649960349


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   1m 36s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 21s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m  9s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m 32s |  master passed  |
   | +1 :green_heart: |  spotbugs  |   2m 47s |  master passed  |
   | -0 :warning: |  patch  |   2m 26s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 13s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 48s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 10s |  hbase-asyncfs: The patch 
generated 2 new + 1 unchanged - 0 fixed = 3 total (was 1)  |
   | -0 :warning: |  checkstyle  |   1m 15s |  hbase-server: The patch 
generated 8 new + 38 unchanged - 0 fixed = 46 total (was 38)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  12m 36s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1.  |
   | +1 :green_heart: |  spotbugs  |   3m 37s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 24s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  41m 29s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.12 Server=19.03.12 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1970/3/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1970 |
   | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti 
checkstyle |
   | uname | Linux 699d6a87ab5d 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 84e246f9b1 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1970/3/artifact/yetus-general-check/output/diff-checkstyle-hbase-asyncfs.txt
 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1970/3/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt
 |
   | Max. process+thread count | 84 (vs. ulimit of 12500) |
   | modules | C: hbase-asyncfs hbase-server U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1970/3/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) 
spotbugs=3.1.12 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] comnetwork edited a comment on pull request #1970: HBASE-24625 AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length.

2020-06-25 Thread GitBox


comnetwork edited a comment on pull request #1970:
URL: https://github.com/apache/hbase/pull/1970#issuecomment-649947973


   
   
   > > > That is to say, AsyncFSWAL.getLogFileSizeIfBeingWritten does not reflect 
the file length that was successfully synced to the underlying HDFS, which is not 
as expected.
   > > 
   > > 
   > > Wasn't that intentional, as a means to properly track WAL files still open 
for write? For example, in the case of replication, it should go as far as any 
entry already appended, no? Ping @Apache9, who worked on this before, to give 
more thoughts.
   > 
   > This guy contacted me offline and I confirmed that this should be a problem.
   > 
   > What I can recall is that, when doing some bug fixes and improving the 
performance in AsyncFSWAL, I changed the way we calculate the length of the 
writer. Maybe I forgot the assumption in HBASE-14004 when making these changes, 
and that led to the problem.
   > 
   > So @comnetwork, please add more comments to explain why we need the 
getSyncedLength method in the WAL.Writer interface, so later people will not 
break it again.
   > 
   > Thanks.
   
   @Apache9, I already added comments for `WriterBase.getSyncedLength` like the 
following:
   
/**
 * NOTE: We add this method for {@link WALFileLengthProvider}, used for replication. Consider
 * the case where we use {@link AsyncFSWAL}: we write to 3 DNs concurrently and, according to
 * the visibility guarantee of HDFS, the data is available as soon as it arrives at a DN,
 * since each DN is treated as the last one in the pipeline. This means replication may read
 * uncommitted data and replicate it to the remote cluster, causing data inconsistency.
 * The method {@link WriterBase#getLength} may return a length that is still only in the HDFS
 * client buffer and not yet successfully synced to HDFS, so we use this method to return the
 * length that has been successfully synced to HDFS; the replication thread may only read the
 * WAL file being written up to this length.
 * See also HBASE-14004 and this document for more details:
 * https://docs.google.com/document/d/11AyWtGhItQs6vsLRIx32PwTxmBY3libXwGXI25obVEY/edit#
 */
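
   For illustration, a minimal sketch of the consumer side this comment describes, with a 
hypothetical stand-in for the real WALFileLengthProvider interface: while a WAL is still open 
for write, replication trusts only the length acknowledged by a sync, never the raw 
file-system length.

import java.util.OptionalLong;

final class CappedWalRead {
  interface LengthProvider {                   // hypothetical stand-in, not the real interface
    OptionalLong getLogFileSizeIfBeingWritten(String walPath);
  }

  // While the WAL is still being written, read no further than the synced length; once it is
  // closed the provider returns empty and the plain file-system length is safe to use.
  static long readableLength(LengthProvider provider, String walPath, long fileSystemLength) {
    return provider.getLogFileSizeIfBeingWritten(walPath).orElse(fileSystemLength);
  }
}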



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] comnetwork commented on pull request #1970: HBASE-24625 AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length.

2020-06-25 Thread GitBox


comnetwork commented on pull request #1970:
URL: https://github.com/apache/hbase/pull/1970#issuecomment-649947973


   
   
   
   
   > > > That is to say, AsyncFSWAL.getLogFileSizeIfBeingWritten does not reflect 
the file length that was successfully synced to the underlying HDFS, which is not 
as expected.
   > > 
   > > 
   > > Wasn't that intentional, as a means to properly track WAL files still open 
for write? For example, in the case of replication, it should go as far as any 
entry already appended, no? Ping @Apache9, who worked on this before, to give 
more thoughts.
   > 
   > This guy contacted me offline and I confirmed that this should be a problem.
   > 
   > What I can recall is that, when doing some bug fixes and improving the 
performance in AsyncFSWAL, I changed the way we calculate the length of the 
writer. Maybe I forgot the assumption in HBASE-14004 when making these changes, 
and that led to the problem.
   > 
   > So @comnetwork, please add more comments to explain why we need the 
getSyncedLength method in the WAL.Writer interface, so later people will not 
break it again.
   > 
   > Thanks.
   
   @Apache9, I already added comments for `WriterBase.getSyncedLength` like the 
following:
   
/**
 * NOTE: We add this method for {@link WALFileLengthProvider}, used for replication. Consider
 * the case where we use {@link AsyncFSWAL}: we write to 3 DNs concurrently and, according to
 * the visibility guarantee of HDFS, the data is available as soon as it arrives at a DN,
 * since each DN is treated as the last one in the pipeline. This means replication may read
 * uncommitted data and replicate it to the remote cluster, causing data inconsistency.
 * The method {@link WriterBase#getLength} may return a length that is still only in the HDFS
 * client buffer and not yet successfully synced to HDFS, so we use this method to return the
 * length that has been successfully synced to HDFS; the replication thread may only read the
 * WAL file being written up to this length.
 * See also HBASE-14004 and this document for more details:
 * https://docs.google.com/document/d/11AyWtGhItQs6vsLRIx32PwTxmBY3libXwGXI25obVEY/edit#
 */



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] comnetwork edited a comment on pull request #1970: HBASE-24625 AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length.

2020-06-25 Thread GitBox


comnetwork edited a comment on pull request #1970:
URL: https://github.com/apache/hbase/pull/1970#issuecomment-649947973


   
   
   
   > > > That is to say, AsyncFSWAL.getLogFileSizeIfBeingWritten does not reflect 
the file length that was successfully synced to the underlying HDFS, which is not 
as expected.
   > > 
   > > 
   > > Wasn't that intentional, as a means to properly track WAL files still open 
for write? For example, in the case of replication, it should go as far as any 
entry already appended, no? Ping @Apache9, who worked on this before, to give 
more thoughts.
   > 
   > This guy contacted me offline and I confirmed that this should be a problem.
   > 
   > What I can recall is that, when doing some bug fixes and improving the 
performance in AsyncFSWAL, I changed the way we calculate the length of the 
writer. Maybe I forgot the assumption in HBASE-14004 when making these changes, 
and that led to the problem.
   > 
   > So @comnetwork, please add more comments to explain why we need the 
getSyncedLength method in the WAL.Writer interface, so later people will not 
break it again.
   > 
   > Thanks.
   
   @Apache9, I already added comments for `WriterBase.getSyncedLength` like the 
following:
   
/**
 * NOTE: We add this method for {@link WALFileLengthProvider}, used for replication. Consider
 * the case where we use {@link AsyncFSWAL}: we write to 3 DNs concurrently and, according to
 * the visibility guarantee of HDFS, the data is available as soon as it arrives at a DN,
 * since each DN is treated as the last one in the pipeline. This means replication may read
 * uncommitted data and replicate it to the remote cluster, causing data inconsistency.
 * The method {@link WriterBase#getLength} may return a length that is still only in the HDFS
 * client buffer and not yet successfully synced to HDFS, so we use this method to return the
 * length that has been successfully synced to HDFS; the replication thread may only read the
 * WAL file being written up to this length.
 * See also HBASE-14004 and this document for more details:
 * https://docs.google.com/document/d/11AyWtGhItQs6vsLRIx32PwTxmBY3libXwGXI25obVEY/edit#
 */



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] comnetwork removed a comment on pull request #1970: HBASE-24625 AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length.

2020-06-25 Thread GitBox


comnetwork removed a comment on pull request #1970:
URL: https://github.com/apache/hbase/pull/1970#issuecomment-649947198


   @Apache9, I already added comments for `WriterBase.getSyncedLength` like the 
following:
   
/**
 * NOTE: We add this method for {@link WALFileLengthProvider}, used for replication. Consider
 * the case where we use {@link AsyncFSWAL}: we write to 3 DNs concurrently and, according to
 * the visibility guarantee of HDFS, the data is available as soon as it arrives at a DN,
 * since each DN is treated as the last one in the pipeline. This means replication may read
 * uncommitted data and replicate it to the remote cluster, causing data inconsistency.
 * The method {@link WriterBase#getLength} may return a length that is still only in the HDFS
 * client buffer and not yet successfully synced to HDFS, so we use this method to return the
 * length that has been successfully synced to HDFS; the replication thread may only read the
 * WAL file being written up to this length.
 * See also HBASE-14004 and this document for more details:
 * https://docs.google.com/document/d/11AyWtGhItQs6vsLRIx32PwTxmBY3libXwGXI25obVEY/edit#
 */



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] comnetwork commented on pull request #1970: HBASE-24625 AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length.

2020-06-25 Thread GitBox


comnetwork commented on pull request #1970:
URL: https://github.com/apache/hbase/pull/1970#issuecomment-649947198


   @Apache9, I already added comments for `WriterBase.getSyncedLength` like the 
following:
   
/**
 * NOTE: We add this method for {@link WALFileLengthProvider}, used for replication. Consider
 * the case where we use {@link AsyncFSWAL}: we write to 3 DNs concurrently and, according to
 * the visibility guarantee of HDFS, the data is available as soon as it arrives at a DN,
 * since each DN is treated as the last one in the pipeline. This means replication may read
 * uncommitted data and replicate it to the remote cluster, causing data inconsistency.
 * The method {@link WriterBase#getLength} may return a length that is still only in the HDFS
 * client buffer and not yet successfully synced to HDFS, so we use this method to return the
 * length that has been successfully synced to HDFS; the replication thread may only read the
 * WAL file being written up to this length.
 * See also HBASE-14004 and this document for more details:
 * https://docs.google.com/document/d/11AyWtGhItQs6vsLRIx32PwTxmBY3libXwGXI25obVEY/edit#
 */



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] comnetwork commented on a change in pull request #1970: HBASE-24625 AsyncFSWAL.getLogFileSizeIfBeingWritten does not return the expected synced file length.

2020-06-25 Thread GitBox


comnetwork commented on a change in pull request #1970:
URL: https://github.com/apache/hbase/pull/1970#discussion_r445943229



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/ProtobufLogWriter.java
##
@@ -46,6 +46,10 @@
 
   protected FSDataOutputStream output;
 
+  private volatile long syncedLength = 0;

Review comment:
   It seems that using AtomicLong is unnecessary, because AtomicLong could not 
provide `update if greater than` semantics, so I used the `synchronized` keyword 
here when updating syncedLength, for simplicity.
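
   A minimal, self-contained sketch of the update-only-if-greater idea described above (not 
the exact patch under review): consumers read a plain volatile field, while writers go 
through a synchronized method so a late-arriving smaller value can never overwrite an 
already-published larger synced length.

public class SyncedLengthTracker {
  private volatile long syncedLength = 0;

  /** Called after a successful sync; keeps the maximum value seen so far. */
  public synchronized void updateIfGreater(long candidate) {
    if (candidate > syncedLength) {
      syncedLength = candidate;
    }
  }

  /** Lock-free read for the length provider used by replication. */
  public long getSyncedLength() {
    return syncedLength;
  }
}

   For what it's worth, `AtomicLong.accumulateAndGet(candidate, Math::max)` (Java 8+) can 
express the same maximum-update without a monitor; the synchronized form above simply mirrors 
what the comment describes.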





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #1935: HBASE-22146 SpaceQuotaViolationPolicy Disable is not working in Names…

2020-06-25 Thread GitBox


Apache-HBase commented on pull request #1935:
URL: https://github.com/apache/hbase/pull/1935#issuecomment-649921243


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   0m 28s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 45s |  master passed  |
   | +1 :green_heart: |  compile  |   1m  8s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   6m 18s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 44s |  hbase-server in master failed.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 31s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  9s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  9s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m 16s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 40s |  hbase-server in the patch failed.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 197m 19s |  hbase-server in the patch passed.  
|
   |  |   | 225m  6s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=19.03.12 Server=19.03.12 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1935/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1935 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 98b826bb5e33 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 84e246f9b1 |
   | Default Java | 2020-01-14 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1935/2/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1935/2/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1935/2/testReport/
 |
   | Max. process+thread count | 3076 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1935/2/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-22504) Optimize the MultiByteBuff#get(ByteBuffer, offset, len)

2020-06-25 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-22504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17145975#comment-17145975
 ] 

Hudson commented on HBASE-22504:


Results for branch branch-2
[build #2719 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2719/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2719/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2719/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2719/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2719/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Optimize the MultiByteBuff#get(ByteBuffer, offset, len)
> ---
>
> Key: HBASE-22504
> URL: https://issues.apache.org/jira/browse/HBASE-22504
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0
>
> Attachments: HBASE-22504.HBASE-21879.v01.patch
>
>
> In HBASE-22483,  we saw that the BucketCacheWriter thread was quite busy 
> [^BucketCacheWriter-is-busy.png],  the flame graph also indicated that the 
> ByteBufferArray#internalTransfer cost ~6% CPU (see 
> [async-prof-pid-25042-cpu-1.svg|https://issues.apache.org/jira/secure/attachment/12970294/async-prof-pid-25042-cpu-1.svg]).
>   because we used the hbase.ipc.server.allocator.buffer.size=64KB, each 
> HFileBlock will be backed by a MultiByteBuff: one 64KB offheap ByteBuffer 
> and one small heap ByteBuffer.
> The path depends on MultiByteBuff#get(ByteBuffer, offset, len) now: 
> {code:java}
> RAMQueueEntry#writeToCache
> |--> ByteBufferIOEngine#write
> |--> ByteBufferArray#internalTransfer
> |--> ByteBufferArray$WRITER
> |--> MultiByteBuff#get(ByteBuffer, offset, len)
> {code}
> The MultiByteBuff#get impl is simple and crude now; we can optimize this 
> implementation:
> {code:java}
>   @Override
>   public void get(ByteBuffer out, int sourceOffset,
>   int length) {
> checkRefCount();
>   // Not used from real read path actually. So not going with
>   // optimization
> for (int i = 0; i < length; ++i) {
>   out.put(this.get(sourceOffset + i));
> }
>   }
> {code}
>  
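
For illustration, a hedged sketch of one possible bulk-copy approach; it is not the actual 
HBASE-22504 patch and it assumes, as a simplification of MultiByteBuff's real internals, that 
the multi-buffer is an array of backing ByteBuffers whose limits give each segment's length.

{code:java}
import java.nio.ByteBuffer;

final class MultiBufferCopy {
  static void get(ByteBuffer[] items, ByteBuffer out, int sourceOffset, int length) {
    int itemIndex = 0;
    // Skip whole segments until the one containing sourceOffset.
    while (sourceOffset >= items[itemIndex].limit()) {
      sourceOffset -= items[itemIndex].limit();
      itemIndex++;
    }
    while (length > 0) {
      ByteBuffer src = items[itemIndex].duplicate();  // never disturb the shared position/limit
      src.position(sourceOffset);
      int toCopy = Math.min(length, src.remaining());
      src.limit(sourceOffset + toCopy);
      out.put(src);                                   // one bulk copy per segment touched
      length -= toCopy;
      sourceOffset = 0;                               // later segments are read from offset 0
      itemIndex++;
    }
  }
}
{code}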



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-24600) Empty RegionAction added to MultiRequest in case of RowMutations/CheckAndMutate batch

2020-06-25 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17145973#comment-17145973
 ] 

Hudson commented on HBASE-24600:


Results for branch branch-2
[build #2719 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2719/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2719/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2719/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2719/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2719/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Empty RegionAction added to MultiRequest in case of 
> RowMutations/CheckAndMutate batch
> -
>
> Key: HBASE-24600
> URL: https://issues.apache.org/jira/browse/HBASE-24600
> Project: HBase
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0, 2.1.10, 2.2.6
>
>
> When a client sends RowMutations/CheckAndMutate batch requests, no Action 
> objects are added to the *builder* (RegionAction.Builder), so an empty 
> RegionAction is added to the MultiRequest at the following line:
> https://github.com/apache/hbase/blob/3c319811799cb4c1f51fb5b43dd4743acd28052c/hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/RequestConverter.java#L593
> We need to check whether the *builder* has any Action objects here.
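A minimal sketch of that guard, assuming the protobuf-generated RegionAction.Builder exposes getActionCount() for its repeated action field and that the enclosing MultiRequest builder is named multiRequestBuilder (both names are assumptions, not verified against that line of RequestConverter):

{code:java}
// Sketch only: skip empty RegionActions. getActionCount() and
// multiRequestBuilder are assumed names for this illustration.
if (builder.getActionCount() > 0) {
  multiRequestBuilder.addRegionAction(builder.build());
}
{code}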



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-24616) Remove BoundedRecoveredHFilesOutputSink dependency on a TableDescriptor

2020-06-25 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17145976#comment-17145976
 ] 

Hudson commented on HBASE-24616:


Results for branch branch-2
[build #2719 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2719/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2719/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2719/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2719/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2719/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Remove BoundedRecoveredHFilesOutputSink  dependency on a TableDescriptor
> 
>
> Key: HBASE-24616
> URL: https://issues.apache.org/jira/browse/HBASE-24616
> Project: HBase
>  Issue Type: Bug
>  Components: HFile, MTTR
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0
>
>
> BoundedRecoveredHFilesOutputSink wants to read the TableDescriptor so it can 
> write hfiles in the particular format specified by a table's schema. Getting 
> the table schema can be tough at various points of operation, especially around 
> startup. HBASE-23739 tried to read from the fs if unable to read the 
> TableDescriptor from the Master. This approach generally works but fails in 
> standalone mode, where we will have given up our startup attempt BEFORE the 
> request to the Master for the TableDescriptor times out (so the read from the 
> fs is never attempted).
> The suggested patch here does away w/ reading the TableDescriptor and just has 
> BoundedRecoveredHFilesOutputSink write generic hfiles.
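As a rough illustration of what "generic hfiles" means here: the sink can build its writer from a default HFileContext instead of one derived from the table's column-family schema. A minimal sketch, assuming the default HFileContextBuilder settings are acceptable for recovered edits (this is not the committed change, just the idea):

{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.io.hfile.CacheConfig;
import org.apache.hadoop.hbase.io.hfile.HFile;
import org.apache.hadoop.hbase.io.hfile.HFileContext;
import org.apache.hadoop.hbase.io.hfile.HFileContextBuilder;

public final class GenericHFileWriterSketch {
  // Sketch only: build an hfile writer with default (generic) settings,
  // with no TableDescriptor lookup involved.
  static HFile.Writer createGenericWriter(Configuration conf, FileSystem fs, Path path)
      throws IOException {
    HFileContext context = new HFileContextBuilder().build(); // defaults, not the table's schema
    return HFile.getWriterFactory(conf, new CacheConfig(conf))
        .withPath(fs, path)
        .withFileContext(context)
        .create();
  }
}
{code}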



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-24631) Loosen Dockerfile pinned package versions of the "debian-revision"

2020-06-25 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17145974#comment-17145974
 ] 

Hudson commented on HBASE-24631:


Results for branch branch-2
[build #2719 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2719/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2719/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2719/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2719/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2719/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Loosen Dockerfile pinned package versions of the "debian-revision"
> --
>
> Key: HBASE-24631
> URL: https://issues.apache.org/jira/browse/HBASE-24631
> Project: HBase
>  Issue Type: Task
>  Components: build
>Affects Versions: 3.0.0-alpha-1
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0
>
>
> Portions of PR jobs have started failing. From the log, our pinned version of 
> curl is no longer available in the Ubuntu package servers.
> {noformat}
> [2020-06-24T21:41:59.524Z] 
> 
> [2020-06-24T21:41:59.524Z] 
> 
> [2020-06-24T21:41:59.524Z]Docker Image Creation
> [2020-06-24T21:41:59.524Z] 
> 
> [2020-06-24T21:41:59.524Z] 
> 
> [2020-06-24T21:41:59.524Z] 
> [2020-06-24T21:41:59.524Z] 
> [2020-06-24T21:41:59.524Z] Sending build context to Docker daemon  18.99kB
> [2020-06-24T21:41:59.524Z] Step 1/60 : FROM ubuntu:18.04 AS BASE_IMAGE
> [2020-06-24T21:42:00.233Z] 18.04: Pulling from library/ubuntu
> [2020-06-24T21:42:00.943Z] Digest: 
> sha256:86510528ab9cd7b64209cbbe6946e094a6d10c6db21def64a93ebdd20011de1d
> [2020-06-24T21:42:00.943Z] Status: Downloaded newer image for ubuntu:18.04
> [2020-06-24T21:42:00.943Z]  ---> 8e4ce0a6ce69
> [2020-06-24T21:42:00.943Z] Step 2/60 : SHELL ["/bin/bash", "-o", "pipefail", 
> "-c"]
> [2020-06-24T21:42:00.943Z]  ---> Using cache
> [2020-06-24T21:42:00.943Z]  ---> 9170e78be248
> [2020-06-24T21:42:00.943Z] Step 3/60 : RUN DEBIAN_FRONTEND=noninteractive 
> apt-get -qq update &&   DEBIAN_FRONTEND=noninteractive apt-get -qq install 
> --no-install-recommends -y ca-certificates=20180409 
> curl=7.58.0-2ubuntu3.8 locales=2.27-3ubuntu1 bash=4.4.18-2ubuntu1.2   
>   build-essential=12.4ubuntu1 diffutils=1:3.6-1 
> git=1:2.17.1-1ubuntu0.7 rsync=3.1.2-2.1ubuntu1 tar=1.29b-2ubuntu0.1   
>   wget=1.19.4-1ubuntu2.2 bats=0.4.0-1.1 libperl-critic-perl=1.130-1   
>   python3=3.6.7-1~18.04 python3-pip=9.0.1-2.3~ubuntu1.18.04.1 
> python3-setuptools=39.0.1-2 ruby=1:2.5.1 ruby-dev=1:2.5.1 
> shellcheck=0.4.6-1 && apt-get clean && rm -rf /var/lib/apt/lists/*
> [2020-06-24T21:42:02.413Z]  ---> Running in 87d1f25abbc4
> [2020-06-24T21:42:11.760Z] E: Version '7.58.0-2ubuntu3.8' for 'curl' was not 
> found
> [2020-06-24T21:42:12.471Z] The command '/bin/bash -o pipefail -c 
> DEBIAN_FRONTEND=noninteractive apt-get -qq update &&   
> DEBIAN_FRONTEND=noninteractive apt-get -qq install --no-install-recommends -y 
> ca-certificates=20180409 curl=7.58.0-2ubuntu3.8 
> locales=2.27-3ubuntu1 bash=4.4.18-2ubuntu1.2 
> build-essential=12.4ubuntu1 diffutils=1:3.6-1 git=1:2.17.1-1ubuntu0.7 
> rsync=3.1.2-2.1ubuntu1 tar=1.29b-2ubuntu0.1 
> wget=1.19.4-1ubuntu2.2 bats=0.4.0-1.1 libperl-critic-perl=1.130-1 
> python3=3.6.7-1~18.04 python3-pip=9.0.1-2.3~ubuntu1.18.04.1 
> python3-setuptools=39.0.1-2 ruby=1:2.5.1 ruby-dev=1:2.5.1 
> shellcheck=0.4.6-1 && apt-get clean && rm -rf 
> /var/lib/apt/lists/*' returned a non-zero code: 100
> [2020-06-24T21:4

[jira] [Commented] (HBASE-24630) Purge dev javadoc from client bin tarball

2020-06-25 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17145972#comment-17145972
 ] 

Hudson commented on HBASE-24630:


Results for branch branch-2
[build #2719 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2719/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2719/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2719/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2719/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2719/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Purge dev javadoc from client bin tarball
> -
>
> Key: HBASE-24630
> URL: https://issues.apache.org/jira/browse/HBASE-24630
> Project: HBase
>  Issue Type: Sub-task
>  Components: build
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0, 2.1.10, 2.2.6
>
>
> For 2.0, the decision was made to exclude the bulky "developer" API docs from 
> the binary artifacts, via HBASE-20149. This change needs to be applied to the 
> client tarball as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on pull request #1977: HBASE-24221 addendum to restore public interface on LoadIncrementalHFiles

2020-06-25 Thread GitBox


Apache-HBase commented on pull request #1977:
URL: https://github.com/apache/hbase/pull/1977#issuecomment-649910706


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 13s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  5s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 49s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   1m 14s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   6m 31s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 45s |  hbase-server in branch-2 failed.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 36s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 12s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 12s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m 32s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 42s |  hbase-server in the patch failed.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 205m  8s |  hbase-server in the patch passed.  
|
   |  |   | 234m 44s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.12 Server=19.03.12 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1977/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1977 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux c0d164d430a5 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / e6639f9d4e |
   | Default Java | 2020-01-14 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1977/1/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1977/1/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1977/1/testReport/
 |
   | Max. process+thread count | 2428 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1977/1/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #1977: HBASE-24221 addendum to restore public interface on LoadIncrementalHFiles

2020-06-25 Thread GitBox


Apache-HBase commented on pull request #1977:
URL: https://github.com/apache/hbase/pull/1977#issuecomment-649908257


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 15s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  7s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 58s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   0m 57s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   5m 28s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 35s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 59s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 59s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m 24s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 202m  2s |  hbase-server in the patch passed.  
|
   |  |   | 226m 45s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.12 Server=19.03.12 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1977/1/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1977 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 69714d8d40b4 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / e6639f9d4e |
   | Default Java | 1.8.0_232 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1977/1/testReport/
 |
   | Max. process+thread count | 2369 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1977/1/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (HBASE-24603) Zookeeper sync() call is async

2020-06-25 Thread Bharath Vissapragada (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharath Vissapragada updated HBASE-24603:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Zookeeper sync() call is async
> --
>
> Key: HBASE-24603
> URL: https://issues.apache.org/jira/browse/HBASE-24603
> Project: HBase
>  Issue Type: Improvement
>  Components: master, regionserver
>Affects Versions: 3.0.0-alpha-1, 2.3.0, 1.7.0
>Reporter: Bharath Vissapragada
>Assignee: Bharath Vissapragada
>Priority: Critical
> Fix For: 3.0.0-alpha-1, 2.3.1, 1.7.0, 2.4.0
>
>
> Here is the method that does a sync() of lagging followers with the leader in 
> the quorum. We rely on this to see a consistent snapshot of ZK data from 
> multiple clients. However, the problem is that the underlying sync() call is 
> actually asynchronous since we are passing a 'null' callback. See the ZK API 
> [doc|https://zookeeper.apache.org/doc/r3.5.7/apidocs/zookeeper-server/index.html]
>  for details. The end result is that sync() doesn't guarantee that it has 
> happened by the time it returns.
> {noformat}
>   /**
>* Forces a synchronization of this ZooKeeper client connection.
>* 
>* Executing this method before running other methods will ensure that the
>* subsequent operations are up-to-date and consistent as of the time that
>* the sync is complete.
>* 
>* This is used for compareAndSwap type operations where we need to read the
>* data of an existing node and delete or transition that node, utilizing 
> the
>* previously read version and data.  We want to ensure that the version 
> read
>* is up-to-date from when we begin the operation.
>*/
>   public void sync(String path) throws KeeperException {
> this.recoverableZooKeeper.sync(path, null, null);
>   }
> {noformat}
> We rely on this heavily (at least in the older branches that do ZK based 
> region assignment). In branch-1 we saw weird "BadVersionException" exceptions 
> in RITs because of the inconsistent view of the ZK snapshot. It could 
> manifest differently in other branches. Either way, this is something we need 
> to fix.
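For reference, one way to make the call block, sketched against the plain ZooKeeper client API. The class and method names below are illustrative only; they are not the shape of the actual patch, which lives in the HBase ZK wrapper classes.

{code:java}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;

public final class BlockingSyncSketch {
  // Sketch only: turn the async sync() into a blocking call by waiting on the
  // VoidCallback#processResult(int rc, String path, Object ctx) callback.
  static void syncBlocking(ZooKeeper zk, String path)
      throws KeeperException, InterruptedException {
    final CountDownLatch latch = new CountDownLatch(1);
    final AtomicInteger rcHolder = new AtomicInteger();
    zk.sync(path, (rc, p, ctx) -> {
      rcHolder.set(rc);
      latch.countDown();
    }, null);
    latch.await(); // a production version would use a bounded wait
    KeeperException.Code code = KeeperException.Code.get(rcHolder.get());
    if (code != KeeperException.Code.OK) {
      throw KeeperException.create(code, path);
    }
  }
}
{code}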



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-24603) Zookeeper sync() call is async

2020-06-25 Thread Bharath Vissapragada (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharath Vissapragada updated HBASE-24603:
-
Release Note: 


Fixes a couple of bugs in ZooKeeper interaction. First, the zk sync() call that 
is used to sync lagging followers with the leader (so that the client sees a 
consistent snapshot of ZK state) was actually asynchronous under the hood; we 
make it synchronous for correctness. Second, ZooKeeper events are now processed 
in a separate thread rather than in the thread context of the ZooKeeper client 
connection. This decoupling frees up the client connection quickly and avoids 
deadlocks.
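The second part of the note could be pictured roughly as below; the executor field and the processEvent() method name are illustrative, not the exact shape of the committed change.

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;

public abstract class DecoupledWatcherSketch implements Watcher {
  // Sketch only: hand events to a dedicated thread so the ZooKeeper client
  // thread returns immediately instead of doing the real work inline.
  private final ExecutorService zkEventProcessor = Executors.newSingleThreadExecutor();

  @Override
  public void process(final WatchedEvent event) {
    zkEventProcessor.submit(() -> processEvent(event)); // real work happens off the ZK thread
  }

  // The actual event handling, e.g. dispatching to registered listeners.
  protected abstract void processEvent(WatchedEvent event);
}
{code}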

> Zookeeper sync() call is async
> --
>
> Key: HBASE-24603
> URL: https://issues.apache.org/jira/browse/HBASE-24603
> Project: HBase
>  Issue Type: Improvement
>  Components: master, regionserver
>Affects Versions: 3.0.0-alpha-1, 2.3.0, 1.7.0
>Reporter: Bharath Vissapragada
>Assignee: Bharath Vissapragada
>Priority: Critical
> Fix For: 3.0.0-alpha-1, 2.3.1, 1.7.0, 2.4.0
>
>
> Here is the method that does a sync() of lagging followers with the leader in 
> the quorum. We rely on this to see a consistent snapshot of ZK data from 
> multiple clients. However, the problem is that the underlying sync() call is 
> actually asynchronous since we are passing a 'null' callback. See the ZK API 
> [doc|https://zookeeper.apache.org/doc/r3.5.7/apidocs/zookeeper-server/index.html]
>  for details. The end result is that sync() doesn't guarantee that it has 
> happened by the time it returns.
> {noformat}
>   /**
>* Forces a synchronization of this ZooKeeper client connection.
>* 
>* Executing this method before running other methods will ensure that the
>* subsequent operations are up-to-date and consistent as of the time that
>* the sync is complete.
>* 
>* This is used for compareAndSwap type operations where we need to read the
>* data of an existing node and delete or transition that node, utilizing 
> the
>* previously read version and data.  We want to ensure that the version 
> read
>* is up-to-date from when we begin the operation.
>*/
>   public void sync(String path) throws KeeperException {
> this.recoverableZooKeeper.sync(path, null, null);
>   }
> {noformat}
> We rely on this heavily (at least in the older branches that do ZK based 
> region assignment). In branch-1 we saw weird "BadVersionException" exceptions 
> in RITs because of the inconsistent view of the ZK snapshot. It could 
> manifest differently in other branches. Either way, this is something we need 
> to fix.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (HBASE-24637) Filter SKIP hinting regression

2020-06-25 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17145964#comment-17145964
 ] 

Andrew Kyle Purtell edited comment on HBASE-24637 at 6/26/20, 1:47 AM:
---

If you'd like to play around with the instrumentation, see the attached patches 
for branch-1 and branch-2.2, respectively. After applying them, add this to 
log4j.properties:

{noformat}
log4j.logger.org.apache.hadoop.hbase.ipc.CallRunner=TRACE
{noformat}

Here's an example output from a scan of {{hbase:meta}}:

{noformat}
2020-06-24 14:45:13,870 TRACE 
[RpcServer.priority.FPBQ.Fifo.handler=19,queue=1,port=8120] 
ipc.CallRunner: callId: 0 service: ClientService methodName: Scan size: 102
connection: 10.55.111.78:54551 deadline: 1593035713822 successful: true
metrics: [ "block_read_keys": 477 "block_read_ns": 3427040
 "block_reads": 13 "block_seek_ns": 1606370 "block_seeks": 169
 "block_unpack_ns": 10256 "block_unpacks": 13 "cells_matched": 165
 "cells_matched__hbase:meta,,1.1588230740__info": 165
 "column_hint_include": 148 "memstore_next": 72
 "memstore_next_ns": 136671 "memstore_seek": 2
 "memstore_seek_ns": 631629 "reseeks": 36 "sqm_hint_done": 17
 "sqm_hint_include": 74 "sqm_hint_seek_next_col": 74
 "store_next": 276
 "store_next__1c930a35ff8041368a05817adbdcce97": 40
 "store_next__2644194fdf794815abdc940c183dab88": 40
 "store_next__32ce31753fb244668f788fb94ab02dff": 40
 "store_next__61c8423b9d8846c99a61cd2996b5b621": 116
 "store_next__f4f7878c9fcf40d9902416d5c7a4097a": 40
 "store_next_ns": 1891634
 "store_next_ns__1c930a35ff8041368a05817adbdcce97": 269383
 "store_next_ns__2644194fdf794815abdc940c183dab88": 299936
 "store_next_ns__32ce31753fb244668f788fb94ab02dff": 288594
 "store_next_ns__61c8423b9d8846c99a61cd2996b5b621": 594313
 "store_next_ns__f4f7878c9fcf40d9902416d5c7a4097a": 439408
 "store_reseek": 164
 "store_reseek__1c930a35ff8041368a05817adbdcce97": 32
 "store_reseek__2644194fdf794815abdc940c183dab88": 32
 "store_reseek__32ce31753fb244668f788fb94ab02dff": 32
 "store_reseek__61c8423b9d8846c99a61cd2996b5b621": 36
 "store_reseek__f4f7878c9fcf40d9902416d5c7a4097a": 32
 "store_reseek_ns": 2969978
 "store_reseek_ns__1c930a35ff8041368a05817adbdcce97": 359489
 "store_reseek_ns__2644194fdf794815abdc940c183dab88": 595115
 "store_reseek_ns__32ce31753fb244668f788fb94ab02dff": 474642
 "store_reseek_ns__61c8423b9d8846c99a61cd2996b5b621": 1013188
 "store_reseek_ns__f4f7878c9fcf40d9902416d5c7a4097a": 527544
 "store_seek": 5
 "store_seek__1c930a35ff8041368a05817adbdcce97": 1
 "store_seek__2644194fdf794815abdc940c183dab88": 1
 "store_seek__32ce31753fb244668f788fb94ab02dff": 1
 "store_seek__61c8423b9d8846c99a61cd2996b5b621": 1
 "store_seek__f4f7878c9fcf40d9902416d5c7a4097a": 1
 "store_seek_ns": 8862786
 "store_seek_ns__1c930a35ff8041368a05817adbdcce97": 830421
 "store_seek_ns__2644194fdf794815abdc940c183dab88": 585899
 "store_seek_ns__32ce31753fb244668f788fb94ab02dff": 483605
 "store_seek_ns__61c8423b9d8846c99a61cd2996b5b621": 5958072
 "store_seek_ns__f4f7878c9fcf40d9902416d5c7a4097a": 1004789
 "versions_hint_include": 74 "versions_hint_seek_next_col": 74 ]
{noformat}

You can use the attached perl script to aggregate all CallRunner trace logging 
for a given table. Here's an example from the PE --filterAll case over the 100 
column table on HBase 2.2:

{noformat}
COUNT: 72
block_read_ns 16022572976
block_reads 1157963
block_unpack_ns 40930352
block_unpacks 1157963
cached_block_read_ns 191684215
cached_block_reads 1157330
cells_matched 101000
cells_matched__TestTable_f1_c100,,1592866205366.0315e8ccd0024d0460970325194853e1.__info0
 5459454
cells_matched__TestTable_f1_c100,054054,1592866610801.d6f9b09463fc61ff59fcb3daed655f94.__info0
 39100332
cells_matched__TestTable_f1_c100,441186,1592867432331.17af74d64f57ef963aadacbf0ae617db.__info0
 101404202
cells_matched__TestTable_f1_c100,0001445188,1592867920908.92ef6284a81f29d96dd4cb764f6590f3.__info0
 130169002
cells_matched__TestTable_f1_c100,0002733990,1592868498045.7541760abc640c3714dabe8d78fd7a4f.__info0
 124239494
cells_matched__TestTable_f1_c100,0003964084,1592869106036.da6e844c39f943a6d3ed41addf728972.__info0
 139113562
cells_matched__TestTable_f1_c100,0005341446,1592869725318.39a71397e112f7edb7e9bff443ac8913.__info0
 147105288
cells_matched__TestTable_f1_c100,0006797934,1592869725318.8e4d45733bc5e46e8b345d7dd91bd2f0.__info0
 323408666
column_hint_include 10
filter_hint_skip 10
reseeks 1163490
seeker_next 10
seeker_next_ns 92121300195
sqm_hint_done 992
sqm_hint_seek_next_col 99008
sqm_hint_seek_next_row 1000
store_next 998836518
store_next__03573d0d16474c73a315c5d2a1f25986 145479362
store_next__0d570a284b274c7686ebbaacc1786110 128730258
store_next__622d9f19372d478aadb7112dad739106 137576008
sto

[jira] [Commented] (HBASE-24637) Filter SKIP hinting regression

2020-06-25 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17145964#comment-17145964
 ] 

Andrew Kyle Purtell commented on HBASE-24637:
-

If you'd like to play around with the instrumentation, see the attached patches 
for branch-1 and branch-2.2, respectively. After applying them, add this to 
log4j.properties:

{noformat}
log4j.logger.org.apache.hadoop.hbase.ipc.CallRunner=TRACE
{noformat}

Here's an example output from a scan of {{hbase:meta}}:

{noformat}
2020-06-24 14:45:13,870 TRACE 
[RpcServer.priority.FPBQ.Fifo.handler=19,queue=1,port=8120] ipc.CallRunner: 
callId: 0 service: ClientService methodName: Scan size: 102 connection: 
10.55.111.78:54551 deadline: 1593035713822 successful: true metrics: [ 
"block_read_keys": 477 "block_read_ns": 3427040 "block_reads": 13 
"block_seek_ns": 1606370 "block_seeks": 169 "block_unpack_ns": 10256 
"block_unpacks": 13 "cells_matched": 165 
"cells_matched__hbase:meta,,1.1588230740__info": 165 "column_hint_include": 148 
"memstore_next": 72 "memstore_next_ns": 136671 "memstore_seek": 2 
"memstore_seek_ns": 631629 "reseeks": 36 "sqm_hint_done": 17 
"sqm_hint_include": 74 "sqm_hint_seek_next_col": 74 "store_next": 276 
"store_next__1c930a35ff8041368a05817adbdcce97": 40 
"store_next__2644194fdf794815abdc940c183dab88": 40 
"store_next__32ce31753fb244668f788fb94ab02dff": 40 
"store_next__61c8423b9d8846c99a61cd2996b5b621": 116 
"store_next__f4f7878c9fcf40d9902416d5c7a4097a": 40 "store_next_ns": 1891634 
"store_next_ns__1c930a35ff8041368a05817adbdcce97": 269383 
"store_next_ns__2644194fdf794815abdc940c183dab88": 299936 
"store_next_ns__32ce31753fb244668f788fb94ab02dff": 288594 
"store_next_ns__61c8423b9d8846c99a61cd2996b5b621": 594313 
"store_next_ns__f4f7878c9fcf40d9902416d5c7a4097a": 439408 "store_reseek": 164 
"store_reseek__1c930a35ff8041368a05817adbdcce97": 32 
"store_reseek__2644194fdf794815abdc940c183dab88": 32 
"store_reseek__32ce31753fb244668f788fb94ab02dff": 32 
"store_reseek__61c8423b9d8846c99a61cd2996b5b621": 36 
"store_reseek__f4f7878c9fcf40d9902416d5c7a4097a": 32 "store_reseek_ns": 2969978 
"store_reseek_ns__1c930a35ff8041368a05817adbdcce97": 359489 
"store_reseek_ns__2644194fdf794815abdc940c183dab88": 595115 
"store_reseek_ns__32ce31753fb244668f788fb94ab02dff": 474642 
"store_reseek_ns__61c8423b9d8846c99a61cd2996b5b621": 1013188 
"store_reseek_ns__f4f7878c9fcf40d9902416d5c7a4097a": 527544 "store_seek": 5 
"store_seek__1c930a35ff8041368a05817adbdcce97": 1 
"store_seek__2644194fdf794815abdc940c183dab88": 1 
"store_seek__32ce31753fb244668f788fb94ab02dff": 1 
"store_seek__61c8423b9d8846c99a61cd2996b5b621": 1 
"store_seek__f4f7878c9fcf40d9902416d5c7a4097a": 1 "store_seek_ns": 8862786 
"store_seek_ns__1c930a35ff8041368a05817adbdcce97": 830421 
"store_seek_ns__2644194fdf794815abdc940c183dab88": 585899 
"store_seek_ns__32ce31753fb244668f788fb94ab02dff": 483605 
"store_seek_ns__61c8423b9d8846c99a61cd2996b5b621": 5958072 
"store_seek_ns__f4f7878c9fcf40d9902416d5c7a4097a": 1004789 
"versions_hint_include": 74 "versions_hint_seek_next_col": 74 ]
{noformat}

You can use the attached perl script to aggregate all CallRunner trace logging 
for a given table. Here's an example from the PE --filterAll case over the 100 
column table on HBase 2.2:

{noformat}
COUNT: 72
block_read_ns 16022572976
block_reads 1157963
block_unpack_ns 40930352
block_unpacks 1157963
cached_block_read_ns 191684215
cached_block_reads 1157330
cells_matched 101000
cells_matched__TestTable_f1_c100,,1592866205366.0315e8ccd0024d0460970325194853e1.__info0
 5459454
cells_matched__TestTable_f1_c100,054054,1592866610801.d6f9b09463fc61ff59fcb3daed655f94.__info0
 39100332
cells_matched__TestTable_f1_c100,441186,1592867432331.17af74d64f57ef963aadacbf0ae617db.__info0
 101404202
cells_matched__TestTable_f1_c100,0001445188,1592867920908.92ef6284a81f29d96dd4cb764f6590f3.__info0
 130169002
cells_matched__TestTable_f1_c100,0002733990,1592868498045.7541760abc640c3714dabe8d78fd7a4f.__info0
 124239494
cells_matched__TestTable_f1_c100,0003964084,1592869106036.da6e844c39f943a6d3ed41addf728972.__info0
 139113562
cells_matched__TestTable_f1_c100,0005341446,1592869725318.39a71397e112f7edb7e9bff443ac8913.__info0
 147105288
cells_matched__TestTable_f1_c100,0006797934,1592869725318.8e4d45733bc5e46e8b345d7dd91bd2f0.__info0
 323408666
column_hint_include 10
filter_hint_skip 10
reseeks 1163490
seeker_next 10
seeker_next_ns 92121300195
sqm_hint_done 992
sqm_hint_seek_next_col 99008
sqm_hint_seek_next_row 1000
store_next 998836518
store_next__03573d0d16474c73a315c5d2a1f25986 145479362
store_next__0d570a284b274c7686ebbaacc1786110 128730258
store_next__622d9f19372d478aadb7112dad739106 137576008
store_next__83be76bc01a74b9e9831c743e25b73b8 319835506
stor

[jira] [Updated] (HBASE-24637) Filter SKIP hinting regression

2020-06-25 Thread Andrew Kyle Purtell (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Kyle Purtell updated HBASE-24637:

Attachment: parse_call_trace.pl

> Filter SKIP hinting regression
> --
>
> Key: HBASE-24637
> URL: https://issues.apache.org/jira/browse/HBASE-24637
> Project: HBase
>  Issue Type: Bug
>  Components: Filters, Performance, Scanners
>Affects Versions: 2.2.5
>Reporter: Andrew Kyle Purtell
>Priority: Major
> Attachments: W-7665966-FAST_DIFF-FILTER_ALL.pdf, 
> W-7665966-Instrument-low-level-scan-details-branch-1.patch, 
> W-7665966-Instrument-low-level-scan-details-branch-2.2.patch, 
> parse_call_trace.pl
>
>
> I have been looking into reported performance regressions in HBase 2 relative 
> to HBase 1. Depending on the test scenario, HBase 2 can demonstrate 
> significantly better microbenchmarks in a number of cases, and usually shows 
> improvement in whole cluster benchmarks like YCSB.
> To assist in debugging I added methods to RpcServer for updating per-call 
> metrics that leverage the fact it puts a reference to the current Call into a 
> thread local and that all activity for a given RPC is processed by a single 
> thread context. I then instrumented ScanQueryMatcher (in branch-1) and its 
> various friends (in branch-2.2), StoreScanner, HFileReaderV2 and 
> HFileReaderV3 (in branch-1) and HFileReaderImpl (in branch-2.2), HFileBlock, 
> and DefaultMemStore (branch-1) and SegmentScanner (branch-2.2). Test tables 
> with one family and 1, 5, 10, 20, 50, and 100 distinct column-qualifiers per 
> row were created, snapshot, dropped, and cloned from the snapshot. Both 1.6 
> and 2.2 versions under test operated on identical data files in HDFS. For 
> tests with 1.6 and 2.2 on the server side the same 1.6 PE client was used, to 
> ensure only the server side differed.
> The results for pe --filterAll were revealing. See attached. 
> It appears a refactor to ScanQueryMatcher and friends has disabled the 
> ability of filters to provide meaningful SKIP hints, which disables an 
> optimization that avoids reseeking, leading to a serious and proportional 
> regression in reseek activity and time spent in that code path. So for 
> queries that use filters, there can be a substantial regression.
> Other test cases that did not use filters did not show this regression. If 
> filters are not used the behavior of ScanQueryMatcher between 1.6 and 2.2 was 
> almost identical, as measured by counts of the hint types returned, whether 
> or not column or version trackers are called, and counts of store seeks or 
> reseeks. Regarding micro-timings, there was a 10% variance in my testing and 
> results generally fell within this range, except for the filter all case of 
> course. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-24637) Filter SKIP hinting regression

2020-06-25 Thread Andrew Kyle Purtell (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Kyle Purtell updated HBASE-24637:

Attachment: W-7665966-FAST_DIFF-FILTER_ALL.pdf

> Filter SKIP hinting regression
> --
>
> Key: HBASE-24637
> URL: https://issues.apache.org/jira/browse/HBASE-24637
> Project: HBase
>  Issue Type: Bug
>  Components: Filters, Performance, Scanners
>Affects Versions: 2.2.5
>Reporter: Andrew Kyle Purtell
>Priority: Major
> Attachments: W-7665966-FAST_DIFF-FILTER_ALL.pdf, 
> W-7665966-Instrument-low-level-scan-details-branch-1.patch, 
> W-7665966-Instrument-low-level-scan-details-branch-2.2.patch
>
>
> I have been looking into reported performance regressions in HBase 2 relative 
> to HBase 1. Depending on the test scenario, HBase 2 can demonstrate 
> significantly better microbenchmarks in a number of cases, and usually shows 
> improvement in whole cluster benchmarks like YCSB.
> To assist in debugging I added methods to RpcServer for updating per-call 
> metrics that leverage the fact it puts a reference to the current Call into a 
> thread local and that all activity for a given RPC is processed by a single 
> thread context. I then instrumented ScanQueryMatcher (in branch-1) and its 
> various friends (in branch-2.2), StoreScanner, HFileReaderV2 and 
> HFileReaderV3 (in branch-1) and HFileReaderImpl (in branch-2.2), HFileBlock, 
> and DefaultMemStore (branch-1) and SegmentScanner (branch-2.2). Test tables 
> with one family and 1, 5, 10, 20, 50, and 100 distinct column-qualifiers per 
> row were created, snapshot, dropped, and cloned from the snapshot. Both 1.6 
> and 2.2 versions under test operated on identical data files in HDFS. For 
> tests with 1.6 and 2.2 on the server side the same 1.6 PE client was used, to 
> ensure only the server side differed.
> The results for pe --filterAll were revealing. See attached. 
> It appears a refactor to ScanQueryMatcher and friends has disabled the 
> ability of filters to provide meaningful SKIP hints, which disables an 
> optimization that avoids reseeking, leading to a serious and proportional 
> regression in reseek activity and time spent in that code path. So for 
> queries that use filters, there can be a substantial regression.
> Other test cases that did not use filters did not show this regression. If 
> filters are not used the behavior of ScanQueryMatcher between 1.6 and 2.2 was 
> almost identical, as measured by counts of the hint types returned, whether 
> or not column or version trackers are called, and counts of store seeks or 
> reseeks. Regarding micro-timings, there was a 10% variance in my testing and 
> results generally fell within this range, except for the filter all case of 
> course. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-24637) Filter SKIP hinting regression

2020-06-25 Thread Andrew Kyle Purtell (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Kyle Purtell updated HBASE-24637:

Attachment: W-7665966-Instrument-low-level-scan-details-branch-1.patch

> Filter SKIP hinting regression
> --
>
> Key: HBASE-24637
> URL: https://issues.apache.org/jira/browse/HBASE-24637
> Project: HBase
>  Issue Type: Bug
>  Components: Filters, Performance, Scanners
>Affects Versions: 2.2.5
>Reporter: Andrew Kyle Purtell
>Priority: Major
> Attachments: W-7665966-FAST_DIFF-FILTER_ALL.pdf, 
> W-7665966-Instrument-low-level-scan-details-branch-1.patch, 
> W-7665966-Instrument-low-level-scan-details-branch-2.2.patch
>
>
> I have been looking into reported performance regressions in HBase 2 relative 
> to HBase 1. Depending on the test scenario, HBase 2 can demonstrate 
> significantly better microbenchmarks in a number of cases, and usually shows 
> improvement in whole cluster benchmarks like YCSB.
> To assist in debugging I added methods to RpcServer for updating per-call 
> metrics that leverage the fact it puts a reference to the current Call into a 
> thread local and that all activity for a given RPC is processed by a single 
> thread context. I then instrumented ScanQueryMatcher (in branch-1) and its 
> various friends (in branch-2.2), StoreScanner, HFileReaderV2 and 
> HFileReaderV3 (in branch-1) and HFileReaderImpl (in branch-2.2), HFileBlock, 
> and DefaultMemStore (branch-1) and SegmentScanner (branch-2.2). Test tables 
> with one family and 1, 5, 10, 20, 50, and 100 distinct column-qualifiers per 
> row were created, snapshot, dropped, and cloned from the snapshot. Both 1.6 
> and 2.2 versions under test operated on identical data files in HDFS. For 
> tests with 1.6 and 2.2 on the server side the same 1.6 PE client was used, to 
> ensure only the server side differed.
> The results for pe --filterAll were revealing. See attached. 
> It appears a refactor to ScanQueryMatcher and friends has disabled the 
> ability of filters to provide meaningful SKIP hints, which disables an 
> optimization that avoids reseeking, leading to a serious and proportional 
> regression in reseek activity and time spent in that code path. So for 
> queries that use filters, there can be a substantial regression.
> Other test cases that did not use filters did not show this regression. If 
> filters are not used the behavior of ScanQueryMatcher between 1.6 and 2.2 was 
> almost identical, as measured by counts of the hint types returned, whether 
> or not column or version trackers are called, and counts of store seeks or 
> reseeks. Regarding micro-timings, there was a 10% variance in my testing and 
> results generally fell within this range, except for the filter all case of 
> course. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-24637) Filter SKIP hinting regression

2020-06-25 Thread Andrew Kyle Purtell (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Kyle Purtell updated HBASE-24637:

Attachment: W-7665966-Instrument-low-level-scan-details-branch-2.2.patch

> Filter SKIP hinting regression
> --
>
> Key: HBASE-24637
> URL: https://issues.apache.org/jira/browse/HBASE-24637
> Project: HBase
>  Issue Type: Bug
>  Components: Filters, Performance, Scanners
>Affects Versions: 2.2.5
>Reporter: Andrew Kyle Purtell
>Priority: Major
> Attachments: W-7665966-FAST_DIFF-FILTER_ALL.pdf, 
> W-7665966-Instrument-low-level-scan-details-branch-1.patch, 
> W-7665966-Instrument-low-level-scan-details-branch-2.2.patch
>
>
> I have been looking into reported performance regressions in HBase 2 relative 
> to HBase 1. Depending on the test scenario, HBase 2 can demonstrate 
> significantly better microbenchmarks in a number of cases, and usually shows 
> improvement in whole cluster benchmarks like YCSB.
> To assist in debugging I added methods to RpcServer for updating per-call 
> metrics that leverage the fact it puts a reference to the current Call into a 
> thread local and that all activity for a given RPC is processed by a single 
> thread context. I then instrumented ScanQueryMatcher (in branch-1) and its 
> various friends (in branch-2.2), StoreScanner, HFileReaderV2 and 
> HFileReaderV3 (in branch-1) and HFileReaderImpl (in branch-2.2), HFileBlock, 
> and DefaultMemStore (branch-1) and SegmentScanner (branch-2.2). Test tables 
> with one family and 1, 5, 10, 20, 50, and 100 distinct column-qualifiers per 
> row were created, snapshot, dropped, and cloned from the snapshot. Both 1.6 
> and 2.2 versions under test operated on identical data files in HDFS. For 
> tests with 1.6 and 2.2 on the server side the same 1.6 PE client was used, to 
> ensure only the server side differed.
> The results for pe --filterAll were revealing. See attached. 
> It appears a refactor to ScanQueryMatcher and friends has disabled the 
> ability of filters to provide meaningful SKIP hints, which disables an 
> optimization that avoids reseeking, leading to a serious and proportional 
> regression in reseek activity and time spent in that code path. So for 
> queries that use filters, there can be a substantial regression.
> Other test cases that did not use filters did not show this regression. If 
> filters are not used the behavior of ScanQueryMatcher between 1.6 and 2.2 was 
> almost identical, as measured by counts of the hint types returned, whether 
> or not column or version trackers are called, and counts of store seeks or 
> reseeks. Regarding micro-timings, there was a 10% variance in my testing and 
> results generally fell within this range, except for the filter all case of 
> course. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-24637) Filter SKIP hinting regression

2020-06-25 Thread Andrew Kyle Purtell (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Kyle Purtell updated HBASE-24637:

Affects Version/s: 2.2.5

> Filter SKIP hinting regression
> --
>
> Key: HBASE-24637
> URL: https://issues.apache.org/jira/browse/HBASE-24637
> Project: HBase
>  Issue Type: Bug
>  Components: Filters, Performance, Scanners
>Affects Versions: 2.2.5
>Reporter: Andrew Kyle Purtell
>Priority: Major
>
> I have been looking into reported performance regressions in HBase 2 relative 
> to HBase 1. Depending on the test scenario, HBase 2 can demonstrate 
> significantly better microbenchmarks in a number of cases, and usually shows 
> improvement in whole cluster benchmarks like YCSB.
> To assist in debugging I added methods to RpcServer for updating per-call 
> metrics that leverage the fact it puts a reference to the current Call into a 
> thread local and that all activity for a given RPC is processed by a single 
> thread context. I then instrumented ScanQueryMatcher (in branch-1) and its 
> various friends (in branch-2.2), StoreScanner, HFileReaderV2 and 
> HFileReaderV3 (in branch-1) and HFileReaderImpl (in branch-2.2), HFileBlock, 
> and DefaultMemStore (branch-1) and SegmentScanner (branch-2.2). Test tables 
> with one family and 1, 5, 10, 20, 50, and 100 distinct column-qualifiers per 
> row were created, snapshot, dropped, and cloned from the snapshot. Both 1.6 
> and 2.2 versions under test operated on identical data files in HDFS. For 
> tests with 1.6 and 2.2 on the server side the same 1.6 PE client was used, to 
> ensure only the server side differed.
> The results for pe --filterAll were revealing. See attached. 
> It appears a refactor to ScanQueryMatcher and friends has disabled the 
> ability of filters to provide meaningful SKIP hints, which disables an 
> optimization that avoids reseeking, leading to a serious and proportional 
> regression in reseek activity and time spent in that code path. So for 
> queries that use filters, there can be a substantial regression.
> Other test cases that did not use filters did not show this regression. If 
> filters are not used the behavior of ScanQueryMatcher between 1.6 and 2.2 was 
> almost identical, as measured by counts of the hint types returned, whether 
> or not column or version trackers are called, and counts of store seeks or 
> reseeks. Regarding micro-timings, there was a 10% variance in my testing and 
> results generally fell within this range, except for the filter all case of 
> course. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on pull request #1935: HBASE-22146 SpaceQuotaViolationPolicy Disable is not working in Names…

2020-06-25 Thread GitBox


Apache-HBase commented on pull request #1935:
URL: https://github.com/apache/hbase/pull/1935#issuecomment-649900916


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 31s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 47s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 53s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m 32s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 26s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 54s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 54s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m 37s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 135m 45s |  hbase-server in the patch passed.  
|
   |  |   | 159m 52s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.12 Server=19.03.12 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1935/2/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1935 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 85f5c21e3ae1 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 84e246f9b1 |
   | Default Java | 1.8.0_232 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1935/2/testReport/
 |
   | Max. process+thread count | 4643 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1935/2/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (HBASE-24637) Filter SKIP hinting regression

2020-06-25 Thread Andrew Kyle Purtell (Jira)
Andrew Kyle Purtell created HBASE-24637:
---

 Summary: Filter SKIP hinting regression
 Key: HBASE-24637
 URL: https://issues.apache.org/jira/browse/HBASE-24637
 Project: HBase
  Issue Type: Bug
  Components: Filters, Performance, Scanners
Reporter: Andrew Kyle Purtell


I have been looking into reported performance regressions in HBase 2 relative 
to HBase 1. Depending on the test scenario, HBase 2 can demonstrate 
significantly better microbenchmarks in a number of cases, and usually shows 
improvement in whole cluster benchmarks like YCSB.

To assist in debugging I added methods to RpcServer for updating per-call 
metrics that leverage the fact it puts a reference to the current Call into a 
thread local and that all activity for a given RPC is processed by a single 
thread context. I then instrumented ScanQueryMatcher (in branch-1) and its 
various friends (in branch-2.2), StoreScanner, HFileReaderV2 and HFileReaderV3 
(in branch-1) and HFileReaderImpl (in branch-2.2), HFileBlock, and 
DefaultMemStore (branch-1) and SegmentScanner (branch-2.2). Test tables with 
one family and 1, 5, 10, 20, 50, and 100 distinct column-qualifiers per row 
were created, snapshotted, dropped, and cloned from the snapshot. Both the 1.6 
and 2.2 versions under test operated on identical data files in HDFS. For tests 
with 1.6 and 2.2 on the server side, the same 1.6 PE client was used, to ensure 
only the server side differed.

The results for pe --filterAll were revealing. See attached. 

It appears a refactor to ScanQueryMatcher and friends has disabled the ability 
of filters to provide meaningful SKIP hints, which disables an optimization 
that avoids reseeking, leading to a serious and proportional regression in 
reseek activity and time spent in that code path. So for queries that use 
filters, there can be a substantial regression.

Other test cases that did not use filters did not show this regression. If 
filters are not used, the behavior of ScanQueryMatcher between 1.6 and 2.2 was 
almost identical, as measured by counts of the hint types returned, whether or 
not the column or version trackers are called, and counts of store seeks or 
reseeks. Regarding micro-timings, there was a 10% variance in my testing and 
results generally fell within this range, except, of course, for the filter-all 
case. 
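For anyone reproducing this outside PE, the hinting path in question is a filter answering SKIP from its per-cell hook (filterCell in 2.x, filterKeyValue in branch-1). A minimal sketch of such a filter follows; note that a real filter also needs protobuf serialization (toByteArray/parseFrom) to be usable server-side, which is omitted here.

{code:java}
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.filter.FilterBase;

// Sketch only: a filter that always answers SKIP, so the scan should advance
// cell by cell rather than (re)seek. Serialization hooks are omitted.
public class SkipAllFilterSketch extends FilterBase {
  @Override
  public ReturnCode filterCell(Cell c) {
    return ReturnCode.SKIP;
  }
}
{code}

Whether such SKIP answers are honored, or end up translated into seek hints, is exactly what the sqm_hint_* counters in the instrumentation above are meant to show.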



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] bharathv merged pull request #1976: HBASE-24603: Make Zookeeper sync() call synchronous (#1945)

2020-06-25 Thread GitBox


bharathv merged pull request #1976:
URL: https://github.com/apache/hbase/pull/1976


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #1976: HBASE-24603: Make Zookeeper sync() call synchronous (#1945)

2020-06-25 Thread GitBox


Apache-HBase commented on pull request #1976:
URL: https://github.com/apache/hbase/pull/1976#issuecomment-649894137


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 11s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ branch-1 Compile Tests _ |
   | +0 :ok: |  mvndep  |   2m 20s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   7m 58s |  branch-1 passed  |
   | +1 :green_heart: |  compile  |   1m 22s |  branch-1 passed with JDK 
v1.8.0_252  |
   | +1 :green_heart: |  compile  |   1m 33s |  branch-1 passed with JDK 
v1.7.0_262  |
   | +1 :green_heart: |  checkstyle  |   2m 56s |  branch-1 passed  |
   | +1 :green_heart: |  shadedjars  |   3m 17s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 17s |  branch-1 passed with JDK 
v1.8.0_252  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  branch-1 passed with JDK 
v1.7.0_262  |
   | +0 :ok: |  spotbugs  |   2m 52s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 37s |  branch-1 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 16s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  4s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 18s |  the patch passed with JDK 
v1.8.0_252  |
   | +1 :green_heart: |  javac  |   1m 18s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 33s |  the patch passed with JDK 
v1.7.0_262  |
   | +1 :green_heart: |  javac  |   1m 33s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  The patch passed checkstyle 
in hbase-common  |
   | +1 :green_heart: |  checkstyle  |   0m 37s |  hbase-client: The patch 
generated 0 new + 77 unchanged - 2 fixed = 77 total (was 79)  |
   | +1 :green_heart: |  checkstyle  |   1m 43s |  The patch passed checkstyle 
in hbase-server  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedjars  |   3m  5s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  hadoopcheck  |   4m 56s |  Patch does not cause any 
errors with Hadoop 2.8.5 2.9.2.  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  the patch passed with JDK 
v1.8.0_252  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  the patch passed with JDK 
v1.7.0_262  |
   | -1 :x: |  findbugs  |   1m 39s |  hbase-client generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 42s |  hbase-common in the patch passed.  
|
   | +1 :green_heart: |  unit  |   2m 50s |  hbase-client in the patch passed.  
|
   | -1 :x: |  unit  | 167m 24s |  hbase-server in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m 21s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 232m 29s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hbase-client |
   |  |  Format-string method String.format(String, Object[]) called with 
format string "Invalid event of type {} received for path {}. Ignoring" wants 0 
arguments but is given 2 in 
org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.processEvent(WatchedEvent)  
At ZooKeeperWatcher.java:with format string "Invalid event of type {} received 
for path {}. Ignoring" wants 0 arguments but is given 2 in 
org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.processEvent(WatchedEvent)  
At ZooKeeperWatcher.java:[line 666] |
   | Failed junit tests | 
hadoop.hbase.mapreduce.TestSecureLoadIncrementalHFiles |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.12 Server=19.03.12 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1976/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1976 |
   | JIRA Issue | HBASE-24603 |
   | Optional Tests | dupname asflicense javac javadoc unit spotbugs findbugs 
shadedjars hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 966ee6e0d634 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 
/home/jenkins/jenkins-slave/workspace/Base-PreCommit-GitHub-PR_PR-1976/out/precommit/personality/provided.sh
 |
   | git revision | branch-1 / 54c38c8 |
   | Default Java | 1.7.0_262 |
 

[jira] [Commented] (HBASE-24603) Zookeeper sync() call is async

2020-06-25 Thread HBase QA (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17145957#comment-17145957
 ] 

HBase QA commented on HBASE-24603:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} branch-1 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
58s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
22s{color} | {color:green} branch-1 passed with JDK v1.8.0_252 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
33s{color} | {color:green} branch-1 passed with JDK v1.7.0_262 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
56s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
17s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
17s{color} | {color:green} branch-1 passed with JDK v1.8.0_252 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
31s{color} | {color:green} branch-1 passed with JDK v1.7.0_262 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
52s{color} | {color:blue} Used deprecated FindBugs config; considering 
switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
37s{color} | {color:green} branch-1 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed with JDK v1.8.0_252 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed with JDK v1.7.0_262 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} The patch passed checkstyle in hbase-common {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} hbase-client: The patch generated 0 new + 77 
unchanged - 2 fixed = 77 total (was 79) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
43s{color} | {color:green} The patch passed checkstyle in hbase-server {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
 5s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
4m 56s{color} | {color:green} Patch does not cause any errors with Hadoop 2.8.5 
2.9.2. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed with JDK v1.8.0_252 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed with JDK v1.7.0_262 {color} |
| {color:red

[GitHub] [hbase] Apache-HBase commented on pull request #1975: HBASE-24603: Make Zookeeper sync() call synchronous (#1945)

2020-06-25 Thread GitBox


Apache-HBase commented on pull request #1975:
URL: https://github.com/apache/hbase/pull/1975#issuecomment-649883607


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 22s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  5s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 16s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m 54s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   2m  0s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   6m 32s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 17s |  hbase-common in branch-2 failed.  |
   | -0 :warning: |  javadoc  |   0m 41s |  hbase-server in branch-2 failed.  |
   | -0 :warning: |  javadoc  |   0m 15s |  hbase-zookeeper in branch-2 failed. 
 |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 18s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m 27s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 53s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 53s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m 39s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 16s |  hbase-common in the patch failed.  |
   | -0 :warning: |  javadoc  |   0m 16s |  hbase-zookeeper in the patch 
failed.  |
   | -0 :warning: |  javadoc  |   0m 42s |  hbase-server in the patch failed.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 45s |  hbase-common in the patch passed.  
|
   | +1 :green_heart: |  unit  |   0m 43s |  hbase-zookeeper in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 194m 26s |  hbase-server in the patch passed.  
|
   |  |   | 229m 44s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.12 Server=19.03.12 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1975/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1975 |
   | JIRA Issue | HBASE-24603 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux a0e212efae01 4.15.0-91-generic #92-Ubuntu SMP Fri Feb 28 
11:09:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / e6639f9d4e |
   | Default Java | 2020-01-14 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1975/1/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-common.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1975/1/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1975/1/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-zookeeper.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1975/1/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-common.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1975/1/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-zookeeper.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1975/1/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1975/1/testReport/
 |
   | Max. process+thread count | 2873 (vs. ulimit of 12500) |
   | modules | C: hbase-common hbase-zookeeper hbase-server U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1975/1/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-24631) Loosen Dockerfile pinned package versions of the "debian-revision"

2020-06-25 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17145938#comment-17145938
 ] 

Hudson commented on HBASE-24631:


Results for branch branch-2.3
[build #156 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/156/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/156/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/156/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/156/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/156/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Loosen Dockerfile pinned package versions of the "debian-revision"
> --
>
> Key: HBASE-24631
> URL: https://issues.apache.org/jira/browse/HBASE-24631
> Project: HBase
>  Issue Type: Task
>  Components: build
>Affects Versions: 3.0.0-alpha-1
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0
>
>
> Portions of the PR jobs have started failing. From the log, our pinned version 
> of curl is no longer available on the Ubuntu package servers.
> {noformat}
> [2020-06-24T21:41:59.524Z] 
> 
> [2020-06-24T21:41:59.524Z] 
> 
> [2020-06-24T21:41:59.524Z]Docker Image Creation
> [2020-06-24T21:41:59.524Z] 
> 
> [2020-06-24T21:41:59.524Z] 
> 
> [2020-06-24T21:41:59.524Z] 
> [2020-06-24T21:41:59.524Z] 
> [2020-06-24T21:41:59.524Z] Sending build context to Docker daemon  18.99kB
> [2020-06-24T21:41:59.524Z] Step 1/60 : FROM ubuntu:18.04 AS BASE_IMAGE
> [2020-06-24T21:42:00.233Z] 18.04: Pulling from library/ubuntu
> [2020-06-24T21:42:00.943Z] Digest: 
> sha256:86510528ab9cd7b64209cbbe6946e094a6d10c6db21def64a93ebdd20011de1d
> [2020-06-24T21:42:00.943Z] Status: Downloaded newer image for ubuntu:18.04
> [2020-06-24T21:42:00.943Z]  ---> 8e4ce0a6ce69
> [2020-06-24T21:42:00.943Z] Step 2/60 : SHELL ["/bin/bash", "-o", "pipefail", 
> "-c"]
> [2020-06-24T21:42:00.943Z]  ---> Using cache
> [2020-06-24T21:42:00.943Z]  ---> 9170e78be248
> [2020-06-24T21:42:00.943Z] Step 3/60 : RUN DEBIAN_FRONTEND=noninteractive 
> apt-get -qq update &&   DEBIAN_FRONTEND=noninteractive apt-get -qq install 
> --no-install-recommends -y ca-certificates=20180409 
> curl=7.58.0-2ubuntu3.8 locales=2.27-3ubuntu1 bash=4.4.18-2ubuntu1.2   
>   build-essential=12.4ubuntu1 diffutils=1:3.6-1 
> git=1:2.17.1-1ubuntu0.7 rsync=3.1.2-2.1ubuntu1 tar=1.29b-2ubuntu0.1   
>   wget=1.19.4-1ubuntu2.2 bats=0.4.0-1.1 libperl-critic-perl=1.130-1   
>   python3=3.6.7-1~18.04 python3-pip=9.0.1-2.3~ubuntu1.18.04.1 
> python3-setuptools=39.0.1-2 ruby=1:2.5.1 ruby-dev=1:2.5.1 
> shellcheck=0.4.6-1 && apt-get clean && rm -rf /var/lib/apt/lists/*
> [2020-06-24T21:42:02.413Z]  ---> Running in 87d1f25abbc4
> [2020-06-24T21:42:11.760Z] E: Version '7.58.0-2ubuntu3.8' for 'curl' was not 
> found
> [2020-06-24T21:42:12.471Z] The command '/bin/bash -o pipefail -c 
> DEBIAN_FRONTEND=noninteractive apt-get -qq update &&   
> DEBIAN_FRONTEND=noninteractive apt-get -qq install --no-install-recommends -y 
> ca-certificates=20180409 curl=7.58.0-2ubuntu3.8 
> locales=2.27-3ubuntu1 bash=4.4.18-2ubuntu1.2 
> build-essential=12.4ubuntu1 diffutils=1:3.6-1 git=1:2.17.1-1ubuntu0.7 
> rsync=3.1.2-2.1ubuntu1 tar=1.29b-2ubuntu0.1 
> wget=1.19.4-1ubuntu2.2 bats=0.4.0-1.1 libperl-critic-perl=1.130-1 
> python3=3.6.7-1~18.04 python3-pip=9.0.1-2.3~ubuntu1.18.04.1 
> python3-setuptools=39.0.1-2 ruby=1:2.5.1 ruby-dev=1:2.5.1 
> shellcheck=0.4.6-1 && apt-get clean && rm -rf 
> /var/lib/apt/lists/*' returned a non-zero code: 100
> [2020-06-2

[jira] [Commented] (HBASE-24630) Purge dev javadoc from client bin tarball

2020-06-25 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17145936#comment-17145936
 ] 

Hudson commented on HBASE-24630:


Results for branch branch-2.3
[build #156 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/156/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/156/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/156/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/156/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/156/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Purge dev javadoc from client bin tarball
> -
>
> Key: HBASE-24630
> URL: https://issues.apache.org/jira/browse/HBASE-24630
> Project: HBase
>  Issue Type: Sub-task
>  Components: build
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0, 2.1.10, 2.2.6
>
>
> For 2.0, the decision was made to exclude the bulky "developer" API docs from 
> the binary artifacts, via HBASE-20149. This change needs to be applied to the 
> client tarball as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-24600) Empty RegionAction added to MultiRequest in case of RowMutations/CheckAndMutate batch

2020-06-25 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17145937#comment-17145937
 ] 

Hudson commented on HBASE-24600:


Results for branch branch-2.3
[build #156 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/156/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/156/General_20Nightly_20Build_20Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/156/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/156/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.3/156/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Empty RegionAction added to MultiRequest in case of 
> RowMutations/CheckAndMutate batch
> -
>
> Key: HBASE-24600
> URL: https://issues.apache.org/jira/browse/HBASE-24600
> Project: HBase
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0, 2.1.10, 2.2.6
>
>
> When a client sends RowMutations/CheckAndMutate batch requests, no Action 
> objects are added to the *builder* (RegionAction.Builder), so an empty 
> RegionAction is added to the MultiRequest at the following line:
> https://github.com/apache/hbase/blob/3c319811799cb4c1f51fb5b43dd4743acd28052c/hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/RequestConverter.java#L593
> We need to check whether the *builder* has any Action objects here.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-24630) Purge dev javadoc from client bin tarball

2020-06-25 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17145933#comment-17145933
 ] 

Hudson commented on HBASE-24630:


Results for branch branch-2.2
[build #902 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/902/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/902//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/902//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/902//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Purge dev javadoc from client bin tarball
> -
>
> Key: HBASE-24630
> URL: https://issues.apache.org/jira/browse/HBASE-24630
> Project: HBase
>  Issue Type: Sub-task
>  Components: build
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0, 2.1.10, 2.2.6
>
>
> For 2.0, the decision was made to exclude the bulky "developer" API docs from 
> the binary artifacts, via HBASE-20149. This change needs to be applied to the 
> client tarball as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-24600) Empty RegionAction added to MultiRequest in case of RowMutations/CheckAndMutate batch

2020-06-25 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17145934#comment-17145934
 ] 

Hudson commented on HBASE-24600:


Results for branch branch-2.2
[build #902 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/902/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/902//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/902//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/902//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Empty RegionAction added to MultiRequest in case of 
> RowMutations/CheckAndMutate batch
> -
>
> Key: HBASE-24600
> URL: https://issues.apache.org/jira/browse/HBASE-24600
> Project: HBase
>  Issue Type: Bug
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0, 2.1.10, 2.2.6
>
>
> When a client sends RowMutations/CheckAndMutate batch requests, no Action 
> objects are added to the *builder* (RegionAction.Builder), so an empty 
> RegionAction is added to the MultiRequest at the following line:
> https://github.com/apache/hbase/blob/3c319811799cb4c1f51fb5b43dd4743acd28052c/hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/RequestConverter.java#L593
> We need to check whether the *builder* has any Action objects here.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on pull request #1935: HBASE-22146 SpaceQuotaViolationPolicy Disable is not working in Names…

2020-06-25 Thread GitBox


Apache-HBase commented on pull request #1935:
URL: https://github.com/apache/hbase/pull/1935#issuecomment-649870516


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  1s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 58s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m 26s |  master passed  |
   | +1 :green_heart: |  spotbugs  |   2m 32s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 18s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   1m 23s |  hbase-server: The patch 
generated 5 new + 6 unchanged - 0 fixed = 11 total (was 6)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  14m 52s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1.  |
   | +1 :green_heart: |  spotbugs  |   2m 54s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 16s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  41m 43s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.12 Server=19.03.12 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1935/2/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1935 |
   | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti 
checkstyle |
   | uname | Linux 1df75ec5df14 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 84e246f9b1 |
   | checkstyle | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1935/2/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt
 |
   | Max. process+thread count | 95 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1935/2/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) 
spotbugs=3.1.12 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] bharathv merged pull request #1975: HBASE-24603: Make Zookeeper sync() call synchronous (#1945)

2020-06-25 Thread GitBox


bharathv merged pull request #1975:
URL: https://github.com/apache/hbase/pull/1975


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #1975: HBASE-24603: Make Zookeeper sync() call synchronous (#1945)

2020-06-25 Thread GitBox


Apache-HBase commented on pull request #1975:
URL: https://github.com/apache/hbase/pull/1975#issuecomment-649867229


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 38s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  6s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 57s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   1m 36s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   5m  3s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 15s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 16s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 21s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 40s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 40s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m 22s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 12s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 24s |  hbase-common in the patch passed.  
|
   | +1 :green_heart: |  unit  |   0m 44s |  hbase-zookeeper in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 139m 26s |  hbase-server in the patch passed.  
|
   |  |   | 168m 36s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.12 Server=19.03.12 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1975/1/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1975 |
   | JIRA Issue | HBASE-24603 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 8aa055ceea81 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / e6639f9d4e |
   | Default Java | 1.8.0_232 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1975/1/testReport/
 |
   | Max. process+thread count | 3767 (vs. ulimit of 12500) |
   | modules | C: hbase-common hbase-zookeeper hbase-server U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1975/1/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #1977: HBASE-24221 addendum to restore public interface on LoadIncrementalHFiles

2020-06-25 Thread GitBox


Apache-HBase commented on pull request #1977:
URL: https://github.com/apache/hbase/pull/1977#issuecomment-649858463


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 48s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ branch-2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 10s |  branch-2 passed  |
   | +1 :green_heart: |  checkstyle  |   1m 19s |  branch-2 passed  |
   | +1 :green_heart: |  spotbugs  |   2m 19s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 36s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m 14s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  13m  4s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1.  |
   | +1 :green_heart: |  spotbugs  |   2m 45s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 16s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  37m 58s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.12 Server=19.03.12 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1977/1/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1977 |
   | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti 
checkstyle |
   | uname | Linux f9d3ae39d3eb 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / e6639f9d4e |
   | Max. process+thread count | 95 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1977/1/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) 
spotbugs=3.1.12 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] busbey commented on a change in pull request #1959: HBASE-20819 Use TableDescriptor to replace HTableDescriptor in hbase-shell module

2020-06-25 Thread GitBox


busbey commented on a change in pull request #1959:
URL: https://github.com/apache/hbase/pull/1959#discussion_r445878539



##
File path: hbase-shell/src/main/ruby/hbase_constants.rb
##
@@ -109,8 +109,8 @@ def self.promote_constants(constants)
 end
   end
 
-  promote_constants(org.apache.hadoop.hbase.HColumnDescriptor.constants)
-  promote_constants(org.apache.hadoop.hbase.HTableDescriptor.constants)
+  
promote_constants(org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder.constants)

Review comment:
   Hurm. Some of those still look likely to be useful (e.g. 
MOB_THRESHOLD_BYTES). Can we add them in without bringing back references to 
the classes we're removing?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-11288) Splittable Meta

2020-06-25 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-11288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17145897#comment-17145897
 ] 

Hudson commented on HBASE-11288:


Results for branch HBASE-11288.splittable-meta
[build #9 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-11288.splittable-meta/9/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-11288.splittable-meta/9/General_20Nightly_20Build_20Report/]






(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-11288.splittable-meta/9/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-11288.splittable-meta/9/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Splittable Meta
> ---
>
> Key: HBASE-11288
> URL: https://issues.apache.org/jira/browse/HBASE-11288
> Project: HBase
>  Issue Type: Umbrella
>  Components: meta
>Reporter: Francis Christopher Liu
>Assignee: Francis Christopher Liu
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] ndimiduk commented on pull request #1977: HBASE-24221 addendum to restore public interface on LoadIncrementalHFiles

2020-06-25 Thread GitBox


ndimiduk commented on pull request #1977:
URL: https://github.com/apache/hbase/pull/1977#issuecomment-649842942


   FYI @nyl3532016 @wchevreuil



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] ndimiduk opened a new pull request #1977: HBASE-24221 addendum to restore public interface on LoadIncrementalHFiles

2020-06-25 Thread GitBox


ndimiduk opened a new pull request #1977:
URL: https://github.com/apache/hbase/pull/1977


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-21782) LoadIncrementalHFiles should not be IA.Public

2020-06-25 Thread Nick Dimiduk (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-21782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17145885#comment-17145885
 ] 

Nick Dimiduk commented on HBASE-21782:
--

[~zhangduo] the deprecation comments from this change on 
{{o.a.h.h.tool.LoadIncrementalHFiles}} say "deprecated since 2.2.0, removal in 
3.0.0", which does not meet our guideline re: deprecation for an entire major 
version. I believe we cannot drop this class until 4.0.
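
A minimal, hypothetical sketch of the pattern that guideline implies (made-up names, not the actual LoadIncrementalHFiles code): the deprecated public method stays through all of 3.x as a delegating shim and is only removed in 4.0.

{code:java}
// Hypothetical class and method names, for illustration only.
public class LegacyTool {
  /**
   * @deprecated since 2.2.0, to be removed in 4.0.0 (kept through 3.x so users
   *             get a full major version); use {@link #newEntryPoint(String)}.
   */
  @Deprecated
  public boolean oldEntryPoint(String arg) {
    // Delegate so behavior stays identical while the old signature survives.
    return newEntryPoint(arg);
  }

  public boolean newEntryPoint(String arg) {
    return arg != null && !arg.isEmpty();
  }
}
{code}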

> LoadIncrementalHFiles should not be IA.Public
> -
>
> Key: HBASE-21782
> URL: https://issues.apache.org/jira/browse/HBASE-21782
> Project: HBase
>  Issue Type: Task
>  Components: mapreduce
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
>  Labels: bulkload
> Fix For: 3.0.0-alpha-1, 2.2.0
>
> Attachments: HBASE-21782-v1.patch, HBASE-21782.patch
>
>
> It is an implementation class, so some of the methods that are only supposed 
> to be used by the replication sink are also public to users. It also exposes 
> methods that take Table and Connection as parameters, while inside the 
> implementation we assume they are HTable and ConnectionImplementation, 
> which will be a pain when we want to replace the sync client implementation 
> with the async client.
> Here I think we should mark the implementation class as 
> IA.LimitPrivate(TOOL) and introduce an interface for bulk loading hfiles 
> programmatically.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (HBASE-24221) Support bulkLoadHFile by family

2020-06-25 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk reopened HBASE-24221:
--

Reopening to address the compatibility change to 
LoadIncrementalHFiles.tryAtomicRegionLoad.

> Support bulkLoadHFile by family
> ---
>
> Key: HBASE-24221
> URL: https://issues.apache.org/jira/browse/HBASE-24221
> Project: HBase
>  Issue Type: Improvement
>  Components: HFile
>Affects Versions: 3.0.0-alpha-1, 2.3.0, 2.2.4
>Reporter: niuyulin
>Assignee: niuyulin
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0, 2.4.0, 2.2.5
>
>
> Support bulkLoadHFile by family to avoid long waits in bulkLoadHFile 
> caused by compaction on the server side



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (HBASE-9756) HBase shell help info would be better to display only when usage error instead of any exception

2020-06-25 Thread Elliot Miller (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-9756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliot Miller resolved HBASE-9756.
--
Resolution: Duplicate

> HBase shell help info would be better to display only when usage error 
> instead of any exception
> ---
>
> Key: HBASE-9756
> URL: https://issues.apache.org/jira/browse/HBASE-9756
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Reporter: Yang Wang
>Priority: Minor
> Attachments: HBASE-9756.patch
>
>
> When an error occurs in the HBase shell, no matter what the error is, the help 
> info is displayed. Since the help info is meant to explain how to use the 
> command, it would be better to show it only when the command is used in the 
> wrong way.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-9756) HBase shell help info would be better to display only when usage error instead of any exception

2020-06-25 Thread Elliot Miller (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-9756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17145860#comment-17145860
 ] 

Elliot Miller commented on HBASE-9756:
--

A different patch (HBASE-20270) was merged in 2018 to address this issue. 
*HBASE-20270 removes the command help regardless of what error occurs* and 
instead suggests that a user can get more information with {{help "command"}}. 
I think the change in HBASE-20270 is more beneficial than this patch since the 
help text for individual commands like "alter" (~75 lines) and "scan" (~87 
lines) is long enough to push the relevant error off the screen, which can be 
confusing to the user/operator.

I'm going to close this for now.

> HBase shell help info would be better to display only when usage error 
> instead of any exception
> ---
>
> Key: HBASE-9756
> URL: https://issues.apache.org/jira/browse/HBASE-9756
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Reporter: Yang Wang
>Priority: Minor
> Attachments: HBASE-9756.patch
>
>
> When an error occurs in the HBase shell, no matter what the error is, the help 
> info is displayed. Since the help info is meant to explain how to use the 
> command, it would be better to show it only when the command is used in the 
> wrong way.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on pull request #1737: HBASE-24382 Flush partial stores of region filtered by seqId when arc…

2020-06-25 Thread GitBox


Apache-HBase commented on pull request #1737:
URL: https://github.com/apache/hbase/pull/1737#issuecomment-649824525


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  7s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 43s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 12s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   6m 28s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 44s |  hbase-server in master failed.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 57s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 15s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 15s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m 37s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 44s |  hbase-server in the patch failed.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 221m 56s |  hbase-server in the patch passed.  
|
   |  |   | 251m 39s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.12 Server=19.03.12 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1737/7/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1737 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 77ca2801b755 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 1378776a91 |
   | Default Java | 2020-01-14 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1737/7/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1737/7/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1737/7/testReport/
 |
   | Max. process+thread count | 2685 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1737/7/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #1975: HBASE-24603: Make Zookeeper sync() call synchronous (#1945)

2020-06-25 Thread GitBox


Apache-HBase commented on pull request #1975:
URL: https://github.com/apache/hbase/pull/1975#issuecomment-649822553


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 36s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ branch-2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 17s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 33s |  branch-2 passed  |
   | +1 :green_heart: |  checkstyle  |   1m 45s |  branch-2 passed  |
   | +1 :green_heart: |  spotbugs  |   3m  9s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 13s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 11s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m 42s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  11m 37s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1.  |
   | +1 :green_heart: |  spotbugs  |   3m 37s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 38s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  38m  0s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.12 Server=19.03.12 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1975/1/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1975 |
   | JIRA Issue | HBASE-24603 |
   | Optional Tests | dupname asflicense spotbugs hadoopcheck hbaseanti 
checkstyle |
   | uname | Linux c998b8df3e0d 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / e6639f9d4e |
   | Max. process+thread count | 94 (vs. ulimit of 12500) |
   | modules | C: hbase-common hbase-zookeeper hbase-server U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1975/1/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) 
spotbugs=3.1.12 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] bharathv opened a new pull request #1976: HBASE-24603: Make Zookeeper sync() call synchronous (#1945)

2020-06-25 Thread GitBox


bharathv opened a new pull request #1976:
URL: https://github.com/apache/hbase/pull/1976


   Writing a test for this is tricky. There is enough coverage from the
   functional tests. The only concern is performance, but there is enough
   logging to detect timed-out or badly performing sync calls.
   
   Additionally, this patch decouples the ZK event processing into its
   own thread rather than doing it in the EventThread's context, which
   avoids deadlocks and stalls of the event thread. A minimal sketch of
   the idea follows after the sign-off lines.
   
   Signed-off-by: Andrew Purtell 
   Signed-off-by: Viraj Jasani 
   (cherry picked from commit 84e246f9b197bfa4307172db5465214771b78d38)
   (cherry picked from commit 2379a25f0c4f2bdd3ea91fa5e0ba63f034c8d21c)
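   
   A minimal sketch of the idea, under stated assumptions (the class and method names below are illustrative, not the patch itself): block on the callback of ZooKeeper's asynchronous sync() to make it effectively synchronous, and hand watched events to a separate executor so the ZK EventThread is never blocked.
   
```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.ZooKeeper;

public class BlockingZkSync {
  private final ZooKeeper zk;
  // Dedicated thread for event handling, so callbacks never stall the
  // ZooKeeper EventThread (the deadlock/stall concern mentioned above).
  private final ExecutorService eventExecutor = Executors.newSingleThreadExecutor();

  public BlockingZkSync(ZooKeeper zk) {
    this.zk = zk;
  }

  /** Issues an async sync() and blocks until the server acknowledges it. */
  public void syncBlocking(String path, long timeoutMs)
      throws KeeperException, InterruptedException {
    final CountDownLatch done = new CountDownLatch(1);
    final int[] resultCode = new int[1];
    zk.sync(path, (rc, p, ctx) -> {
      resultCode[0] = rc;
      done.countDown();
    }, null);
    if (!done.await(timeoutMs, TimeUnit.MILLISECONDS)) {
      throw new KeeperException.OperationTimeoutException();
    }
    if (resultCode[0] != KeeperException.Code.OK.intValue()) {
      throw KeeperException.create(KeeperException.Code.get(resultCode[0]), path);
    }
  }

  /** Called from the watcher: offload processing instead of running it inline. */
  public void process(WatchedEvent event) {
    eventExecutor.submit(() -> handle(event));
  }

  private void handle(WatchedEvent event) {
    // application-specific event handling goes here
  }
}
```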



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] bitoffdev commented on a change in pull request #1959: HBASE-20819 Use TableDescriptor to replace HTableDescriptor in hbase-shell module

2020-06-25 Thread GitBox


bitoffdev commented on a change in pull request #1959:
URL: https://github.com/apache/hbase/pull/1959#discussion_r445838230



##
File path: hbase-shell/src/main/ruby/hbase/admin.rb
##
@@ -971,101 +976,103 @@ def enabled?(table_name)
 end
 
 
#--
-# Return a new HColumnDescriptor made of passed args
-def hcd(arg, htd)
+# Return a new ColumnFamilyDescriptor made of passed args
+def hcd(arg, tdb)
   # String arg, single parameter constructor
-  return org.apache.hadoop.hbase.HColumnDescriptor.new(arg) if 
arg.is_a?(String)
+
+  return ColumnFamilyDescriptorBuilder.of(arg) if arg.is_a?(String)
 
   raise(ArgumentError, "Column family #{arg} must have a name") unless 
name = arg.delete(NAME)
 
-  family = htd.getFamily(name.to_java_bytes)
+  cfd = tdb.build.getColumnFamily(name.to_java_bytes)

Review comment:
   I think that if we do choose to add any sort of introspection (like 
hasColumnFamily or getColumnFamily) to the TableDescriptorBuilder itself, it 
would belong in a separate ticket/issue.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #1737: HBASE-24382 Flush partial stores of region filtered by seqId when arc…

2020-06-25 Thread GitBox


Apache-HBase commented on pull request #1737:
URL: https://github.com/apache/hbase/pull/1737#issuecomment-649816256


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  9s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m  9s |  master passed  |
   | +1 :green_heart: |  compile  |   1m  2s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   6m 12s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 20s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 11s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 11s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m  4s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 205m 24s |  hbase-server in the patch passed.  
|
   |  |   | 232m 47s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.12 Server=19.03.12 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1737/7/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1737 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux ce8ae991a991 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 1378776a91 |
   | Default Java | 1.8.0_232 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1737/7/testReport/
 |
   | Max. process+thread count | 2869 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1737/7/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] bitoffdev commented on a change in pull request #1959: HBASE-20819 Use TableDescriptor to replace HTableDescriptor in hbase-shell module

2020-06-25 Thread GitBox


bitoffdev commented on a change in pull request #1959:
URL: https://github.com/apache/hbase/pull/1959#discussion_r445790978



##
File path: hbase-shell/src/main/ruby/hbase/admin.rb
##
@@ -971,101 +976,103 @@ def enabled?(table_name)
 end
 
 
#--
-# Return a new HColumnDescriptor made of passed args
-def hcd(arg, htd)
+# Return a new ColumnFamilyDescriptor made of passed args
+def hcd(arg, tdb)
   # String arg, single parameter constructor
-  return org.apache.hadoop.hbase.HColumnDescriptor.new(arg) if 
arg.is_a?(String)
+
+  return ColumnFamilyDescriptorBuilder.of(arg) if arg.is_a?(String)
 
   raise(ArgumentError, "Column family #{arg} must have a name") unless 
name = arg.delete(NAME)
 
-  family = htd.getFamily(name.to_java_bytes)
+  cfd = tdb.build.getColumnFamily(name.to_java_bytes)

Review comment:
   I think this is the best way to do it at the moment. Currently, it seems 
that TableDescriptorBuilder is intended to be used for writing only, in which 
case we would not want the builder itself to have methods like getColumnFamily 
or hasColumnFamily. This is consistent with the lack of getValue and hasValue 
methods on the builder.
   
   Other than adding methods to the builder, the only other way to shortcut the 
handful of calls this patch makes to `tdb.build` would be to cache the 
TableDescriptor at the start of each method that uses one. I have to recommend 
against this approach since it would technically change the behavior. For 
example, if you were to execute something in the shell like `alter 't1', {NAME 
=> 'fam1', METHOD => 'delete'}, {NAME => 'fam1', VERSIONS => 5}`, where a 
column family is changed multiple times, a cached TableDescriptor would not 
reflect the deletion of the column family.
   
   **With these thoughts, I am inclined to leave the patch as-is.**
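   
   To make the trade-off concrete, here is a hedged Java illustration of the same point (assumed, simplified use of the 2.x builder API rather than the shell's Ruby code): the builder is write-only, so inspection goes through build(), and re-building after each mutation avoids the stale-descriptor problem described above.
   
```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class BuilderInspection {
  public static void main(String[] args) {
    TableDescriptorBuilder tdb = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("t1"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("fam1"));

    // Inspect via a freshly built descriptor, mirroring tdb.build.getColumnFamily
    // in the Ruby patch.
    ColumnFamilyDescriptor fam1 = tdb.build().getColumnFamily(Bytes.toBytes("fam1"));
    System.out.println("max versions before alter: " + fam1.getMaxVersions());

    // Delete then re-add the family, as in:
    //   alter 't1', {NAME => 'fam1', METHOD => 'delete'}, {NAME => 'fam1', VERSIONS => 5}
    tdb.removeColumnFamily(Bytes.toBytes("fam1"));
    tdb.setColumnFamily(ColumnFamilyDescriptorBuilder
        .newBuilder(Bytes.toBytes("fam1")).setMaxVersions(5).build());

    // A descriptor cached before these mutations would be stale; building again
    // reflects the deletion and re-creation of the family.
    System.out.println("max versions after alter: "
        + tdb.build().getColumnFamily(Bytes.toBytes("fam1")).getMaxVersions());
  }
}
```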





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (HBASE-20819) Use TableDescriptor to replace HTableDescriptor in hbase-shell module

2020-06-25 Thread Elliot Miller (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-20819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliot Miller updated HBASE-20819:
--
Fix Version/s: 3.0.0-alpha-1

> Use TableDescriptor to replace HTableDescriptor in hbase-shell module
> -
>
> Key: HBASE-20819
> URL: https://issues.apache.org/jira/browse/HBASE-20819
> Project: HBase
>  Issue Type: Improvement
>  Components: shell
>Affects Versions: 2.0.0
>Reporter: Xiaolin Ha
>Assignee: Elliot Miller
>Priority: Minor
> Fix For: 3.0.0-alpha-1
>
> Attachments: HBASE-20819.branch-2.001.patch, 
> HBASE-20819.branch-2.002.patch, 
> HBaseConstants-b5563432922268c7a16deacbb51bfba89c0a2aba.txt, 
> HBaseConstants-cf2aa593e590133b0c76d3723b4074b28b55dcc9.txt, 
> HBaseConstants-diff.txt
>
>
> HTableDescriptor is deprecated as of release 2.0.0, and will be removed in 
> 3.0.0. This patch replaces all usages of HTableDescriptor and 
> HColumnDescriptor in the hbase-shell module so that HTableDescriptor can be 
> removed.
> There are a few other consequences of this change:
>  * Ruby methods relating to HTableDescriptor and HColumnDescriptor have been 
> removed. This is noted in "Release Note" on this issue.
>  * We no longer import constants from HTableDescriptor and HColumnDescriptor 
> into the ruby HBaseConstants module. Instead, we import them from 
> ColumnFamilyDescriptorBuilder and TableDescriptorBuilder.
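
For readers less familiar with the replacement APIs, a minimal sketch (not taken 
from the patch; the table and family names are illustrative) of building 
descriptors with the builder classes that now back the shell instead of 
HTableDescriptor/HColumnDescriptor:

{code}
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class BuilderExample {
  public static TableDescriptor describe() {
    // Immutable ColumnFamilyDescriptor built via its builder.
    ColumnFamilyDescriptor cf = ColumnFamilyDescriptorBuilder
        .newBuilder(Bytes.toBytes("fam1"))
        .setMaxVersions(5)
        .build();
    // Immutable TableDescriptor assembled from the family descriptor.
    return TableDescriptorBuilder
        .newBuilder(TableName.valueOf("t1"))
        .setColumnFamily(cf)
        .build();
  }
}
{code}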



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (HBASE-7196) 'hbase shell status' throws exception when HBase is not running

2020-06-25 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-7196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack resolved HBASE-7196.
--
Fix Version/s: 3.0.0-alpha-1
 Hadoop Flags: Reviewed
 Assignee: Elliot Miller
   Resolution: Won't Fix

Resolving. Years old. Assigning [~bitoffdev] since he dug in. Looks like we are 
doing the right thing in the current context.

> 'hbase shell status' throws exception when HBase is not running
> ---
>
> Key: HBASE-7196
> URL: https://issues.apache.org/jira/browse/HBASE-7196
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.94.1
>Reporter: Oleg Zhurakousky
>Assignee: Elliot Miller
>Priority: Minor
> Fix For: 3.0.0-alpha-1
>
>
> It's kind of a nuisance bug. One would assume that the 'status' command should 
> simply return something along the lines of "HBase is not running".



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on pull request #1945: HBASE-24603: Make Zookeeper sync() call synchronous

2020-06-25 Thread GitBox


Apache-HBase commented on pull request #1945:
URL: https://github.com/apache/hbase/pull/1945#issuecomment-649805385


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   6m 43s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  4s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 21s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 58s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 37s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   6m  3s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 19s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 14s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m 20s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 39s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 39s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   6m 38s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 39s |  hbase-common in the patch passed.  
|
   | +1 :green_heart: |  unit  |   0m 44s |  hbase-zookeeper in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 208m 57s |  hbase-server in the patch passed.  
|
   |  |   | 247m 53s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.12 Server=19.03.12 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1945/3/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1945 |
   | JIRA Issue | HBASE-24603 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 50c23ba9ac38 4.15.0-91-generic #92-Ubuntu SMP Fri Feb 28 
11:09:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 1378776a91 |
   | Default Java | 1.8.0_232 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1945/3/testReport/
 |
   | Max. process+thread count | 3220 (vs. ulimit of 12500) |
   | modules | C: hbase-common hbase-zookeeper hbase-server U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1945/3/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] bharathv opened a new pull request #1975: HBASE-24603: Make Zookeeper sync() call synchronous (#1945)

2020-06-25 Thread GitBox


bharathv opened a new pull request #1975:
URL: https://github.com/apache/hbase/pull/1975


   Writing a test for this is tricky. There is enough functional-test
   coverage. The only concern is performance, but there is enough
   logging to detect timed-out or badly performing sync calls.
   
   Additionally, this patch decouples the ZK event processing into its
   own thread rather than doing it in the EventThread's context. That
   avoids deadlocks and stalls of the event thread.
   
   Signed-off-by: Andrew Purtell 
   Signed-off-by: Viraj Jasani 
   (cherry picked from commit 84e246f9b197bfa4307172db5465214771b78d38)
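
Roughly, the decoupling pattern looks like the sketch below. This is a generic
illustration, not the actual ZKWatcher code; the class and thread names are
made up:

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;

// Hand each watched event off to a dedicated thread so the ZooKeeper
// EventThread never blocks on (or deadlocks with) our own processing.
public class AsyncDispatchWatcher implements Watcher {
  private final ExecutorService dispatcher =
      Executors.newSingleThreadExecutor(r -> {
        Thread t = new Thread(r, "zk-event-processor");
        t.setDaemon(true);
        return t;
      });

  @Override
  public void process(WatchedEvent event) {
    // Runs on ZooKeeper's EventThread: only enqueue, never do real work here.
    dispatcher.execute(() -> handle(event));
  }

  private void handle(WatchedEvent event) {
    // Real processing (which may itself issue ZK calls) happens off the EventThread.
    System.out.println("Handling " + event);
  }
}
{code}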



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] bharathv merged pull request #1945: HBASE-24603: Make Zookeeper sync() call synchronous

2020-06-25 Thread GitBox


bharathv merged pull request #1945:
URL: https://github.com/apache/hbase/pull/1945


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (HBASE-24636) Increase default Normalizer interval

2020-06-25 Thread Nick Dimiduk (Jira)
Nick Dimiduk created HBASE-24636:


 Summary: Increase default Normalizer interval
 Key: HBASE-24636
 URL: https://issues.apache.org/jira/browse/HBASE-24636
 Project: HBase
  Issue Type: Task
  Components: Normalizer
Reporter: Nick Dimiduk


Our current default interval for the normalizer chore is 5 minutes. I think 
that's super aggressive for a background process that's intended to nudge a 
cluster toward a healthy state, considering there's no rate-limiting in place at 
all (HBASE-24628). IIRC, the default used to be 30 minutes. Maybe we decide on 
a new default after we decide on a rate limit.
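
For operators who want a less aggressive chore today, a small sketch of bumping
the interval to 30 minutes, assuming the period is controlled by the
hbase.normalizer.period property (milliseconds); in practice this would go in
hbase-site.xml rather than code:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class NormalizerPeriodExample {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // 30 minutes instead of the 5-minute default (assumed property name).
    conf.setInt("hbase.normalizer.period", 30 * 60 * 1000);
    System.out.println(conf.get("hbase.normalizer.period"));
  }
}
{code}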



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] ndimiduk commented on a change in pull request #1933: HBASE-24588 : Submit task for NormalizationPlan

2020-06-25 Thread GitBox


ndimiduk commented on a change in pull request #1933:
URL: https://github.com/apache/hbase/pull/1933#discussion_r445796219



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/normalizer/EmptyNormalizationPlan.java
##
@@ -44,7 +44,8 @@ public static EmptyNormalizationPlan getInstance(){
* No-op for empty plan.
*/
   @Override
-  public void execute(Admin admin) {
+  public long submit(MasterServices masterServices) throws IOException {
+return -1;

Review comment:
   Actually, this class isn't even used. I think you can delete it.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #1972: Backport: HBASE-24552 Replica region needs to check if primary region directory…

2020-06-25 Thread GitBox


Apache-HBase commented on pull request #1972:
URL: https://github.com/apache/hbase/pull/1972#issuecomment-649778998


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   4m 44s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  7s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2.3 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 49s |  branch-2.3 passed  |
   | +1 :green_heart: |  compile  |   0m 55s |  branch-2.3 passed  |
   | +1 :green_heart: |  shadedjars  |   5m  7s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  branch-2.3 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 22s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 55s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 55s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m  6s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 152m 43s |  hbase-server in the patch passed.  
|
   |  |   | 180m 18s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.12 Server=19.03.12 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1972/1/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1972 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 8d4e5ee5e08c 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2.3 / d1edbbcc22 |
   | Default Java | 1.8.0_232 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1972/1/testReport/
 |
   | Max. process+thread count | 3631 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1972/1/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (HBASE-21879) Read HFile's block to ByteBuffer directly instead of to byte for reducing young gc purpose

2020-06-25 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-21879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack updated HBASE-21879:
--
Release Note: 
Before this issue, read path was 100% offheap when block is in the BucketCache. 
But if a cache miss, then the RS needs to read the block via an on-heap API 
which causes high young-GC pressure.

This issue adds reading the block via offheap even if reading the block from 
filesystem directly.  It requires hadoop version(>=2.9.3) but can also work 
with older hadoop versions (all works but we continue to read block onheap). It 
also requires HBASE-21946 which is not yet in place as of this 
writing/hbase-2.3.0.

We have written a careful doc about the implementation, performance and 
practice here: 
https://docs.google.com/document/d/1xSy9axGxafoH-Qc17zbD2Bd--rWjjI00xTWQZ8ZwI_E/edit#heading=h.nch5d72p27ex

  was:
Before this issue, read path was 100% offheap when block is in the BucketCache. 
But if a cache miss, then the RS needs to read the block via an on-heap API 
which causes high young-GC pressure.

This issue adds reading the block via offheap even if reading the block from 
filesystem directly.  It requires hadoop version(>=2.9.3) but can also work 
with older hadoop versions (all works but we continue to read block onheap). We 
have written a careful doc about the implementation, performance and practice 
here: 
https://docs.google.com/document/d/1xSy9axGxafoH-Qc17zbD2Bd--rWjjI00xTWQZ8ZwI_E/edit#heading=h.nch5d72p27ex


> Read HFile's block to ByteBuffer directly instead of to byte for reducing 
> young gc purpose
> --
>
> Key: HBASE-21879
> URL: https://issues.apache.org/jira/browse/HBASE-21879
> Project: HBase
>  Issue Type: Improvement
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0
>
> Attachments: HBASE-21879.v1.patch, HBASE-21879.v1.patch, 
> QPS-latencies-before-HBASE-21879.png, gc-data-before-HBASE-21879.png
>
>
> In HFileBlock#readBlockDataInternal,  we have the following: 
> {code}
> @VisibleForTesting
> protected HFileBlock readBlockDataInternal(FSDataInputStream is, long offset,
> long onDiskSizeWithHeaderL, boolean pread, boolean verifyChecksum, 
> boolean updateMetrics)
>  throws IOException {
>  // .
>   // TODO: Make this ByteBuffer-based. Will make it easier to go to HDFS with 
> BBPool (offheap).
>   byte [] onDiskBlock = new byte[onDiskSizeWithHeader + hdrSize];
>   int nextBlockOnDiskSize = readAtOffset(is, onDiskBlock, preReadHeaderSize,
>   onDiskSizeWithHeader - preReadHeaderSize, true, offset + 
> preReadHeaderSize, pread);
>   if (headerBuf != null) {
> // ...
>   }
>   // ...
>  }
> {code}
> In the read path, we still read the block from the hfile into an on-heap byte[], 
> then copy the on-heap byte[] to the offheap bucket cache asynchronously. In my 
> 100% get performance test, I also observed frequent young GCs; the largest 
> memory footprint in the young gen should be the on-heap block byte[].
> In fact, we can read the HFile's block into a ByteBuffer directly instead of into a 
> byte[] to reduce young GC pressure. We did not implement this before because there 
> was no ByteBuffer reading interface in the older HDFS client, but 2.7+ supports it, 
> so we can fix this now.
> Will provide a patch and some perf-comparison for this. 
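
A hedged sketch of the idea (not the HBase patch itself): use the HDFS client's
ByteBuffer read path so block bytes land offheap instead of in a fresh on-heap
byte[]. The method, path, and sizes below are illustrative:

{code}
import java.io.IOException;
import java.nio.ByteBuffer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OffheapBlockRead {
  static ByteBuffer readBlock(Configuration conf, Path hfile, long offset, int len)
      throws IOException {
    ByteBuffer buf = ByteBuffer.allocateDirect(len); // offheap destination
    try (FSDataInputStream in = FileSystem.get(conf).open(hfile)) {
      in.seek(offset);
      // Works when the underlying stream implements ByteBufferReadable (HDFS
      // does); otherwise read(ByteBuffer) throws UnsupportedOperationException
      // and the caller would fall back to the old byte[] path.
      while (buf.hasRemaining() && in.read(buf) > 0) {
        // keep filling until the block is read or EOF
      }
    }
    buf.flip();
    return buf;
  }
}
{code}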



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] ndimiduk commented on a change in pull request #1933: HBASE-24588 : Submit task for NormalizationPlan

2020-06-25 Thread GitBox


ndimiduk commented on a change in pull request #1933:
URL: https://github.com/apache/hbase/pull/1933#discussion_r445191420



##
File path: hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java
##
@@ -833,6 +833,9 @@ void unassign(byte[] regionName, boolean force)
 
   /**
* Invoke region normalizer. Can NOT run for various reasons.  Check logs.
+   * This is a non-blocking invocation to region normalizer. If return value 
is true, it means
+   * the invocation was successful. We need to check logs for the details of 
which regions

Review comment:
   nit: "means the request was submitted successfully."

##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/normalizer/EmptyNormalizationPlan.java
##
@@ -44,7 +44,8 @@ public static EmptyNormalizationPlan getInstance(){
* No-op for empty plan.
*/
   @Override
-  public void execute(Admin admin) {
+  public long submit(MasterServices masterServices) throws IOException {
+return -1;

Review comment:
   @huaxiangsun does the pid `-1` carry a special meaning? Just in case...





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-24627) Normalize one table at a time

2020-06-25 Thread Nick Dimiduk (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17145792#comment-17145792
 ] 

Nick Dimiduk commented on HBASE-24627:
--

I'd have a look at existing, similar apis. I think we have a habit of accepting 
a regex for matching table names.

> Normalize one table at a time
> -
>
> Key: HBASE-24627
> URL: https://issues.apache.org/jira/browse/HBASE-24627
> Project: HBase
>  Issue Type: Improvement
>  Components: Normalizer
>Reporter: Nick Dimiduk
>Priority: Major
>
> Our API and shell command around the Normalizer are an all-or-nothing invocation. 
> We should support an operator requesting to normalize one table at a time.
> One use-case is for someone wanting to enable the normalizer for the first 
> time. It would be nice to do a controlled roll-out of the normalizer, keeping 
> it disabled at first, calling normalize one table at a time, and then turning 
> it on after all tables have been normalized.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on pull request #1972: Backport: HBASE-24552 Replica region needs to check if primary region directory…

2020-06-25 Thread GitBox


Apache-HBase commented on pull request #1972:
URL: https://github.com/apache/hbase/pull/1972#issuecomment-649776738


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   4m 55s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  7s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2.3 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 28s |  branch-2.3 passed  |
   | +1 :green_heart: |  compile  |   1m  7s |  branch-2.3 passed  |
   | +1 :green_heart: |  shadedjars  |   5m 57s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 41s |  hbase-server in branch-2.3 failed.  
|
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m  3s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  6s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  6s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   7m  1s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 52s |  hbase-server in the patch failed.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 142m 44s |  hbase-server in the patch passed.  
|
   |  |   | 174m 57s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.12 Server=19.03.12 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1972/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1972 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux ddcbf0afcc01 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2.3 / d1edbbcc22 |
   | Default Java | 2020-01-14 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1972/1/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1972/1/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1972/1/testReport/
 |
   | Max. process+thread count | 3929 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1972/1/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (HBASE-22309) Replace Shipper Interface with Netty's ReferenceCounted; add ExtendCell#retain/ExtendCell#release

2020-06-25 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack updated HBASE-22309:
--
Issue Type: Improvement  (was: Bug)
  Priority: Critical  (was: Major)
   Summary: Replace Shipper Interface with Netty's ReferenceCounted; add 
ExtendCell#retain/ExtendCell#release  (was: Replace the Shipper interface by 
using ExtendCell#retain or ExtendCell#release)

> Replace Shipper Interface with Netty's ReferenceCounted; add 
> ExtendCell#retain/ExtendCell#release
> -
>
> Key: HBASE-22309
> URL: https://issues.apache.org/jira/browse/HBASE-22309
> Project: HBase
>  Issue Type: Improvement
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Critical
>
> We've some discussion about the Shipper interface. 
> {code}
> /**
>  * This interface denotes a scanner as one which can ship cells. Scan 
> operation do many RPC requests
>  * to server and fetch N rows/RPC. These are then shipped to client. At the 
> end of every such batch
>  * {@link #shipped()} will get called.
>  */
> @InterfaceAudience.Private
> public interface Shipper {
>   /**
>* Called after a batch of rows scanned and set to be returned to client. 
> Any in between cleanup
>* can be done here.
>*/
>   void shipped() throws IOException;
> }
> {code}
> It does not seem like an elegant way. For example: 
> 1. If we want to keep the previous cell in the scanner, we must deep clone 
> the KV before shipping; otherwise the ship will free all the ByteBuffers and 
> prevCell will point to an unknown area. 
> 2. When switching from PREAD to STREAM in a long scan, we also have to accomplish 
> this in the shipped() method. If not, once we close the PREAD scanner, the 
> un-shipped cell will also point to an unknown memory area, because closing the 
> scanner frees all ByteBuffers.
> 
> If we change to using a refCnt to manage RPC memory release or retain, we 
> can just call prevCell.retain(); then its memory won't be freed unless 
> prevCell reaches the end of its life and calls prevCell#release. I mean we can 
> replace all the shipper logic by using cell#release and cell#retain. 
> One concern is the API. ExtendCell is a pure server-side 
> type, so we can make ExtendCell extend Netty's ReferenceCounted 
> interface and provide retain() and release() methods. We won't maintain 
> a refCnt in ExtendCell; the refCnt is still in HFileBlock. Once we have encoded 
> an ExtendCell to a CellScanner, we can release the ExtendCell and it will 
> release its backing blocks, so in theory there will be no performance loss. 
> Anyway, that would be a big change, so we may need to create another 
> feature branch to address this. 
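
Purely for discussion, a sketch of the proposed retain/release shape. These
methods do not exist on ExtendCell today; RefCountedCell and ScannerSketch are
made-up names, and only Netty's ReferenceCounted interface is real:

{code}
import io.netty.util.ReferenceCounted;

// Hypothetical: a cell type whose retain()/release() pin and unpin the
// backing HFileBlock's refCnt (the refCnt itself would stay in HFileBlock).
interface RefCountedCell extends ReferenceCounted {
}

class ScannerSketch {
  private RefCountedCell prevCell;

  void onNextCell(RefCountedCell cell) {
    if (prevCell != null) {
      prevCell.release();          // old backing block may now be freed
    }
    prevCell = (RefCountedCell) cell.retain(); // keep the new backing block alive
  }
}
{code}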



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] bitoffdev commented on a change in pull request #1959: HBASE-20819 Use TableDescriptor to replace HTableDescriptor in hbase-shell module

2020-06-25 Thread GitBox


bitoffdev commented on a change in pull request #1959:
URL: https://github.com/apache/hbase/pull/1959#discussion_r445790978



##
File path: hbase-shell/src/main/ruby/hbase/admin.rb
##
@@ -971,101 +976,103 @@ def enabled?(table_name)
 end
 
 
#--
-# Return a new HColumnDescriptor made of passed args
-def hcd(arg, htd)
+# Return a new ColumnFamilyDescriptor made of passed args
+def hcd(arg, tdb)
   # String arg, single parameter constructor
-  return org.apache.hadoop.hbase.HColumnDescriptor.new(arg) if 
arg.is_a?(String)
+
+  return ColumnFamilyDescriptorBuilder.of(arg) if arg.is_a?(String)
 
   raise(ArgumentError, "Column family #{arg} must have a name") unless 
name = arg.delete(NAME)
 
-  family = htd.getFamily(name.to_java_bytes)
+  cfd = tdb.build.getColumnFamily(name.to_java_bytes)

Review comment:
   I think this is the best way to do it at the moment. Currently, it seems 
that TableDescriptorBuilder is intended to be used for writing only, in which 
case we would not want the builder itself to have methods like getColumnFamily 
or hasColumnFamily. This is consistent with the lack of getValue and hasValue 
methods on the builder.
   
   Other than adding methods to the builder, the only other way to shortcut the 
handful of calls this patch makes to `tdb.build` would be to cache the 
TableDescriptor at the start of each method that uses one. I have to 
recommend against this approach since it would technically change the behavior. 
For example, if you were to execute something in the shell like `alter 't1', {NAME => 
'fam1', METHOD => 'delete'}, {NAME => 'fam1', VERSIONS => 5}`, the same column 
family is changed multiple times, and a cached TableDescriptor would not reflect 
the deletion of that column family.
   
   **With these thoughts, I am inclined to leave the patch as-is.**





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (HBASE-21879) Read HFile's block to ByteBuffer directly instead of to byte for reducing young gc purpose

2020-06-25 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-21879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack updated HBASE-21879:
--
Release Note: 
Before this issue, read path was 100% offheap when block is in the BucketCache. 
But if a cache miss, then the RS needs to read the block via an on-heap API 
which causes high young-GC pressure.

This issue adds reading the block via offheap even if reading the block from 
filesystem directly.  It requires hadoop version(>=2.9.3) but can also work 
with older hadoop versions (all works but we continue to read block onheap). We 
have written a careful doc about the implementation, performance and practice 
here: 
https://docs.google.com/document/d/1xSy9axGxafoH-Qc17zbD2Bd--rWjjI00xTWQZ8ZwI_E/edit#heading=h.nch5d72p27ex

  was:
Before this issue, we've made the read path 100% offheap when block is in the 
BucketCache but if a cache miss, then the RS needs to read the block via an 
on-heap API which would causes high young-GC pressure.

This issue adds reading the block via offheap even if reading the block from 
filesystem directly.  It requires hadoop version(>=2.9.3) but can also work 
with older hadoop versions (all works but we continue to read block onheap). We 
have written a careful doc about the implementation, performance and practice 
here: 
https://docs.google.com/document/d/1xSy9axGxafoH-Qc17zbD2Bd--rWjjI00xTWQZ8ZwI_E/edit#heading=h.nch5d72p27ex


> Read HFile's block to ByteBuffer directly instead of to byte for reducing 
> young gc purpose
> --
>
> Key: HBASE-21879
> URL: https://issues.apache.org/jira/browse/HBASE-21879
> Project: HBase
>  Issue Type: Improvement
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0
>
> Attachments: HBASE-21879.v1.patch, HBASE-21879.v1.patch, 
> QPS-latencies-before-HBASE-21879.png, gc-data-before-HBASE-21879.png
>
>
> In HFileBlock#readBlockDataInternal,  we have the following: 
> {code}
> @VisibleForTesting
> protected HFileBlock readBlockDataInternal(FSDataInputStream is, long offset,
> long onDiskSizeWithHeaderL, boolean pread, boolean verifyChecksum, 
> boolean updateMetrics)
>  throws IOException {
>  // .
>   // TODO: Make this ByteBuffer-based. Will make it easier to go to HDFS with 
> BBPool (offheap).
>   byte [] onDiskBlock = new byte[onDiskSizeWithHeader + hdrSize];
>   int nextBlockOnDiskSize = readAtOffset(is, onDiskBlock, preReadHeaderSize,
>   onDiskSizeWithHeader - preReadHeaderSize, true, offset + 
> preReadHeaderSize, pread);
>   if (headerBuf != null) {
> // ...
>   }
>   // ...
>  }
> {code}
> In the read path, we still read the block from the hfile into an on-heap byte[], 
> then copy the on-heap byte[] to the offheap bucket cache asynchronously. In my 
> 100% get performance test, I also observed frequent young GCs; the largest 
> memory footprint in the young gen should be the on-heap block byte[].
> In fact, we can read the HFile's block into a ByteBuffer directly instead of into a 
> byte[] to reduce young GC pressure. We did not implement this before because there 
> was no ByteBuffer reading interface in the older HDFS client, but 2.7+ supports it, 
> so we can fix this now.
> Will provide a patch and some perf-comparison for this. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on pull request #1971: Backport: HBASE-24552 Replica region needs to check if primary region directory…

2020-06-25 Thread GitBox


Apache-HBase commented on pull request #1971:
URL: https://github.com/apache/hbase/pull/1971#issuecomment-649773677


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 40s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  5s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ branch-2 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 42s |  branch-2 passed  |
   | +1 :green_heart: |  compile  |   0m 58s |  branch-2 passed  |
   | +1 :green_heart: |  shadedjars  |   5m  6s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  branch-2 passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 20s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 56s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 56s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   4m 56s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 145m  6s |  hbase-server in the patch passed.  
|
   |  |   | 167m 53s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.12 Server=19.03.12 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1971/1/artifact/yetus-jdk8-hadoop2-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1971 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 78d5de6a83ce 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | branch-2 / eb16b4a782 |
   | Default Java | 1.8.0_232 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1971/1/testReport/
 |
   | Max. process+thread count | 3633 (vs. ulimit of 12500) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1971/1/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #1945: HBASE-24603: Make Zookeeper sync() call synchronous

2020-06-25 Thread GitBox


Apache-HBase commented on pull request #1945:
URL: https://github.com/apache/hbase/pull/1945#issuecomment-649766401


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 30s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m 16s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 51s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   5m 58s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 20s |  hbase-common in master failed.  |
   | -0 :warning: |  javadoc  |   0m 40s |  hbase-server in master failed.  |
   | -0 :warning: |  javadoc  |   0m 17s |  hbase-zookeeper in master failed.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 59s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 48s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 48s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m 48s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 19s |  hbase-common in the patch failed.  |
   | -0 :warning: |  javadoc  |   0m 18s |  hbase-zookeeper in the patch 
failed.  |
   | -0 :warning: |  javadoc  |   0m 39s |  hbase-server in the patch failed.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 24s |  hbase-common in the patch passed.  
|
   | +1 :green_heart: |  unit  |   0m 45s |  hbase-zookeeper in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 129m 35s |  hbase-server in the patch passed.  
|
   |  |   | 161m 51s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.12 Server=19.03.12 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1945/3/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1945 |
   | JIRA Issue | HBASE-24603 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 610148e44d7c 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 1378776a91 |
   | Default Java | 2020-01-14 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1945/3/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-common.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1945/3/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1945/3/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-zookeeper.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1945/3/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-common.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1945/3/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-zookeeper.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1945/3/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1945/3/testReport/
 |
   | Max. process+thread count | 3765 (vs. ulimit of 12500) |
   | modules | C: hbase-common hbase-zookeeper hbase-server U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1945/3/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (HBASE-20610) Procedure V2 - Distributed Log Splitting

2020-06-25 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-20610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack updated HBASE-20610:
--
Release Note: See RN in HBASE-21588 for detail on this feature. It landed 
in hbase-2.2.0.  (was: See RN in HBASE-21588 for detail on this feature.)

> Procedure V2 - Distributed Log Splitting
> 
>
> Key: HBASE-20610
> URL: https://issues.apache.org/jira/browse/HBASE-20610
> Project: HBase
>  Issue Type: Umbrella
>  Components: proc-v2
>Reporter: Guanghao Zhang
>Assignee: Jingyun Tian
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0
>
> Attachments: HBASE-20610.master.001.patch
>
>
> Now master and regionserver use zk to coordinate log split tasks. The split 
> log manager manages all log files which need to be scanned and split. Then 
> the split log manager places all the logs into the ZooKeeper splitWAL node 
> (/hbase/splitWAL) as tasks and monitors these task nodes and waits for them 
> to be processed. Each regionserver watches the splitWAL znode and grabs a task 
> when the node's children change. The regionserver then does the work to split the logs.
> Open this umbrella issue to move this "coordinate" work to the new procedure 
> v2 framework and reduce the zk dependency. Plan to finish this before the 3.0 
> release. Any suggestions are welcomed. Thanks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-20610) Procedure V2 - Distributed Log Splitting

2020-06-25 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-20610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack updated HBASE-20610:
--
Fix Version/s: (was: 2.3.0)
   2.2.0

> Procedure V2 - Distributed Log Splitting
> 
>
> Key: HBASE-20610
> URL: https://issues.apache.org/jira/browse/HBASE-20610
> Project: HBase
>  Issue Type: Umbrella
>  Components: proc-v2
>Reporter: Guanghao Zhang
>Assignee: Jingyun Tian
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.2.0
>
> Attachments: HBASE-20610.master.001.patch
>
>
> Now master and regionserver use zk to coordinate log split tasks. The split 
> log manager manages all log files which need to be scanned and split. Then 
> the split log manager places all the logs into the ZooKeeper splitWAL node 
> (/hbase/splitWAL) as tasks and monitors these task nodes and waits for them 
> to be processed. Each regionserver watches the splitWAL znode and grabs a task 
> when the node's children change. The regionserver then does the work to split the logs.
> Open this umbrella issue to move this "coordinate" work to the new procedure 
> v2 framework and reduce the zk dependency. Plan to finish this before the 3.0 
> release. Any suggestions are welcomed. Thanks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-20610) Procedure V2 - Distributed Log Splitting

2020-06-25 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-20610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack updated HBASE-20610:
--
Release Note: See RN in HBASE-21588 for detail on this feature.  (was: See 
RN in https://issues.apache.org/jira/browse/HBASE-21588)

> Procedure V2 - Distributed Log Splitting
> 
>
> Key: HBASE-20610
> URL: https://issues.apache.org/jira/browse/HBASE-20610
> Project: HBase
>  Issue Type: Umbrella
>  Components: proc-v2
>Reporter: Guanghao Zhang
>Assignee: Jingyun Tian
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.1
>
> Attachments: HBASE-20610.master.001.patch
>
>
> Now master and regionserver use zk to coordinate log split tasks. The split 
> log manager manages all log files which need to be scanned and split. Then 
> the split log manager places all the logs into the ZooKeeper splitWAL node 
> (/hbase/splitWAL) as tasks and monitors these task nodes and waits for them 
> to be processed. Each regionserver watches the splitWAL znode and grabs a task 
> when the node's children change. The regionserver then does the work to split the logs.
> Open this umbrella issue to move this "coordinate" work to the new procedure 
> v2 framework and reduce the zk dependency. Plan to finish this before the 3.0 
> release. Any suggestions are welcomed. Thanks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-20610) Procedure V2 - Distributed Log Splitting

2020-06-25 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-20610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack updated HBASE-20610:
--
Fix Version/s: (was: 2.3.1)
   2.3.0

> Procedure V2 - Distributed Log Splitting
> 
>
> Key: HBASE-20610
> URL: https://issues.apache.org/jira/browse/HBASE-20610
> Project: HBase
>  Issue Type: Umbrella
>  Components: proc-v2
>Reporter: Guanghao Zhang
>Assignee: Jingyun Tian
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0
>
> Attachments: HBASE-20610.master.001.patch
>
>
> Now master and regionserver use zk to coordinate log split tasks. The split 
> log manager manages all log files which need to be scanned and split. Then 
> the split log manager places all the logs into the ZooKeeper splitWAL node 
> (/hbase/splitWAL) as tasks and monitors these task nodes and waits for them 
> to be processed. Each regionserver watches the splitWAL znode and grabs a task 
> when the node's children change. The regionserver then does the work to split the logs.
> Open this umbrella issue to move this "coordinate" work to the new procedure 
> v2 framework and reduce the zk dependency. Plan to finish this before the 3.0 
> release. Any suggestions are welcomed. Thanks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (HBASE-20610) Procedure V2 - Distributed Log Splitting

2020-06-25 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-20610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack updated HBASE-20610:
--
Release Note: See RN in https://issues.apache.org/jira/browse/HBASE-21588

> Procedure V2 - Distributed Log Splitting
> 
>
> Key: HBASE-20610
> URL: https://issues.apache.org/jira/browse/HBASE-20610
> Project: HBase
>  Issue Type: Umbrella
>  Components: proc-v2
>Reporter: Guanghao Zhang
>Assignee: Jingyun Tian
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.1
>
> Attachments: HBASE-20610.master.001.patch
>
>
> Now master and regionserver use zk to coordinate log split tasks. The split 
> log manager manages all log files which need to be scanned and split. Then 
> the split log manager places all the logs into the ZooKeeper splitWAL node 
> (/hbase/splitWAL) as tasks and monitors these task nodes and waits for them 
> to be processed. Each regionserver watches the splitWAL znode and grabs a task 
> when the node's children change. The regionserver then does the work to split the logs.
> Open this umbrella issue to move this "coordinate" work to the new procedure 
> v2 framework and reduce the zk dependency. Plan to finish this before the 3.0 
> release. Any suggestions are welcomed. Thanks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (HBASE-24585) Failed start recovering crash in standalone mode if procedure-based distributed WAL split & hbase.wal.split.to.hfile=true

2020-06-25 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack resolved HBASE-24585.
---
Fix Version/s: 2.3.0
   3.0.0-alpha-1
 Assignee: Michael Stack
   Resolution: Not A Problem

Resolving as no longer a problem after HBASE-24616 went in.

> Failed start recovering crash in standalone mode if procedure-based 
> distributed WAL split & hbase.wal.split.to.hfile=true
> -
>
> Key: HBASE-24585
> URL: https://issues.apache.org/jira/browse/HBASE-24585
> Project: HBase
>  Issue Type: Bug
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0
>
>
> (This description got redone after I figured out what was going on. 
> Previously it was just a litany of me banging around trying to learn 
> procedure-based WAL splitting and hbase.wal.split.to.hfile; no one needs to 
> read that; hence the refactor).
> With HBASE-24574, procedure-based distributed WAL splitting is enabled and 
> split-to-hfile too. A forced crash requires recovery, with ServerCrashProcedure 
> splitting old WALs on restart. The recovery fails because we get stuck. The 
> Master can't assign meta because it is being recovered. The recovery can't 
> make progress because it is asking for a table descriptor for meta -- needed 
> by the hbase.wal.split.to.hfile feature -- and the master is not yet 
> initialized.  After the default timeout, Master shuts down because it can't 
> initialize.
> {code}
>  2020-06-18 19:53:54,175 ERROR [main] master.HMasterCommandLine: Master 
> exiting
>  java.lang.RuntimeException: Master not initialized after 20ms
>at 
> org.apache.hadoop.hbase.util.JVMClusterUtil.waitForEvent(JVMClusterUtil.java:232)
>at 
> org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:200)
>at 
> org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:430)
>at 
> org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:232)
>at 
> org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:140)
>at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>at 
> org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:149)
>at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:3059)
> {code}
> The abort of Master interrupts other ongoing actions so later in the log 
> we'll see the WAL split show as interrupted
> {code}
>  2020-06-17 21:20:37,472 ERROR 
> [RS_LOG_REPLAY_OPS-regionserver/localhost:16020-0] 
> handler.RSProcedureHandler: Error when call RSProcedureCallable:
>  java.io.IOException: Failed WAL split, status=RESIGNED, 
> wal=file:/Users/stack/checkouts/hbase.apache.git/tmp/hbase/WALs/localhost,16020,1592440848604-splitting/localhost%2C16020%2C1592440848604.meta.1592440852959.meta
>at 
> org.apache.hadoop.hbase.regionserver.SplitWALCallable.splitWal(SplitWALCallable.java:106)
>at 
> org.apache.hadoop.hbase.regionserver.SplitWALCallable.call(SplitWALCallable.java:86)
>at 
> org.apache.hadoop.hbase.regionserver.SplitWALCallable.call(SplitWALCallable.java:49)
>at 
> org.apache.hadoop.hbase.regionserver.handler.RSProcedureHandler.process(RSProcedureHandler.java:49)
>at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:104)
>at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>at java.lang.Thread.run(Thread.java:748)
> {code}
> This issue becomes how to make hbase.wal.split.to.hfile work in standalone 
> mode.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] saintstack commented on pull request #1955: HBASE-24616 Remove BoundedRecoveredHFilesOutputSink dependency on a T…

2020-06-25 Thread GitBox


saintstack commented on pull request #1955:
URL: https://github.com/apache/hbase/pull/1955#issuecomment-649763073


   Merged. The failures vary and are flakies.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Resolved] (HBASE-24616) Remove BoundedRecoveredHFilesOutputSink dependency on a TableDescriptor

2020-06-25 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-24616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack resolved HBASE-24616.
---
Hadoop Flags: Reviewed
Assignee: Michael Stack
  Resolution: Fixed

Pushed on branch-2.3+. Thanks for reviews all.

> Remove BoundedRecoveredHFilesOutputSink  dependency on a TableDescriptor
> 
>
> Key: HBASE-24616
> URL: https://issues.apache.org/jira/browse/HBASE-24616
> Project: HBase
>  Issue Type: Bug
>  Components: HFile, MTTR
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0
>
>
> BoundedRecoveredHFilesOutputSink wants to read TableDescriptor so it writes 
> the particular hfile format specified by a table's schema. Getting the table 
> schema can be tough at various points of operation especially around startup. 
> HBASE-23739 tried to read from the fs if unable to read TableDescriptor from 
> Master. This approach generally works but fails in standalone mode, where we 
> will have given up our startup attempt BEFORE the request 
> to Master for the TableDescriptor times out (the read from fs is never attempted).
> The suggested patch here does away w/ reading TableDescriptor and just has 
> BoundedRecoveredHFilesOutputSink write generic hfiles.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HBASE-24603) Zookeeper sync() call is async

2020-06-25 Thread Bharath Vissapragada (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17145751#comment-17145751
 ] 

Bharath Vissapragada commented on HBASE-24603:
--

Ok, I'll add a release note.

> we have been exposed to potential deadlock in the watcher for a long time?

This patch exposed the deadlock with inline ZK calls in the process method 
(luckily we had test coverage for it). Beyond that, I don't think there is any 
code that was exploiting this; otherwise we'd have seen deadlocks by now.


> Zookeeper sync() call is async
> --
>
> Key: HBASE-24603
> URL: https://issues.apache.org/jira/browse/HBASE-24603
> Project: HBase
>  Issue Type: Improvement
>  Components: master, regionserver
>Affects Versions: 3.0.0-alpha-1, 2.3.0, 1.7.0
>Reporter: Bharath Vissapragada
>Assignee: Bharath Vissapragada
>Priority: Critical
> Fix For: 3.0.0-alpha-1, 2.3.1, 1.7.0, 2.4.0
>
>
> Here is the method that does a sync() of lagging followers with the leader in the 
> quorum. We rely on this to see a consistent snapshot of ZK data from multiple 
> clients. However, the problem is that the underlying sync() call is actually 
> asynchronous since we are passing a 'null' callback.  See the ZK API 
> [doc|https://zookeeper.apache.org/doc/r3.5.7/apidocs/zookeeper-server/index.html]
>  for details. The end result is that sync() doesn't guarantee that it has 
> happened by the time it returns.
> {noformat}
>   /**
>* Forces a synchronization of this ZooKeeper client connection.
>* 
>* Executing this method before running other methods will ensure that the
>* subsequent operations are up-to-date and consistent as of the time that
>* the sync is complete.
>* 
>* This is used for compareAndSwap type operations where we need to read the
>* data of an existing node and delete or transition that node, utilizing 
> the
>* previously read version and data.  We want to ensure that the version 
> read
>* is up-to-date from when we begin the operation.
>*/
>   public void sync(String path) throws KeeperException {
> this.recoverableZooKeeper.sync(path, null, null);
>   }
> {noformat}
> We rely on this heavily (at least in the older branches that do ZK based 
> region assignment). In branch-1 we saw weird "BadVersionException" exceptions 
> in RITs because of the inconsistent view of the ZK snapshot. It could 
> manifest differently in other branches. Either way, this is something we need 
> to fix.
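
For illustration, a minimal sketch (an assumption-laden example, not necessarily
how the HBase fix is written; BlockingSync is a made-up name) of turning the
async sync() into a blocking call by passing a real callback and waiting on a
latch:

{code}
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;

public class BlockingSync {
  public static void sync(ZooKeeper zk, String path)
      throws KeeperException, InterruptedException {
    CountDownLatch done = new CountDownLatch(1);
    int[] rc = new int[1];
    // AsyncCallback.VoidCallback is invoked by the EventThread when the sync
    // has actually been processed by the quorum.
    zk.sync(path, (code, p, ctx) -> {
      rc[0] = code;
      done.countDown();
    }, null);
    done.await(); // a real implementation would likely use a timeout here
    if (rc[0] != KeeperException.Code.OK.intValue()) {
      throw KeeperException.create(KeeperException.Code.get(rc[0]), path);
    }
  }
}
{code}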



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on pull request #1774: HBASE-24389 Introduce new master rpc methods to locate meta region through root region

2020-06-25 Thread GitBox


Apache-HBase commented on pull request #1774:
URL: https://github.com/apache/hbase/pull/1774#issuecomment-649756191


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   4m  1s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ HBASE-11288.splittable-meta Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 32s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 22s |  HBASE-11288.splittable-meta 
passed  |
   | +1 :green_heart: |  compile  |   2m 22s |  HBASE-11288.splittable-meta 
passed  |
   | +1 :green_heart: |  shadedjars  |   5m 35s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  HBASE-11288.splittable-meta 
passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 26s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 26s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 26s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m 35s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 45s |  hbase-protocol-shaded in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m  1s |  hbase-client in the patch passed.  
|
   | +1 :green_heart: |  unit  |   0m 44s |  hbase-zookeeper in the patch 
passed.  |
   | -1 :x: |  unit  | 138m 11s |  hbase-server in the patch failed.  |
   |  |   | 174m 17s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.12 Server=19.03.12 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1774/19/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1774 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 1f66a22ad781 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-11288.splittable-meta / 7074997b01 |
   | Default Java | 1.8.0_232 |
   | unit | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1774/19/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1774/19/testReport/
 |
   | Max. process+thread count | 4503 (vs. ulimit of 12500) |
   | modules | C: hbase-protocol-shaded hbase-client hbase-zookeeper 
hbase-server U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1774/19/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #1774: HBASE-24389 Introduce new master rpc methods to locate meta region through root region

2020-06-25 Thread GitBox


Apache-HBase commented on pull request #1774:
URL: https://github.com/apache/hbase/pull/1774#issuecomment-649753729


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   3m 58s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ HBASE-11288.splittable-meta Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m  8s |  HBASE-11288.splittable-meta 
passed  |
   | +1 :green_heart: |  compile  |   2m 51s |  HBASE-11288.splittable-meta 
passed  |
   | +1 :green_heart: |  shadedjars  |   5m 47s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 26s |  hbase-client in 
HBASE-11288.splittable-meta failed.  |
   | -0 :warning: |  javadoc  |   0m 39s |  hbase-server in 
HBASE-11288.splittable-meta failed.  |
   | -0 :warning: |  javadoc  |   0m 17s |  hbase-zookeeper in 
HBASE-11288.splittable-meta failed.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 56s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 50s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 50s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   5m 48s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 27s |  hbase-client in the patch failed.  |
   | -0 :warning: |  javadoc  |   0m 16s |  hbase-zookeeper in the patch 
failed.  |
   | -0 :warning: |  javadoc  |   0m 39s |  hbase-server in the patch failed.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 59s |  hbase-protocol-shaded in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m  9s |  hbase-client in the patch passed.  
|
   | +1 :green_heart: |  unit  |   0m 43s |  hbase-zookeeper in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 130m 58s |  hbase-server in the patch passed.  
|
   |  |   | 170m  3s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.12 Server=19.03.12 base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1774/19/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/1774 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux fb0ed0e1f91c 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-11288.splittable-meta / 7074997b01 |
   | Default Java | 2020-01-14 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1774/19/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-client.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1774/19/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-server.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1774/19/artifact/yetus-jdk11-hadoop3-check/output/branch-javadoc-hbase-zookeeper.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1774/19/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-client.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1774/19/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-zookeeper.txt
 |
   | javadoc | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1774/19/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt
 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1774/19/testReport/
 |
   | Max. process+thread count | 4427 (vs. ulimit of 12500) |
   | modules | C: hbase-protocol-shaded hbase-client hbase-zookeeper 
hbase-server U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1774/19/console |
   | versions | git=2.17.1 maven=(cecedd343002696d0abb50b32b541b8a6ba2883f) |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] saintstack merged pull request #1955: HBASE-24616 Remove BoundedRecoveredHFilesOutputSink dependency on a T…

2020-06-25 Thread GitBox


saintstack merged pull request #1955:
URL: https://github.com/apache/hbase/pull/1955


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] saintstack commented on a change in pull request #1955: HBASE-24616 Remove BoundedRecoveredHFilesOutputSink dependency on a T…

2020-06-25 Thread GitBox


saintstack commented on a change in pull request #1955:
URL: https://github.com/apache/hbase/pull/1955#discussion_r445765597



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/wal/BoundedRecoveredHFilesOutputSink.java
##
@@ -191,50 +186,22 @@ public boolean keepRegionEvent(Entry entry) {
 return false;
   }
 
+  /**
+   * @return Returns a base HFile without compressions or encodings; good 
enough for recovery

Review comment:
   bq. Actually the timeout can be set per rpc request on the same 
connection.
   
   You are right. Could try this later.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-24633) Remove data locality and StoreFileCostFunction for replica regions out of balancer's cost calculation

2020-06-25 Thread Huaxiang Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-24633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17145729#comment-17145729
 ] 

Huaxiang Sun commented on HBASE-24633:
--

{code:java}
Data locality of replica regions in the balancer has a negative impact on the cluster's
"balanced" state. The HBase balancer's goal is to move regions so the cluster can reach a
"balanced" state (region count per region server, data locality, ops, etc.). Each time it
runs, it makes decisions that move the cluster closer to that "balanced" state. Some of the
factors actually support this direction, for example the primary region's data locality:
if the balancer decides region A should be moved to region server 1 for better data
locality, then over time region A's data locality on region server 1 will improve (flush
and compaction increase data locality) and the cluster becomes more stable. However, today
data locality for a replica region plays the same critical role as for a primary region,
and this factor actually pushes in the opposite direction. For example, if replica region
Ar is moved to region server 1 for better data locality, over time the data locality of Ar
will get worse (the primary region does all flushes and compactions, and HDFS may not
place a data copy on the same data node where the replica region resides). Some time
later, the balancer will need to move Ar again for better data locality. The solution I am
proposing is to remove this factor from the balancer's decision making; data locality for
replica regions is not a goal for the balancer. If we need better read latency for replica
regions, we need an extra mechanism to warm up the caches for replica regions.
{code}
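
For readers who have not looked at the balancer internals, the argument above can be
illustrated with a minimal, self-contained sketch of a weighted-sum cost calculation.
This is not HBase's actual StochasticLoadBalancer code: the class name, cost terms, and
multiplier values below are made up for illustration. It only shows how zeroing a term's
multiplier removes that term from the score the balancer tries to minimize.

{code:java}
/**
 * Hypothetical sketch of a weighted-sum cost calculation, loosely modeled on
 * how a stochastic balancer scores a candidate cluster layout. Names and
 * numbers are illustrative only.
 */
public class BalancerCostSketch {

  /** Total cost is a weighted sum: each cost function contributes multiplier * normalizedCost. */
  static double totalCost(double[] multipliers, double[] normalizedCosts) {
    double total = 0.0;
    for (int i = 0; i < multipliers.length; i++) {
      total += multipliers[i] * normalizedCosts[i];
    }
    return total;
  }

  public static void main(String[] args) {
    // Normalized cost terms: region-count skew, primary-region locality, replica-region locality.
    double[] costs = { 0.10, 0.30, 0.90 };

    // Today: the replica-locality term carries a multiplier just like the primary term,
    // so a poorly "local" replica keeps pulling the balancer toward moving it.
    double[] currentMultipliers = { 500, 25, 25 };

    // Proposal in the comment above: zero out (or shrink) the replica-locality
    // multiplier so it no longer drives region moves.
    double[] proposedMultipliers = { 500, 25, 0 };

    System.out.printf("cost with replica locality    = %.2f%n", totalCost(currentMultipliers, costs));
    System.out.printf("cost without replica locality = %.2f%n", totalCost(proposedMultipliers, costs));
  }
}
{code}

With the replica-locality multiplier at zero, a replica region's (inevitably poor)
locality no longer contributes to the score, so the balancer stops chasing it.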

> Remove data locality and StoreFileCostFunction for replica regions out of 
> balancer's cost calculation
> -
>
> Key: HBASE-24633
> URL: https://issues.apache.org/jira/browse/HBASE-24633
> Project: HBase
>  Issue Type: Improvement
>  Components: Balancer
>Affects Versions: 2.3.0
>Reporter: Huaxiang Sun
>Assignee: Huaxiang Sun
>Priority: Major
>
> We found that one of the clusters with read replicas enabled always balances lots of 
> replica regions. Going through the balancer's cost functions, we found that data 
> locality and StoreFileCost have the same multiplier for both primary and replica 
> regions. That is something we can improve. Data locality for replica regions should 
> not be a dominant factor for the balancer. We can either remove it from the 
> balancer's picture for now or give it a small multiplier.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] bitoffdev commented on a change in pull request #1959: HBASE-20819 Use TableDescriptor to replace HTableDescriptor in hbase-shell module

2020-06-25 Thread GitBox


bitoffdev commented on a change in pull request #1959:
URL: https://github.com/apache/hbase/pull/1959#discussion_r445722179



##
File path: hbase-shell/src/main/ruby/hbase/admin.rb
##
@@ -1359,26 +1366,26 @@ def list_locks
   @admin.getLocks
 end
 
-# Parse arguments and update HTableDescriptor accordingly
-def update_htd_from_arg(htd, arg)

Review comment:
   Thanks for the heads up! I just updated the "Release Note" field in Jira 
to state that this method was removed.
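
   As background for the HTableDescriptor-to-TableDescriptor migration this PR is part 
of, here is a minimal Java sketch of the builder-based API that replaces the old mutable 
descriptor. It assumes the HBase 2.x client API; the table name and column family below 
are purely illustrative.

{code:java}
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class TableDescriptorExample {
  public static void main(String[] args) {
    // Build an immutable TableDescriptor with the builder API that replaces
    // the deprecated, mutable HTableDescriptor. "demo_table" and "cf" are
    // placeholder names for this sketch.
    TableDescriptor td = TableDescriptorBuilder
        .newBuilder(TableName.valueOf("demo_table"))
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf"))
        .build();
    System.out.println(td.getTableName());
  }
}
{code}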





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #1974: Backport "HBASE-22504 Addendum: restore findCommonPrefix" to branch-2.3

2020-06-25 Thread GitBox


Apache-HBase commented on pull request #1974:
URL: https://github.com/apache/hbase/pull/1974#issuecomment-649743890


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m  0s |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m  6s |  https://github.com/apache/hbase/pull/1974 
does not apply to branch-2.3. Rebase required? Wrong Branch? See 
https://yetus.apache.org/documentation/in-progress/precommit-patchnames for 
help.  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hbase/pull/1974 |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-1974/1/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Resolved] (HBASE-22504) Optimize the MultiByteBuff#get(ByteBuffer, offset, len)

2020-06-25 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-22504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk resolved HBASE-22504.
--
Resolution: Fixed

Pushed addendum to branch-2 and branch-2.3. The whole class is 
deprecated/IA.Private as of HBASE-22044, so no need for this to go to master.

> Optimize the MultiByteBuff#get(ByteBuffer, offset, len)
> ---
>
> Key: HBASE-22504
> URL: https://issues.apache.org/jira/browse/HBASE-22504
> Project: HBase
>  Issue Type: Sub-task
>  Components: BucketCache
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.3.0
>
> Attachments: HBASE-22504.HBASE-21879.v01.patch
>
>
> In HBASE-22483,  we saw that the BucketCacheWriter thread was quite busy 
> [^BucketCacheWriter-is-busy.png],  the flame graph also indicated that the 
> ByteBufferArray#internalTransfer cost ~6% CPU (see 
> [async-prof-pid-25042-cpu-1.svg|https://issues.apache.org/jira/secure/attachment/12970294/async-prof-pid-25042-cpu-1.svg]).
> Because we use hbase.ipc.server.allocator.buffer.size=64KB, each 
> HFileBlock will be backed by a MultiByteBuff: one 64KB offheap ByteBuffer 
> and one small heap ByteBuffer.
> The path currently depends on MultiByteBuff#get(ByteBuffer, offset, len): 
> {code:java}
> RAMQueueEntry#writeToCache
> |--> ByteBufferIOEngine#write
> |--> ByteBufferArray#internalTransfer
> |--> ByteBufferArray$WRITER
> |--> MultiByteBuff#get(ByteBuffer, offset, len)
> {code}
> While the MultiByteBuff#get impl is simple and crude now, we can optimize this 
> implementation:
> {code:java}
>   @Override
>   public void get(ByteBuffer out, int sourceOffset, int length) {
>     checkRefCount();
>     // Not used from real read path actually. So not going with
>     // optimization
>     for (int i = 0; i < length; ++i) {
>       out.put(this.get(sourceOffset + i));
>     }
>   }
> {code}
>  
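
The optimization being discussed can be pictured with a standalone sketch that contrasts
the byte-by-byte copy above with a bulk copy that transfers whole slices of each backing
buffer. This is not the actual MultiByteBuff patch; it is plain java.nio code with
made-up helper names, assuming equally sized backing buffers, just to show why one bulk
put per segment beats one get/put pair per byte.

{code:java}
import java.nio.ByteBuffer;

public class BulkCopySketch {

  /** Naive approach: one absolute get per byte (conceptually what the quoted impl does). */
  static void copyByteByByte(ByteBuffer[] items, int itemSize, ByteBuffer out,
      int sourceOffset, int length) {
    for (int i = 0; i < length; i++) {
      int pos = sourceOffset + i;
      out.put(items[pos / itemSize].get(pos % itemSize));
    }
  }

  /** Bulk approach: locate the segment the offset falls in, then copy whole slices. */
  static void copyBulk(ByteBuffer[] items, int itemSize, ByteBuffer out,
      int sourceOffset, int length) {
    int idx = sourceOffset / itemSize;
    int offsetInItem = sourceOffset % itemSize;
    while (length > 0) {
      ByteBuffer src = items[idx].duplicate(); // independent position/limit
      int toCopy = Math.min(length, itemSize - offsetInItem);
      src.position(offsetInItem);
      src.limit(offsetInItem + toCopy);
      out.put(src); // single bulk copy per backing buffer
      length -= toCopy;
      offsetInItem = 0;
      idx++;
    }
  }

  public static void main(String[] args) {
    int itemSize = 8;
    ByteBuffer[] items = new ByteBuffer[4];
    for (int i = 0; i < items.length; i++) {
      items[i] = ByteBuffer.allocate(itemSize);
      for (int j = 0; j < itemSize; j++) {
        items[i].put((byte) (i * itemSize + j));
      }
    }
    ByteBuffer a = ByteBuffer.allocate(10);
    ByteBuffer b = ByteBuffer.allocate(10);
    copyByteByByte(items, itemSize, a, 5, 10);
    copyBulk(items, itemSize, b, 5, 10);
    a.flip();
    b.flip();
    System.out.println(a.equals(b)); // true: same bytes, far fewer per-byte calls
  }
}
{code}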



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

