[GitHub] [hbase] clarax commented on a change in pull request #3724: HBASE-26311 Balancer gets stuck in cohosted replica distribution

2021-10-13 Thread GitBox


clarax commented on a change in pull request #3724:
URL: https://github.com/apache/hbase/pull/3724#discussion_r728646989



##
File path: 
hbase-balancer/src/main/java/org/apache/hadoop/hbase/master/balancer/DoubleArrayCost.java
##
@@ -66,17 +66,21 @@ void applyCostsChange(Consumer<double[]> consumer) {
   }
 
   private static double computeCost(double[] stats) {
+    if (stats == null || stats.length == 0) {
+      return 0;
+    }
     double totalCost = 0;
     double total = getSum(stats);
 
     double count = stats.length;
     double mean = total / count;
-
     for (int i = 0; i < stats.length; i++) {
       double n = stats[i];
-      double diff = Math.abs(mean - n);
+      double diff = (mean - n) * (mean - n);
       totalCost += diff;
     }
+    // No need to compute standard deviation with division by cluster size when scaling.
+    totalCost = Math.sqrt(totalCost);

Review comment:
   Using the standard deviation instead of the linear (absolute) deviation assigns a 
higher penalty to outliers, and therefore unsticks the balancer when an even region 
count distribution cannot be achieved due to other constraints such as rack/host 
constraints.
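
   To illustrate the point (a standalone sketch, not the actual `DoubleArrayCost` class; `linearCost`/`stdevCost` are illustrative names): two load distributions with the same total absolute deviation are indistinguishable under the linear cost, while the squared-deviation cost still penalizes the one with larger outliers, so the balancer keeps an improvement gradient.

```java
// Sketch: linear (absolute) deviation vs. standard-deviation-style cost.
public class CostSketch {
  public static double linearCost(double[] stats) {
    double mean = mean(stats), cost = 0;
    for (double n : stats) cost += Math.abs(mean - n);
    return cost;
  }

  public static double stdevCost(double[] stats) {
    double mean = mean(stats), cost = 0;
    for (double n : stats) cost += (mean - n) * (mean - n);
    // No division by count, matching the patch: scaling makes it unnecessary.
    return Math.sqrt(cost);
  }

  private static double mean(double[] stats) {
    double total = 0;
    for (double n : stats) total += n;
    return total / stats.length;
  }

  public static void main(String[] args) {
    double[] mild = {8, 12, 8, 12};      // mean 10, small deviations
    double[] outlier = {10, 10, 6, 14};  // mean 10, two larger outliers
    // Linear cost cannot tell these apart: both are 8.0.
    System.out.println(linearCost(mild) + " vs " + linearCost(outlier));
    // Squared cost ranks the outlier-heavy distribution as worse.
    System.out.println(stdevCost(mild) + " vs " + stdevCost(outlier));
  }
}
```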




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] clarax commented on a change in pull request #3724: HBASE-26311 Balancer gets stuck in cohosted replica distribution

2021-10-13 Thread GitBox


clarax commented on a change in pull request #3724:
URL: https://github.com/apache/hbase/pull/3724#discussion_r728646024



##
File path: 
hbase-balancer/src/main/java/org/apache/hadoop/hbase/master/balancer/BalancerClusterState.java
##
@@ -84,12 +84,9 @@
   int[] regionIndexToServerIndex; // regionIndex -> serverIndex
   int[] initialRegionIndexToServerIndex; // regionIndex -> serverIndex (initial cluster state)
   int[] regionIndexToTableIndex; // regionIndex -> tableIndex
-  int[][] numRegionsPerServerPerTable; // serverIndex -> tableIndex -> # regions

Review comment:
   Thanks to the refactoring of the cached double cost array, I am doing some 
cleanup and refactoring here for the table skew cost function.








[GitHub] [hbase] clarax commented on a change in pull request #3723: HBASE-26309 Balancer tends to move regions to the server at the end o…

2021-10-13 Thread GitBox


clarax commented on a change in pull request #3723:
URL: https://github.com/apache/hbase/pull/3723#discussion_r728645078



##
File path: 
hbase-balancer/src/main/java/org/apache/hadoop/hbase/master/balancer/LoadCandidateGenerator.java
##
@@ -34,27 +35,51 @@ BalanceAction generate(BalancerClusterState cluster) {
   private int pickLeastLoadedServer(final BalancerClusterState cluster, int thisServer) {
     Integer[] servers = cluster.serverIndicesSortedByRegionCount;
 
-    int index = 0;
-    while (servers[index] == null || servers[index] == thisServer) {
-      index++;
-      if (index == servers.length) {
-        return -1;
+    int selectedIndex = -1;
+    double currentLargestRandom = -1;
+    for (int i = 0; i < servers.length; i++) {
+      if (servers[i] == null || servers[i] == thisServer) {
+        continue;
+      }
+      if (selectedIndex != -1
+        && cluster.getNumRegionsComparator().compare(servers[i], servers[selectedIndex]) != 0) {
+        // Exhausted servers of the same region count
+        break;
+      }
+      // we don't know how many servers have the same region count, we will randomly select one

Review comment:
   The servers are sorted by region count, and on a large cluster we often have 
many servers with the same count. Instead of always picking the first one, which 
may not work, we want to pick a random one from the candidates to unstick the 
balancer and generate more varied candidates.
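
   The single-pass random pick described above can be sketched as size-1 reservoir sampling over the leading run of equal-count servers (illustrative names, not the actual `LoadCandidateGenerator` fields; the patch itself keeps the largest random draw instead, which is equivalent):

```java
import java.util.Random;

// Sketch: walk servers sorted ascending by region count and, among the
// leading run sharing the lowest count, keep each candidate with
// probability 1/k (reservoir sampling of size 1) -- no pool, no boxing.
public class LeastLoadedPick {
  public static int pickLeastLoaded(int[] regionCounts, int[] sortedServers,
      int skipServer, Random rnd) {
    int selected = -1;
    int seen = 0; // candidates seen so far in the equal-count run
    for (int server : sortedServers) {
      if (server == skipServer) {
        continue;
      }
      if (selected != -1 && regionCounts[server] != regionCounts[selected]) {
        break; // left the run of least-loaded servers
      }
      seen++;
      if (rnd.nextInt(seen) == 0) { // replace with probability 1/seen
        selected = server;
      }
    }
    return selected;
  }
}
```

Each server in the least-loaded run ends up selected with equal probability, which is what spreads the generated moves across otherwise-identical candidates.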








[GitHub] [hbase] clarax commented on a change in pull request #3723: HBASE-26309 Balancer tends to move regions to the server at the end o…

2021-10-13 Thread GitBox


clarax commented on a change in pull request #3723:
URL: https://github.com/apache/hbase/pull/3723#discussion_r728644210



##
File path: 
hbase-balancer/src/main/java/org/apache/hadoop/hbase/master/balancer/LoadCandidateGenerator.java
##
@@ -34,27 +35,51 @@ BalanceAction generate(BalancerClusterState cluster) {
   private int pickLeastLoadedServer(final BalancerClusterState cluster, int thisServer) {
     Integer[] servers = cluster.serverIndicesSortedByRegionCount;
 
-    int index = 0;
-    while (servers[index] == null || servers[index] == thisServer) {
-      index++;
-      if (index == servers.length) {
-        return -1;
+    int selectedIndex = -1;
+    double currentLargestRandom = -1;
+    for (int i = 0; i < servers.length; i++) {
+      if (servers[i] == null || servers[i] == thisServer) {
+        continue;
+      }
+      if (selectedIndex != -1
+        && cluster.getNumRegionsComparator().compare(servers[i], servers[selectedIndex]) != 0) {
+        // Exhausted servers of the same region count
+        break;
+      }
+      // we don't know how many servers have the same region count, we will randomly select one

Review comment:
   Not using the reservoir sampling class because we keep this as a large 
contiguous int array. It would be too expensive to construct a sample pool and to 
box/unbox on every pick.








[GitHub] [hbase] clarax commented on pull request #3710: HBASE-26308 Sum of multiplier of cost functions is not populated properly when we have a shortcut for trigger

2021-10-13 Thread GitBox


clarax commented on pull request #3710:
URL: https://github.com/apache/hbase/pull/3710#issuecomment-942957768


   @Apache9 Any more concerns? I pushed the pre-commit check fixes.






[GitHub] [hbase] Apache-HBase commented on pull request #3746: HBASE-26348 Implement a special procedure to migrate rs group informa…

2021-10-13 Thread GitBox


Apache-HBase commented on pull request #3746:
URL: https://github.com/apache/hbase/pull/3746#issuecomment-942956485


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 24s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  prototool  |   0m  0s |  prototool was not available.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 19s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m  4s |  master passed  |
   | +1 :green_heart: |  compile  |   4m 37s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   1m 15s |  master passed  |
   | +1 :green_heart: |  spotbugs  |   6m  7s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 46s |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 36s |  the patch passed  |
   | +1 :green_heart: |  cc  |   4m 36s |  the patch passed  |
   | +1 :green_heart: |  javac  |   4m 36s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m 12s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  19m 52s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1 3.3.0.  |
   | +1 :green_heart: |  hbaseprotoc  |   1m 40s |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   6m 28s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 28s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  63m 33s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3746/2/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3746 |
   | Optional Tests | dupname asflicense cc hbaseprotoc prototool javac 
spotbugs hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 891f0f54b43d 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 4d27c4726d |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | Max. process+thread count | 96 (vs. ulimit of 3) |
   | modules | C: hbase-protocol-shaded hbase-server U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3746/2/console
 |
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hbase] Apache-HBase commented on pull request #3748: HBASE-26353 Support loadable dictionaries in hbase-compression-zstd

2021-10-13 Thread GitBox


Apache-HBase commented on pull request #3748:
URL: https://github.com/apache/hbase/pull/3748#issuecomment-942887933


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 31s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 48s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   0m 12s |  master passed  |
   | +1 :green_heart: |  spotbugs  |   0m 27s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m  8s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  the patch passed  |
   | -0 :warning: |  javac  |   0m 26s |  
hbase-compression_hbase-compression-zstd generated 1 new + 0 unchanged - 0 
fixed = 1 total (was 0)  |
   | -0 :warning: |  checkstyle  |   0m 13s |  
hbase-compression/hbase-compression-zstd: The patch generated 3 new + 2 
unchanged - 0 fixed = 5 total (was 2)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  20m 18s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1 3.3.0.  |
   | +1 :green_heart: |  spotbugs  |   0m 41s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 12s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  40m 19s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3748/4/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3748 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
checkstyle compile |
   | uname | Linux 3e331d211b0f 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 4d27c4726d |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | javac | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3748/4/artifact/yetus-general-check/output/diff-compile-javac-hbase-compression_hbase-compression-zstd.txt
 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3748/4/artifact/yetus-general-check/output/diff-checkstyle-hbase-compression_hbase-compression-zstd.txt
 |
   | Max. process+thread count | 95 (vs. ulimit of 3) |
   | modules | C: hbase-compression/hbase-compression-zstd U: 
hbase-compression/hbase-compression-zstd |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3748/4/console
 |
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hbase] Apache-HBase commented on pull request #3748: HBASE-26353 Support loadable dictionaries in hbase-compression-zstd

2021-10-13 Thread GitBox


Apache-HBase commented on pull request #3748:
URL: https://github.com/apache/hbase/pull/3748#issuecomment-942884897


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  2s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 18s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 20s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   9m  8s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m  4s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 20s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 20s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   9m  6s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 46s |  hbase-compression-zstd in the 
patch passed.  |
   |  |   |  32m 49s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3748/4/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3748 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux c0328d761d07 4.15.0-143-generic #147-Ubuntu SMP Wed Apr 14 
16:10:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 4d27c4726d |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3748/4/testReport/
 |
   | Max. process+thread count | 262 (vs. ulimit of 3) |
   | modules | C: hbase-compression/hbase-compression-zstd U: 
hbase-compression/hbase-compression-zstd |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3748/4/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hbase] Apache-HBase commented on pull request #3748: HBASE-26353 Support loadable dictionaries in hbase-compression-zstd

2021-10-13 Thread GitBox


Apache-HBase commented on pull request #3748:
URL: https://github.com/apache/hbase/pull/3748#issuecomment-942883278


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 26s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  4s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m  9s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 21s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   8m 13s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   3m 53s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 20s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 20s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   8m  9s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 49s |  hbase-compression-zstd in the 
patch passed.  |
   |  |   |  28m 12s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3748/4/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3748 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux e587a4ac0486 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 4d27c4726d |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3748/4/testReport/
 |
   | Max. process+thread count | 277 (vs. ulimit of 3) |
   | modules | C: hbase-compression/hbase-compression-zstd U: 
hbase-compression/hbase-compression-zstd |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3748/4/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hbase] Apache9 commented on a change in pull request #3748: HBASE-26353 Support loadable dictionaries in hbase-compression-zstd

2021-10-13 Thread GitBox


Apache9 commented on a change in pull request #3748:
URL: https://github.com/apache/hbase/pull/3748#discussion_r728575130



##
File path: 
hbase-compression/hbase-compression-zstd/src/main/java/org/apache/hadoop/hbase/io/compress/zstd/ZstdCodec.java
##
@@ -123,4 +141,59 @@ static int getBufferSize(Configuration conf) {
     return size > 0 ? size : 256 * 1024; // Don't change this default
   }
 
+  static byte[] getDictionary(final Configuration conf) {
+    // Get the dictionary path, if set
+    final String s = conf.get(ZSTD_DICTIONARY_KEY);
+    if (s == null) {
+      return null;
+    }
+
+    // Create the dictionary loading cache if we haven't already
+    if (DICTIONARY_CACHE == null) {
+      synchronized (ZstdCodec.class) {
+        if (DICTIONARY_CACHE == null) {
+          DICTIONARY_CACHE = CacheBuilder.newBuilder()

Review comment:
   nit: better to extract the creation code into a separate method? It would 
make the code easier to read.
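
   The extraction the comment suggests might look roughly like this sketch (illustrative: a plain `ConcurrentHashMap` stands in for the Guava `LoadingCache` used in the patch, and `DictionarySketch`/`dictionaryCache` are made-up names):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Function;

// Sketch: keep getDictionary() short by delegating lazy cache creation
// to a separate method, as the review comment suggests.
public class DictionarySketch {
  private static volatile ConcurrentMap<String, byte[]> cache;

  public static byte[] getDictionary(String path, Function<String, byte[]> loader) {
    if (path == null) {
      return null; // no dictionary configured
    }
    return dictionaryCache().computeIfAbsent(path, loader);
  }

  // The double-checked creation logic lives in its own method,
  // so the caller reads as a simple lookup.
  private static ConcurrentMap<String, byte[]> dictionaryCache() {
    if (cache == null) {
      synchronized (DictionarySketch.class) {
        if (cache == null) {
          cache = new ConcurrentHashMap<>();
        }
      }
    }
    return cache;
  }
}
```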

##
File path: 
hbase-compression/hbase-compression-zstd/src/main/java/org/apache/hadoop/hbase/io/compress/zstd/ZstdCodec.java
##
@@ -123,4 +141,59 @@ static int getBufferSize(Configuration conf) {
     return size > 0 ? size : 256 * 1024; // Don't change this default
   }
 
+  static byte[] getDictionary(final Configuration conf) {
+    // Get the dictionary path, if set
+    final String s = conf.get(ZSTD_DICTIONARY_KEY);
+    if (s == null) {
+      return null;
+    }
+
+    // Create the dictionary loading cache if we haven't already
+    if (DICTIONARY_CACHE == null) {
+      synchronized (ZstdCodec.class) {
+        if (DICTIONARY_CACHE == null) {
+          DICTIONARY_CACHE = CacheBuilder.newBuilder()
+            .expireAfterAccess(1, TimeUnit.HOURS)
+            .build(
+              new CacheLoader<String, byte[]>() {
+                public byte[] load(String s) throws Exception {
+                  final Path path = new Path(s);
+                  final FileSystem fs = FileSystem.get(path.toUri(), conf);
+                  final FileStatus stat = fs.getFileStatus(path);
+                  if (!stat.isFile()) {
+                    throw new IllegalArgumentException(s + " is not a file");
+                  }
+                  final int limit = conf.getInt(ZSTD_DICTIONARY_MAX_SIZE_KEY,
+                    DEFAULT_ZSTD_DICTIONARY_MAX_SIZE);
+                  if (stat.getLen() > limit) {
+                    throw new IllegalArgumentException("Dictionary " + s + " is too large" +
+                      ", size=" + stat.getLen() + ", limit=" + limit);
+                  }
+                  final ByteArrayOutputStream baos = new ByteArrayOutputStream();
+                  final byte[] buffer = new byte[8192];
+                  try (final FSDataInputStream in = fs.open(path)) {
+                    int n;
+                    do {
+                      n = in.read(buffer);
+                      if (n > 0) {
+                        baos.write(buffer, 0, n);
+                       }

Review comment:
   nits: indent








[GitHub] [hbase] Apache-HBase commented on pull request #3749: HBASE-26328 Clone snapshot doesn't load reference files into FILE SFT impl

2021-10-13 Thread GitBox


Apache-HBase commented on pull request #3749:
URL: https://github.com/apache/hbase/pull/3749#issuecomment-942850724


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  8s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  4s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ HBASE-26067 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   6m  3s |  HBASE-26067 passed  |
   | +1 :green_heart: |  compile  |   1m 39s |  HBASE-26067 passed  |
   | +1 :green_heart: |  shadedjars  |  10m 30s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  HBASE-26067 passed  |
   | -0 :warning: |  patch  |  11m 36s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 44s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 38s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 38s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |  10m 34s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 52s |  hbase-server generated 1 new + 86 
unchanged - 0 fixed = 87 total (was 86)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 255m 50s |  hbase-server in the patch failed.  |
   |  |   | 297m  2s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3749/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3749 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux eb4b82e0a4d2 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 
19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-26067 / 9c9789bbd2 |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   | javadoc | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3749/1/artifact/yetus-jdk11-hadoop3-check/output/diff-javadoc-javadoc-hbase-server.txt
 |
   | unit | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3749/1/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3749/1/testReport/
 |
   | Max. process+thread count | 3011 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3749/1/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hbase] Apache-HBase commented on pull request #3749: HBASE-26328 Clone snapshot doesn't load reference files into FILE SFT impl

2021-10-13 Thread GitBox


Apache-HBase commented on pull request #3749:
URL: https://github.com/apache/hbase/pull/3749#issuecomment-942836445


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 12s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ HBASE-26067 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 41s |  HBASE-26067 passed  |
   | +1 :green_heart: |  compile  |   1m 11s |  HBASE-26067 passed  |
   | +1 :green_heart: |  shadedjars  |   9m 26s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  HBASE-26067 passed  |
   | -0 :warning: |  patch  |  10m 20s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 24s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 11s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 11s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   9m 28s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 40s |  hbase-server generated 2 new + 21 
unchanged - 0 fixed = 23 total (was 21)  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 242m 46s |  hbase-server in the patch failed.  |
   |  |   | 277m 43s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3749/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3749 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux e9277cd5c5f5 4.15.0-142-generic #146-Ubuntu SMP Tue Apr 13 
01:11:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-26067 / 9c9789bbd2 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | javadoc | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3749/1/artifact/yetus-jdk8-hadoop3-check/output/diff-javadoc-javadoc-hbase-server.txt
 |
   | unit | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3749/1/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-server.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3749/1/testReport/
 |
   | Max. process+thread count | 4362 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3749/1/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[jira] [Updated] (HBASE-26328) Clone snapshot doesn't load reference files into FILE SFT impl

2021-10-13 Thread Wellington Chevreuil (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil updated HBASE-26328:
-
Status: Patch Available  (was: In Progress)

> Clone snapshot doesn't load reference files into FILE SFT impl
> --
>
> Key: HBASE-26328
> URL: https://issues.apache.org/jira/browse/HBASE-26328
> Project: HBase
>  Issue Type: Sub-task
>  Components: HFile, snapshots
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Major
>
> After cloning a snapshot from a FILE SFT enabled table, noticed that none of 
> the cloned table files were added into the FILE SFT meta files, in fact, FILE 
> SFT meta dir didn't even get created. Scanning this cloned table gives no 
> results, as none of the files are tracked. I believe we need to call 
> StoreFileTracker.add during the snapshot cloning.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] Apache-HBase commented on pull request #3751: HBASE-26271: Cleanup the broken store files under data directory

2021-10-13 Thread GitBox


Apache-HBase commented on pull request #3751:
URL: https://github.com/apache/hbase/pull/3751#issuecomment-942732944


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  0s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  4s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ HBASE-26067 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 16s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 58s |  HBASE-26067 passed  |
   | +1 :green_heart: |  compile  |   3m  4s |  HBASE-26067 passed  |
   | +1 :green_heart: |  shadedjars  |   8m  8s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 54s |  HBASE-26067 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 16s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 46s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 57s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 57s |  the patch passed  |
   | -1 :x: |  shadedjars  |   3m 48s |  patch has 10 errors when building our 
shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 24s |  hbase-client generated 1 new + 3 
unchanged - 0 fixed = 4 total (was 3)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 49s |  hbase-protocol-shaded in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   0m 36s |  hbase-hadoop-compat in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m 15s |  hbase-client in the patch passed.  
|
   | +1 :green_heart: |  unit  | 157m 32s |  hbase-server in the patch passed.  
|
   | +1 :green_heart: |  unit  |   3m 49s |  hbase-rest in the patch passed.  |
   |  |   | 198m 24s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3751/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3751 |
   | JIRA Issue | HBASE-26271 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 7b9f4013b826 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-26067 / 9c9789bbd2 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | shadedjars | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3751/1/artifact/yetus-jdk8-hadoop3-check/output/patch-shadedjars.txt
 |
   | javadoc | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3751/1/artifact/yetus-jdk8-hadoop3-check/output/diff-javadoc-javadoc-hbase-client.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3751/1/testReport/
 |
   | Max. process+thread count | 4573 (vs. ulimit of 3) |
   | modules | C: hbase-protocol-shaded hbase-hadoop-compat hbase-client 
hbase-server hbase-rest U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3751/1/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hbase] apurtell commented on a change in pull request #3748: HBASE-26353 Support loadable dictionaries in hbase-compression-zstd

2021-10-13 Thread GitBox


apurtell commented on a change in pull request #3748:
URL: https://github.com/apache/hbase/pull/3748#discussion_r728433970



##
File path: 
hbase-compression/hbase-compression-zstd/src/main/java/org/apache/hadoop/hbase/io/compress/zstd/ZstdCodec.java
##
@@ -123,4 +137,42 @@ static int getBufferSize(Configuration conf) {
 return size > 0 ? size : 256 * 1024; // Don't change this default
   }
 
+  static LoadingCache<Configuration, byte[]> CACHE = CacheBuilder.newBuilder()

Review comment:
   This is definitely a concern. 
   
   In the latest version of the patch I override hashCode in 
CompoundConfiguration so we are doing something better than object identity 
when caching the dictionaries for the store writer case. It is kind of 
expensive to compute the hashCode given how CompoundConfiguration works but at 
least we do not do it that often, and not in performance critical code. Once a 
compressor or decompressor is created it is reused for the lifetime of the 
reader or writer. 
   
   Otherwise we are using object identity. That is not the worst thing, at 
least. The cache is capped at 100 and will also expire entries if they are not 
used for one hour. (And those parameters can be adjusted to your taste.)
   
   Let me try your suggestion. I was thinking we could avoid doing two lookups 
into the Configuration -- to get the boolean, and then the path, for the key -- 
but that hashCode calculation is pretty expensive. Getting the path from the 
configuration object and using that would be less.
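   Keying the cache on the dictionary path pulled from the Configuration, rather than on the Configuration object itself, could look roughly like this. This is a minimal sketch with hypothetical names (`PathKeyedDictCache`, `getDictionary`), not the actual ZstdCodec code:

   ```java
   import java.util.Map;
   import java.util.concurrent.ConcurrentHashMap;
   import java.util.function.Function;

   // Sketch: cache loaded dictionaries keyed by the configured file path, so no
   // expensive Configuration.hashCode() is needed. Names here are hypothetical.
   public class PathKeyedDictCache {
     private static final Map<String, byte[]> CACHE = new ConcurrentHashMap<>();

     // 'loader' stands in for reading the dictionary bytes from the filesystem.
     static byte[] getDictionary(String dictPath, Function<String, byte[]> loader) {
       // computeIfAbsent runs the loader at most once per path; later callers
       // for the same path reuse the cached bytes.
       return CACHE.computeIfAbsent(dictPath, loader);
     }
   }
   ```

   Unlike the Guava LoadingCache in the patch, this sketch has no size cap or expiry; it only illustrates trading the costly CompoundConfiguration hashCode for a cheap String key.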








[GitHub] [hbase] apurtell commented on a change in pull request #3748: HBASE-26353 Support loadable dictionaries in hbase-compression-zstd

2021-10-13 Thread GitBox


apurtell commented on a change in pull request #3748:
URL: https://github.com/apache/hbase/pull/3748#discussion_r728434301



##
File path: 
hbase-compression/hbase-compression-zstd/src/main/java/org/apache/hadoop/hbase/io/compress/zstd/ZstdCodec.java
##
@@ -123,4 +137,42 @@ static int getBufferSize(Configuration conf) {
 return size > 0 ? size : 256 * 1024; // Don't change this default
   }
 
+  static LoadingCache<Configuration, byte[]> CACHE = CacheBuilder.newBuilder()
+.maximumSize(100)
+.expireAfterAccess(1, TimeUnit.HOURS)
+.build(
+  new CacheLoader<Configuration, byte[]>() {
+public byte[] load(Configuration conf) throws Exception {
+  final String s = conf.get(ZSTD_DICTIONARY_FILE_KEY);
+  if (s == null) {
+throw new IllegalArgumentException(ZSTD_DICTIONARY_FILE_KEY + " is not set");
+  }
+  final Path p = new Path(s);
+  final ByteArrayOutputStream baos = new ByteArrayOutputStream();
+  final byte[] buffer = new byte[8192];
+  try (final FSDataInputStream in = FileSystem.get(p.toUri(), conf).open(p)) {

Review comment:
   Yes. If there is a size limit and it is exceeded the codec load should 
be rejected by throwing a RuntimeException probably. 
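   A size-limited read along those lines might look like the following. This is a hedged sketch under assumed names (`BoundedDictionaryReader`, `readWithLimit`) and an assumed limit parameter; it is not the actual patch code:

   ```java
   import java.io.ByteArrayOutputStream;
   import java.io.IOException;
   import java.io.InputStream;

   // Sketch: stream the dictionary file, but abort with a RuntimeException as
   // soon as the accumulated size would exceed the configured limit.
   public class BoundedDictionaryReader {
     static byte[] readWithLimit(InputStream in, int maxBytes) throws IOException {
       final ByteArrayOutputStream baos = new ByteArrayOutputStream();
       final byte[] buffer = new byte[8192];
       int n;
       while ((n = in.read(buffer)) > 0) {
         if (baos.size() + n > maxBytes) {
           // Reject the codec load rather than buffering an arbitrarily large file.
           throw new RuntimeException("dictionary exceeds size limit of " + maxBytes + " bytes");
         }
         baos.write(buffer, 0, n);
       }
       return baos.toByteArray();
     }
   }
   ```

   Checking the limit before each write keeps memory bounded even if the file on HDFS is far larger than expected.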








[GitHub] [hbase] Apache-HBase commented on pull request #3749: HBASE-26328 Clone snapshot doesn't load reference files into FILE SFT impl

2021-10-13 Thread GitBox


Apache-HBase commented on pull request #3749:
URL: https://github.com/apache/hbase/pull/3749#issuecomment-942721122


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  0s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ HBASE-26067 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 29s |  HBASE-26067 passed  |
   | +1 :green_heart: |  compile  |   3m 23s |  HBASE-26067 passed  |
   | +1 :green_heart: |  checkstyle  |   1m 12s |  HBASE-26067 passed  |
   | +1 :green_heart: |  spotbugs  |   2m 16s |  HBASE-26067 passed  |
   | -0 :warning: |  patch  |   2m 24s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m  5s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 24s |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 24s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   1m 10s |  hbase-server: The patch 
generated 33 new + 25 unchanged - 0 fixed = 58 total (was 25)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  20m 25s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1 3.3.0.  |
   | +1 :green_heart: |  spotbugs  |   2m 22s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 13s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  52m 25s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3749/1/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3749 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
checkstyle compile |
   | uname | Linux 9af362968b90 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 
19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-26067 / 9c9789bbd2 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3749/1/artifact/yetus-general-check/output/diff-checkstyle-hbase-server.txt
 |
   | Max. process+thread count | 86 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3749/1/console
 |
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hbase] apurtell edited a comment on pull request #3748: HBASE-26353 Support loadable dictionaries in hbase-compression-zstd

2021-10-13 Thread GitBox


apurtell edited a comment on pull request #3748:
URL: https://github.com/apache/hbase/pull/3748#issuecomment-942711578


   > Haven't read the code yet, but is it possible to copy the dict into the 
hbase storage so it is controlled by us?
   
   @Apache9 I was thinking about writing the dictionary used to compress values 
in an HFile or WAL into the HFile or WAL in the metadata section, but there 
would need to be format extensions to the WAL (perhaps just an extra field in 
the header and/or trailer PB). Hopefully there can be some re-use of meta 
blocks for HFiles. But this raises questions. There should be some way for a 
codec to read and write metadata into the container of the thing they are 
processing, but we don't have API support for that. I would consider it future 
work, but definitely of interest. The interest is ensuring that HFiles have all 
of the information they need to read themselves added at write time. 
   
   Otherwise I think the current scheme is ok. The operator is already in 
charge of their table schema and compression codec dependencies (like 
deployment of native link libraries). This is an incremental responsibility... 
if you put a compression dictionary attribute into your schema, don't lose the 
dictionary. 
   
   Mostly it is already true that HFiles carry all of the information within 
their trailer or meta blocks a reader requires to process them. I can think of 
one exception, that being encryption, where the data encryption key (DEK) is 
stored in the HFile, but the master encryption key (MEK) used to encrypt the 
DEK is by design kept in a trust store or HSM and if the MEK is lost all data 
is not decryptable. There are some parallels between external MEK data and 
external compression dictionary data. One could claim the same general rules 
for managing them apply. The difference is the dictionary is not sensitive and 
can be copied into the file, whereas the master encryption key must be 
carefully guarded and not written colocated with data.






[jira] [Commented] (HBASE-26353) Support loadable dictionaries in hbase-compression-zstd

2021-10-13 Thread Andrew Kyle Purtell (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17428482#comment-17428482
 ] 

Andrew Kyle Purtell commented on HBASE-26353:
-

Let me re-run the [performance evaluation from 
HBASE-26259|https://issues.apache.org/jira/browse/HBASE-26259?focusedCommentId=17422934&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17422934]
 but with synthetic small value data and compare speed and efficiency with 
precomputed dictionary vs without. Gains are expected but I'd like to present 
some hard comparison data here.

> Support loadable dictionaries in hbase-compression-zstd
> ---
>
> Key: HBASE-26353
> URL: https://issues.apache.org/jira/browse/HBASE-26353
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Kyle Purtell
>Assignee: Andrew Kyle Purtell
>Priority: Minor
> Fix For: 2.5.0, 3.0.0-alpha-2
>
>
> ZStandard supports initialization of compressors and decompressors with a 
> precomputed dictionary, which can dramatically improve and speed up 
> compression of tables with small values. For more details, please see [The 
> Case For Small Data 
> Compression|https://github.com/facebook/zstd#the-case-for-small-data-compression].
>  
> If a table is going to have a lot of small values and the user can put 
> together a representative set of files that can be used to train a dictionary 
> for compressing those values, a dictionary can be trained with the {{zstd}} 
> command line utility, available in any zstandard package for your favorite OS:
> Training:
> {noformat}
> $ zstd --maxdict=1126400 --train-fastcover=shrink \
> -o mytable.dict training_files/*
> Trying 82 different sets of parameters
> ...
> k=674  
> d=8
> f=20
> steps=40
> split=75
> accel=1
> Save dictionary of size 1126400 into file mytable.dict
> {noformat}
> Deploy the dictionary file to HDFS or S3, etc.
> Create the table:
> {noformat}
> hbase> create "mytable", 
>   ... ,
>   CONFIGURATION => {
> 'hbase.io.compress.zstd.level' => '6',
> 'hbase.io.compress.zstd.dictionary' => true,
> 'hbase.io.compress.zstd.dictionary.file' =>  
> 'hdfs://nn/zdicts/mytable.dict'
>   }
> {noformat}
> Now start storing data. Compression results even for small values will be 
> excellent.
> Note: Beware, if the dictionary is lost, the data will not be decompressible.
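The final caveat above generalizes to any preset-dictionary codec. As a minimal, self-contained sketch of the same behavior, the example below uses the JDK's DEFLATE preset-dictionary support standing in for zstd (zstd bindings are not assumed here; the class and variable names are hypothetical):

```java
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

// Round-trips a small value through DEFLATE with a preset dictionary.
// Like zstd, DEFLATE requires the decompressor to supply the exact
// dictionary used at compression time; lose it and the data is gone.
public class DictDemo {

  public static byte[] roundTrip(byte[] dict, byte[] value) {
    Deflater def = new Deflater(6);
    def.setDictionary(dict); // analogous to loading a trained zstd dictionary
    def.setInput(value);
    def.finish();
    byte[] comp = new byte[256];
    int clen = def.deflate(comp);
    def.end();

    Inflater inf = new Inflater();
    inf.setInput(comp, 0, clen);
    byte[] plain = new byte[256];
    try {
      int plen = inf.inflate(plain); // returns 0: the stream demands its dictionary
      if (inf.needsDictionary()) {
        inf.setDictionary(dict);     // without the exact dictionary, this step is impossible
        plen = inf.inflate(plain);
      }
      byte[] out = new byte[plen];
      System.arraycopy(plain, 0, out, 0, plen);
      return out;
    } catch (DataFormatException e) {
      throw new IllegalStateException(e);
    } finally {
      inf.end();
    }
  }

  public static void main(String[] args) {
    // Stand-in for a trained dictionary: substrings common to the small values.
    byte[] dict = "row-:cf:qualifier:some-small-value-".getBytes();
    byte[] value = "row-00042:cf:qualifier:some-small-value-7".getBytes();
    System.out.println(new String(roundTrip(dict, value)));
  }
}
```

Running `main` recovers the original value only because the same `dict` bytes were supplied on both sides, which is precisely the operational caveat noted in the description.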



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] apurtell commented on pull request #3748: HBASE-26353 Support loadable dictionaries in hbase-compression-zstd

2021-10-13 Thread GitBox


apurtell commented on pull request #3748:
URL: https://github.com/apache/hbase/pull/3748#issuecomment-942709235


   Let me re-run the [performance evaluation from HBASE-26259 
](https://issues.apache.org/jira/browse/HBASE-26259?focusedCommentId=17422934&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17422934)
 but with synthetic small value data and compare speed and efficiency with 
precomputed dictionary vs without. Gains are expected but I'd like to present 
some hard comparison data here.






[GitHub] [hbase] apurtell commented on a change in pull request #3748: HBASE-26353 Support loadable dictionaries in hbase-compression-zstd

2021-10-13 Thread GitBox


apurtell commented on a change in pull request #3748:
URL: https://github.com/apache/hbase/pull/3748#discussion_r728434301



##
File path: 
hbase-compression/hbase-compression-zstd/src/main/java/org/apache/hadoop/hbase/io/compress/zstd/ZstdCodec.java
##
@@ -123,4 +137,42 @@ static int getBufferSize(Configuration conf) {
 return size > 0 ? size : 256 * 1024; // Don't change this default
   }
 
+  static LoadingCache<Configuration, byte[]> CACHE = CacheBuilder.newBuilder()
+    .maximumSize(100)
+    .expireAfterAccess(1, TimeUnit.HOURS)
+    .build(
+      new CacheLoader<Configuration, byte[]>() {
+        public byte[] load(Configuration conf) throws Exception {
+          final String s = conf.get(ZSTD_DICTIONARY_FILE_KEY);
+          if (s == null) {
+            throw new IllegalArgumentException(ZSTD_DICTIONARY_FILE_KEY + " is not set");
+          }
+          final Path p = new Path(s);
+          final ByteArrayOutputStream baos = new ByteArrayOutputStream();
+          final byte[] buffer = new byte[8192];
+          try (final FSDataInputStream in = FileSystem.get(p.toUri(), conf).open(p)) {

Review comment:
   Yes








[GitHub] [hbase] apurtell commented on a change in pull request #3748: HBASE-26353 Support loadable dictionaries in hbase-compression-zstd

2021-10-13 Thread GitBox


apurtell commented on a change in pull request #3748:
URL: https://github.com/apache/hbase/pull/3748#discussion_r728433970



##
File path: 
hbase-compression/hbase-compression-zstd/src/main/java/org/apache/hadoop/hbase/io/compress/zstd/ZstdCodec.java
##
@@ -123,4 +137,42 @@ static int getBufferSize(Configuration conf) {
 return size > 0 ? size : 256 * 1024; // Don't change this default
   }
 
+  static LoadingCache<Configuration, byte[]> CACHE = CacheBuilder.newBuilder()

Review comment:
   This is definitely a concern. In the latest version of the patch I 
override hashCode in CompoundConfiguration so we are doing something better 
than object identity when caching the dictionaries for the store writer case. 
   
   Otherwise we are using object identity. That is not the worst thing, at 
least. The cache is capped at 100 and will also expire entries if they are not 
used for one hour. (And those parameters can be adjusted to your taste.)
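   For illustration, the sizing behavior described (capacity 100, evict least-recently-used) can be sketched with a plain LinkedHashMap. This is a hypothetical simplification of the Guava LoadingCache the patch actually uses, and it omits the one-hour time-based expiry:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;

// A minimal size-capped, load-on-miss LRU cache. The real patch uses
// Guava's LoadingCache, which layers time-based expiry on top of this.
public class DictCache<K, V> {

  private final Function<K, V> loader;
  private final Map<K, V> map;

  public DictCache(final int maxSize, Function<K, V> loader) {
    this.loader = loader;
    // accessOrder=true iterates least-recently-used first, so
    // removeEldestEntry evicts the LRU entry once the cap is exceeded.
    this.map = new LinkedHashMap<K, V>(16, 0.75f, true) {
      @Override
      protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxSize;
      }
    };
  }

  public synchronized V get(K key) {
    V v = map.get(key);
    if (v == null) {
      v = loader.apply(key); // load on miss, like CacheLoader.load
      map.put(key, v);       // put may evict the eldest entry
    }
    return v;
  }

  public synchronized int size() {
    return map.size();
  }

  public static void main(String[] args) {
    DictCache<Integer, String> cache = new DictCache<>(100, k -> "dict-" + k);
    for (int i = 0; i < 150; i++) {
      cache.get(i);
    }
    System.out.println(cache.size()); // prints 100: capped as described
  }
}
```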












[GitHub] [hbase] wchevreuil commented on a change in pull request #3749: HBASE-26328 Clone snapshot doesn't load reference files into FILE SFT impl

2021-10-13 Thread GitBox


wchevreuil commented on a change in pull request #3749:
URL: https://github.com/apache/hbase/pull/3749#discussion_r728424375



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/CloneSnapshotProcedure.java
##
@@ -453,56 +453,24 @@ private void postCloneSnapshot(final MasterProcedureEnv env)
 List<RegionInfo> newRegions,
 final CreateHdfsRegions hdfsRegionHandler) throws IOException {
 final MasterFileSystem mfs = env.getMasterServices().getMasterFileSystem();
-final Path tempdir = mfs.getTempDir();

Review comment:
   Snapshot recovery is another feature that was relying on temp dirs & 
renames. I took the decision to allow restored dirs to be created on the final 
path already; my understanding is that tables being restored/cloned will not be 
enabled, and if the snapshot fails at this stage, it will not get to the meta 
update stages, meaning there will be no inconsistencies. There would be a 
need to identify and clean out leftovers of failed snapshot recoveries. Any 
thoughts/suggestions?








[GitHub] [hbase] wchevreuil commented on a change in pull request #3749: HBASE-26328 Clone snapshot doesn't load reference files into FILE SFT impl

2021-10-13 Thread GitBox


wchevreuil commented on a change in pull request #3749:
URL: https://github.com/apache/hbase/pull/3749#discussion_r728418723



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/storefiletracker/SnapshotStoreFileTracker.java
##
@@ -0,0 +1,77 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver.storefiletracker;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.regionserver.StoreContext;
+import org.apache.hadoop.hbase.regionserver.StoreFileInfo;
+import org.apache.hbase.thirdparty.com.google.common.base.Preconditions;
+import org.apache.yetus.audience.InterfaceAudience;
+
+import java.io.IOException;
+import java.util.List;
+
+/**
+ * Extends MigrationStoreFileTracker for Snapshot restore/clone specific case.
+ * When restoring/cloning snapshots, new regions are created with reference files to the
+ * original regions files. This work is done in snapshot specific classes. We need to somehow
+ * initialize these reference files in the configured StoreFileTracker. Once snapshot logic has
+ * cloned the store dir and created the references, it should set the list of reference files in
+ * SourceTracker.setReferenceFiles then invoke the load method.
+ * 
+ */
+@InterfaceAudience.Private
+public class SnapshotStoreFileTracker extends MigrationStoreFileTracker {
+
+  protected SnapshotStoreFileTracker(Configuration conf, boolean isPrimaryReplica,
+    StoreContext ctx) {
+    super(conf, isPrimaryReplica, ctx);
+    Preconditions.checkArgument(src instanceof SourceTracker,

Review comment:
   This is to ensure this will never be used as a general purpose 
"migration" tracker. 








[GitHub] [hbase] wchevreuil commented on a change in pull request #3749: HBASE-26328 Clone snapshot doesn't load reference files into FILE SFT impl

2021-10-13 Thread GitBox


wchevreuil commented on a change in pull request #3749:
URL: https://github.com/apache/hbase/pull/3749#discussion_r728418138



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/storefiletracker/SnapshotStoreFileTracker.java
##
@@ -0,0 +1,77 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver.storefiletracker;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.regionserver.StoreContext;
+import org.apache.hadoop.hbase.regionserver.StoreFileInfo;
+import org.apache.hbase.thirdparty.com.google.common.base.Preconditions;
+import org.apache.yetus.audience.InterfaceAudience;
+
+import java.io.IOException;
+import java.util.List;
+
+/**
+ * Extends MigrationStoreFileTracker for Snapshot restore/clone specific case.
+ * When restoring/cloning snapshots, new regions are created with reference files to the
+ * original regions files. This work is done in snapshot specific classes. We need to somehow
+ * initialize these reference files in the configured StoreFileTracker. Once snapshot logic has
+ * cloned the store dir and created the references, it should set the list of reference files in
+ * SourceTracker.setReferenceFiles then invoke the load method.
+ * 
+ */
+@InterfaceAudience.Private
+public class SnapshotStoreFileTracker extends MigrationStoreFileTracker {

Review comment:
   Created this as an extension of MigrationStoreFileTracker for the 
snapshot recovery case only. Currently, MigrationStoreFileTracker is package 
private, and I didn't want to expose it just for the sake of snapshot clone/restore.








[GitHub] [hbase] wchevreuil commented on a change in pull request #3749: HBASE-26328 Clone snapshot doesn't load reference files into FILE SFT impl

2021-10-13 Thread GitBox


wchevreuil commented on a change in pull request #3749:
URL: https://github.com/apache/hbase/pull/3749#discussion_r728416493



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/storefiletracker/SnapshotStoreFileTracker.java
##
@@ -0,0 +1,77 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.regionserver.storefiletracker;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.regionserver.StoreContext;
+import org.apache.hadoop.hbase.regionserver.StoreFileInfo;
+import org.apache.hbase.thirdparty.com.google.common.base.Preconditions;
+import org.apache.yetus.audience.InterfaceAudience;
+
+import java.io.IOException;
+import java.util.List;
+
+/**
+ * Extends MigrationStoreFileTracker for Snapshot restore/clone specific case.
+ * When restoring/cloning snapshots, new regions are created with reference files to the
+ * original regions files. This work is done in snapshot specific classes. We need to somehow
+ * initialize these reference files in the configured StoreFileTracker. Once snapshot logic has
+ * cloned the store dir and created the references, it should set the list of reference files in
+ * SourceTracker.setReferenceFiles then invoke the load method.
+ * 
+ */
+@InterfaceAudience.Private
+public class SnapshotStoreFileTracker extends MigrationStoreFileTracker {
+
+  protected SnapshotStoreFileTracker(Configuration conf, boolean isPrimaryReplica,
+    StoreContext ctx) {
+    super(conf, isPrimaryReplica, ctx);
+    Preconditions.checkArgument(src instanceof SourceTracker,
+      "src for SnapshotStoreFileTracker should always be a SourceTracker!");
+  }
+
+  public SourceTracker getSourceTracker() {
+    return (SourceTracker) this.src;
+  }
+
+  /**
+   * The SFT impl to be set as source for SnapshotStoreFileTracker.
+   */
+  public static class SourceTracker extends DefaultStoreFileTracker {
+
+    private List<StoreFileInfo> files;
+
+    public SourceTracker(Configuration conf, boolean isPrimaryReplica, StoreContext ctx) {
+      super(conf, isPrimaryReplica, ctx);
+    }
+
+    public void setReferenceFiles(List<StoreFileInfo> files) {

Review comment:
   This is called by RestoreSnapshotHelper, once it has built a list of 
reference files on the snapshotted table store dir. 








[GitHub] [hbase] Apache-HBase commented on pull request #3751: HBASE-26271: Cleanup the broken store files under data directory

2021-10-13 Thread GitBox


Apache-HBase commented on pull request #3751:
URL: https://github.com/apache/hbase/pull/3751#issuecomment-942650220


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 21s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +0 :ok: |  prototool  |   0m  0s |  prototool was not available.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ HBASE-26067 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m 41s |  HBASE-26067 passed  |
   | +1 :green_heart: |  compile  |   7m 14s |  HBASE-26067 passed  |
   | +1 :green_heart: |  checkstyle  |   2m 21s |  HBASE-26067 passed  |
   | +1 :green_heart: |  spotbugs  |   9m 26s |  HBASE-26067 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 12s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m 26s |  the patch passed  |
   | +1 :green_heart: |  compile  |   7m 46s |  the patch passed  |
   | +1 :green_heart: |  cc  |   7m 46s |  the patch passed  |
   | -0 :warning: |  javac  |   0m 32s |  hbase-hadoop-compat generated 6 new + 
97 unchanged - 6 fixed = 103 total (was 103)  |
   | -0 :warning: |  javac  |   1m  4s |  hbase-client generated 1 new + 121 
unchanged - 1 fixed = 122 total (was 122)  |
   | -0 :warning: |  checkstyle  |   0m 16s |  hbase-hadoop-compat: The patch 
generated 14 new + 0 unchanged - 0 fixed = 14 total (was 0)  |
   | -0 :warning: |  checkstyle  |   0m 35s |  hbase-client: The patch 
generated 5 new + 181 unchanged - 0 fixed = 186 total (was 181)  |
   | -0 :warning: |  checkstyle  |   1m 19s |  hbase-server: The patch 
generated 27 new + 162 unchanged - 0 fixed = 189 total (was 162)  |
   | -0 :warning: |  checkstyle  |   0m 17s |  hbase-rest: The patch generated 
3 new + 3 unchanged - 0 fixed = 6 total (was 3)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  21m 11s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1 3.3.0.  |
   | +1 :green_heart: |  hbaseprotoc  |   2m 48s |  the patch passed  |
   | -1 :x: |  spotbugs  |   2m 28s |  hbase-server generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0)  |
   | -1 :x: |  spotbugs  |   0m 54s |  hbase-rest generated 1 new + 0 unchanged 
- 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | -1 :x: |  asflicense  |   0m 53s |  The patch generated 1 ASF License 
warnings.  |
   |  |   |  84m 52s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hbase-server |
   |  |  Dead store to fsStoreFiles in 
org.apache.hadoop.hbase.regionserver.FileBasedStoreFileCleaner.chore()  At 
FileBasedStoreFileCleaner.java:org.apache.hadoop.hbase.regionserver.FileBasedStoreFileCleaner.chore()
  At FileBasedStoreFileCleaner.java:[line 94] |
   |  |  Random object created and used only once in 
org.apache.hadoop.hbase.regionserver.HRegionServer.initializeThreads()  At 
HRegionServer.java:only once in 
org.apache.hadoop.hbase.regionserver.HRegionServer.initializeThreads()  At 
HRegionServer.java:[line 1990] |
   | FindBugs | module:hbase-rest |
   |  |  Class 
org.apache.hadoop.hbase.rest.model.FileBasedStoreFileCleanerStatusModel defines 
non-transient non-serializable instance field fileBasedFileStoreCleanerStatus  
In FileBasedStoreFileCleanerStatusModel.java:instance field 
fileBasedFileStoreCleanerStatus  In FileBasedStoreFileCleanerStatusModel.java |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3751/1/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3751 |
   | JIRA Issue | HBASE-26271 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
checkstyle compile cc hbaseprotoc prototool |
   | uname | Linux 31ea4f640230 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 
19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-26067 / 9c9789bbd2 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | javac | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3751/1/artifact/yetus-general-check/output/diff-compile-javac-hbase-hadoop-compat.txt
 |
   | javac | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3751/1/artifact/yetus-general-check/output/diff-compile-javac-hbase-client.txt
 |
   | checkstyle |

[jira] [Resolved] (HBASE-26359) Loosen Dockerfile pinned package versions for `create-release/mac-sshd-gpg-agent/Dockerfile`

2021-10-13 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk resolved HBASE-26359.
--
Resolution: Fixed

> Loosen Dockerfile pinned package versions for 
> `create-release/mac-sshd-gpg-agent/Dockerfile`
> 
>
> Key: HBASE-26359
> URL: https://issues.apache.org/jira/browse/HBASE-26359
> Project: HBase
>  Issue Type: Task
>  Components: build, community
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Major
> Fix For: 3.0.0-alpha-2
>
>
> We need to apply a similar fix as was done for other dockerfiles in 
> HBASE-24631.





[GitHub] [hbase] Apache-HBase commented on pull request #3748: HBASE-26353 Support loadable dictionaries in hbase-compression-zstd

2021-10-13 Thread GitBox


Apache-HBase commented on pull request #3748:
URL: https://github.com/apache/hbase/pull/3748#issuecomment-942639331


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 29s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 19s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m  2s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 17s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   0m 38s |  master passed  |
   | +1 :green_heart: |  spotbugs  |   1m 15s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 14s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 50s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 16s |  the patch passed  |
   | -0 :warning: |  javac  |   0m 27s |  
hbase-compression_hbase-compression-zstd generated 1 new + 0 unchanged - 0 
fixed = 1 total (was 0)  |
   | -0 :warning: |  checkstyle  |   0m 25s |  hbase-common: The patch 
generated 1 new + 5 unchanged - 0 fixed = 6 total (was 5)  |
   | -0 :warning: |  checkstyle  |   0m 12s |  
hbase-compression/hbase-compression-zstd: The patch generated 1 new + 2 
unchanged - 0 fixed = 3 total (was 2)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  19m 22s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1 3.3.0.  |
   | +1 :green_heart: |  spotbugs  |   1m 33s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 27s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  44m  0s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3748/3/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3748 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
checkstyle compile |
   | uname | Linux 6422a3dc0aa9 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / ede4d2715d |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | javac | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3748/3/artifact/yetus-general-check/output/diff-compile-javac-hbase-compression_hbase-compression-zstd.txt
 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3748/3/artifact/yetus-general-check/output/diff-checkstyle-hbase-common.txt
 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3748/3/artifact/yetus-general-check/output/diff-checkstyle-hbase-compression_hbase-compression-zstd.txt
 |
   | Max. process+thread count | 96 (vs. ulimit of 3) |
   | modules | C: hbase-common hbase-compression/hbase-compression-zstd U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3748/3/console
 |
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[jira] [Updated] (HBASE-26359) Loosen Dockerfile pinned package versions for `create-release/mac-sshd-gpg-agent/Dockerfile`

2021-10-13 Thread Nick Dimiduk (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-26359:
-
Fix Version/s: 3.0.0-alpha-2

> Loosen Dockerfile pinned package versions for 
> `create-release/mac-sshd-gpg-agent/Dockerfile`
> 
>
> Key: HBASE-26359
> URL: https://issues.apache.org/jira/browse/HBASE-26359
> Project: HBase
>  Issue Type: Task
>  Components: build, community
>Reporter: Nick Dimiduk
>Assignee: Nick Dimiduk
>Priority: Major
> Fix For: 3.0.0-alpha-2
>
>
> We need to apply a similar fix as was done for other dockerfiles in 
> HBASE-24631.





[GitHub] [hbase] ndimiduk commented on pull request #3750: HBASE-26359 Loosen Dockerfile pinned package versions for `create-release/mac-sshd-gpg-agent/Dockerfile`

2021-10-13 Thread GitBox


ndimiduk commented on pull request #3750:
URL: https://github.com/apache/hbase/pull/3750#issuecomment-942637948


   Thanks @busbey !






[GitHub] [hbase] ndimiduk merged pull request #3750: HBASE-26359 Loosen Dockerfile pinned package versions for `create-release/mac-sshd-gpg-agent/Dockerfile`

2021-10-13 Thread GitBox


ndimiduk merged pull request #3750:
URL: https://github.com/apache/hbase/pull/3750


   






[GitHub] [hbase] Apache-HBase commented on pull request #3750: HBASE-26359 Loosen Dockerfile pinned package versions for `create-release/mac-sshd-gpg-agent/Dockerfile`

2021-10-13 Thread GitBox


Apache-HBase commented on pull request #3750:
URL: https://github.com/apache/hbase/pull/3750#issuecomment-942628837


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 27s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m  7s |  Maven dependency ordering for branch  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m  7s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  hadolint  |   0m  2s |  There were no new hadolint 
issues.  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  There were no new shellcheck 
issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   ||| _ Other Tests _ |
   | +0 :ok: |  asflicense  |   0m  0s |  ASF License check generated no 
output?  |
   |  |   |   1m 54s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3750/2/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3750 |
   | Optional Tests | dupname asflicense hadolint shellcheck shelldocs |
   | uname | Linux bac700614cf4 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / ede4d2715d |
   | Max. process+thread count | 46 (vs. ulimit of 3) |
   | modules | C:  U:  |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3750/2/console
 |
   | versions | git=2.17.1 maven=3.6.3 shellcheck=0.4.6 
hadolint=1.17.5-0-g443423c |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hbase] Apache-HBase commented on pull request #3748: HBASE-26353 Support loadable dictionaries in hbase-compression-zstd

2021-10-13 Thread GitBox


Apache-HBase commented on pull request #3748:
URL: https://github.com/apache/hbase/pull/3748#issuecomment-942629889


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 25s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 48s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   8m 14s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 18s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 55s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 46s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 46s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   8m 16s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 52s |  hbase-common in the patch passed.  
|
   | +1 :green_heart: |  unit  |   0m 48s |  hbase-compression-zstd in the 
patch passed.  |
   |  |   |  32m 12s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3748/3/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3748 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 3c0e0fe83539 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / ede4d2715d |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3748/3/testReport/
 |
   | Max. process+thread count | 341 (vs. ulimit of 3) |
   | modules | C: hbase-common hbase-compression/hbase-compression-zstd U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3748/3/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hbase] Apache-HBase commented on pull request #3750: HBASE-26359 Loosen Dockerfile pinned package versions for `create-release/mac-sshd-gpg-agent/Dockerfile`

2021-10-13 Thread GitBox


Apache-HBase commented on pull request #3750:
URL: https://github.com/apache/hbase/pull/3750#issuecomment-942630491


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  9s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 20s |  Maven dependency ordering for branch  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m  7s |  Maven dependency ordering for patch  |
   ||| _ Other Tests _ |
   |  |   |   2m 39s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3750/2/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3750 |
   | Optional Tests |  |
   | uname | Linux 16c8e8eaaf3b 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 
19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / ede4d2715d |
   | Max. process+thread count | 41 (vs. ulimit of 3) |
   | modules | C:  U:  |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3750/2/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hbase] Apache-HBase commented on pull request #3748: HBASE-26353 Support loadable dictionaries in hbase-compression-zstd

2021-10-13 Thread GitBox


Apache-HBase commented on pull request #3748:
URL: https://github.com/apache/hbase/pull/3748#issuecomment-942634350


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  3s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  2s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 13s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   5m  4s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 47s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   9m  7s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   5m  5s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 47s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 47s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   9m  9s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 41s |  hbase-common in the patch passed.  
|
   | +1 :green_heart: |  unit  |   0m 46s |  hbase-compression-zstd in the 
patch passed.  |
   |  |   |  37m 44s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3748/3/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3748 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 1fc6430da8a9 4.15.0-143-generic #147-Ubuntu SMP Wed Apr 14 
16:10:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / ede4d2715d |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3748/3/testReport/
 |
   | Max. process+thread count | 257 (vs. ulimit of 3) |
   | modules | C: hbase-common hbase-compression/hbase-compression-zstd U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3748/3/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hbase] Apache-HBase commented on pull request #3750: HBASE-26359 Loosen Dockerfile pinned package versions for `create-release/mac-sshd-gpg-agent/Dockerfile`

2021-10-13 Thread GitBox


Apache-HBase commented on pull request #3750:
URL: https://github.com/apache/hbase/pull/3750#issuecomment-942629701


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  6s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for branch  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m  6s |  Maven dependency ordering for patch  |
   ||| _ Other Tests _ |
   |  |   |   2m 31s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3750/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3750 |
   | Optional Tests |  |
   | uname | Linux 0fe5fbd80216 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 
19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / ede4d2715d |
   | Max. process+thread count | 49 (vs. ulimit of 3) |
   | modules | C:  U:  |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3750/2/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hbase] Apache-HBase commented on pull request #3751: HBASE-26271: Cleanup the broken store files under data directory

2021-10-13 Thread GitBox


Apache-HBase commented on pull request #3751:
URL: https://github.com/apache/hbase/pull/3751#issuecomment-942608002


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 27s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  4s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ HBASE-26067 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 16s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m 34s |  HBASE-26067 passed  |
   | +1 :green_heart: |  compile  |   3m 29s |  HBASE-26067 passed  |
   | +1 :green_heart: |  shadedjars  |   8m 15s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  5s |  HBASE-26067 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 16s |  Maven dependency ordering for patch  |
   | -1 :x: |  mvninstall  |   1m 18s |  root in the patch failed.  |
   | -1 :x: |  compile  |   0m 25s |  hbase-client in the patch failed.  |
   | -1 :x: |  compile  |   0m 41s |  hbase-server in the patch failed.  |
   | -1 :x: |  compile  |   0m 25s |  hbase-rest in the patch failed.  |
   | -0 :warning: |  javac  |   0m 25s |  hbase-client in the patch failed.  |
   | -0 :warning: |  javac  |   0m 41s |  hbase-server in the patch failed.  |
   | -0 :warning: |  javac  |   0m 25s |  hbase-rest in the patch failed.  |
   | -1 :x: |  shadedjars  |   3m 45s |  patch has 22 errors when building our 
shaded downstream artifacts.  |
   | -0 :warning: |  javadoc  |   0m 18s |  hbase-client in the patch failed.  |
   | -0 :warning: |  javadoc  |   0m 23s |  hbase-server in the patch failed.  |
   | -0 :warning: |  javadoc  |   0m 18s |  hbase-rest in the patch failed.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 59s |  hbase-protocol-shaded in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   0m 39s |  hbase-hadoop-compat in the patch 
passed.  |
   | -1 :x: |  unit  |   0m 25s |  hbase-client in the patch failed.  |
   | -1 :x: |  unit  |   0m 41s |  hbase-server in the patch failed.  |
   | -1 :x: |  unit  |   0m 24s |  hbase-rest in the patch failed.  |
   |  |   |  34m 11s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3751/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3751 |
   | JIRA Issue | HBASE-26271 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux bd42b88d2855 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 
23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-26067 / 9c9789bbd2 |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   | mvninstall | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3751/1/artifact/yetus-jdk11-hadoop3-check/output/patch-mvninstall-root.txt
 |
   | compile | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3751/1/artifact/yetus-jdk11-hadoop3-check/output/patch-compile-hbase-client.txt
 |
   | compile | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3751/1/artifact/yetus-jdk11-hadoop3-check/output/patch-compile-hbase-server.txt
 |
   | compile | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3751/1/artifact/yetus-jdk11-hadoop3-check/output/patch-compile-hbase-rest.txt
 |
   | javac | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3751/1/artifact/yetus-jdk11-hadoop3-check/output/patch-compile-hbase-client.txt
 |
   | javac | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3751/1/artifact/yetus-jdk11-hadoop3-check/output/patch-compile-hbase-server.txt
 |
   | javac | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3751/1/artifact/yetus-jdk11-hadoop3-check/output/patch-compile-hbase-rest.txt
 |
   | shadedjars | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3751/1/artifact/yetus-jdk11-hadoop3-check/output/patch-shadedjars.txt
 |
   | javadoc | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3751/1/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-client.txt
 |
   | javadoc | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3751/1/artifact/yetus-jdk11-hadoop3-check/output/patch-javadoc-hbase-server.txt
 |
   | javadoc | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3751/1/artifact/yet

[GitHub] [hbase] Apache-HBase commented on pull request #3750: HBASE-26359 Loosen Dockerfile pinned package versions for `create-release/mac-sshd-gpg-agent/Dockerfile`

2021-10-13 Thread GitBox


Apache-HBase commented on pull request #3750:
URL: https://github.com/apache/hbase/pull/3750#issuecomment-942567561










[GitHub] [hbase] Apache-HBase commented on pull request #3690: HBASE-26045 Master control the global throughput of all compaction servers

2021-10-13 Thread GitBox


Apache-HBase commented on pull request #3690:
URL: https://github.com/apache/hbase/pull/3690#issuecomment-942370039










[GitHub] [hbase] Apache9 commented on pull request #3746: HBASE-26348 Implement a special procedure to migrate rs group informa…

2021-10-13 Thread GitBox


Apache9 commented on pull request #3746:
URL: https://github.com/apache/hbase/pull/3746#issuecomment-941150086


   @GeorryHuang PTAL.
   
   Thanks.






[GitHub] [hbase] joshelser commented on pull request #3679: HBASE-26267 Don't try to recover WALs from a WAL dir which doesn't exist

2021-10-13 Thread GitBox


joshelser commented on pull request #3679:
URL: https://github.com/apache/hbase/pull/3679#issuecomment-941524435


   Let me address Duo's suggestion and then merge.






[GitHub] [hbase] z-york commented on pull request #3679: HBASE-26267 Don't try to recover WALs from a WAL dir which doesn't exist

2021-10-13 Thread GitBox


z-york commented on pull request #3679:
URL: https://github.com/apache/hbase/pull/3679#issuecomment-941670805


   > > I had solved this by just creating the walsDir here if it didn't exist. 
Is there a reason you decided to skip it instead of creating the dir? If we just 
create the directory, it will naturally be empty and skip the rest of the 
"replayWals" function that you added.
   > 
   > No intentional reason :). It looks like `MasterRegion#createWAL()` is 
already doing the `mkdirs()` on the WAL dir. Happy to change if you feel 
strongly.
   
   Nope, I don't feel strongly. Feel free to merge once Duo signs off on the 
method naming.






[GitHub] [hbase] leyangyueshan commented on pull request #3524: HBASE-26117 RIT counts should be configured when cluster is balancing

2021-10-13 Thread GitBox


leyangyueshan commented on pull request #3524:
URL: https://github.com/apache/hbase/pull/3524#issuecomment-941875427










[GitHub] [hbase] Apache-HBase commented on pull request #3679: HBASE-26267 Don't try to recover WALs from a WAL dir which doesn't exist

2021-10-13 Thread GitBox


Apache-HBase commented on pull request #3679:
URL: https://github.com/apache/hbase/pull/3679#issuecomment-941622587










[GitHub] [hbase] ndimiduk commented on pull request #3577: HBASE-26185 Return mutable list in AssignmentManager#getExcludedServersForSystemTable

2021-10-13 Thread GitBox


ndimiduk commented on pull request #3577:
URL: https://github.com/apache/hbase/pull/3577#issuecomment-941140844










[GitHub] [hbase] Apache-HBase commented on pull request #3748: HBASE-26353 Support loadable dictionaries in hbase-compression-zstd

2021-10-13 Thread GitBox


Apache-HBase commented on pull request #3748:
URL: https://github.com/apache/hbase/pull/3748#issuecomment-941854864










[GitHub] [hbase] Apache-HBase commented on pull request #3746: HBASE-26348 Implement a special procedure to migrate rs group informa…

2021-10-13 Thread GitBox


Apache-HBase commented on pull request #3746:
URL: https://github.com/apache/hbase/pull/3746#issuecomment-941201914










[GitHub] [hbase] taklwu commented on pull request #2237: HBASE-24833: Bootstrap should not delete the META table directory if …

2021-10-13 Thread GitBox


taklwu commented on pull request #2237:
URL: https://github.com/apache/hbase/pull/2237#issuecomment-941241199


   Merged. If we need new documentation, we can file a follow-up JIRA.






[GitHub] [hbase] GeorryHuang commented on pull request #3746: HBASE-26348 Implement a special procedure to migrate rs group informa…

2021-10-13 Thread GitBox


GeorryHuang commented on pull request #3746:
URL: https://github.com/apache/hbase/pull/3746#issuecomment-941862472


   LGTM






[GitHub] [hbase] taklwu merged pull request #2237: HBASE-24833: Bootstrap should not delete the META table directory if …

2021-10-13 Thread GitBox


taklwu merged pull request #2237:
URL: https://github.com/apache/hbase/pull/2237


   






[GitHub] [hbase] ndimiduk commented on a change in pull request #3743: HBASE-26350 Add a DEBUG when we fail the SASL handshake

2021-10-13 Thread GitBox


ndimiduk commented on a change in pull request #3743:
URL: https://github.com/apache/hbase/pull/3743#discussion_r727266507



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/ServerRpcConnection.java
##
@@ -372,6 +372,7 @@ public void saslReadAndProcess(ByteBuff saslToken) throws 
IOException,
 replyToken = saslServer.evaluateResponse(saslToken.hasArray()?
 saslToken.array() : saslToken.toBytes());
   } catch (IOException e) {
+RpcServer.LOG.debug("Failed to execute SASL handshake", e);

Review comment:
   It's not constructing a log message string from many constituent parts, 
so I think it's fine without the guard.

##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/ServerRpcConnection.java
##
@@ -372,6 +372,7 @@ public void saslReadAndProcess(ByteBuff saslToken) throws 
IOException,
 replyToken = saslServer.evaluateResponse(saslToken.hasArray()?
 saslToken.array() : saslToken.toBytes());
   } catch (IOException e) {
+RpcServer.LOG.debug("Failed to execute SASL handshake", e);

Review comment:
   Why do you log on the other class's logger?
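
   The convention behind this question is that each class declares its own 
logger, so emitted messages are attributed to the class that produced them. A 
minimal sketch of that convention — using `java.util.logging` as a stand-in for 
the slf4j logger in the patch; the class and method names here are illustrative, 
not from the PR:

```java
import java.util.logging.Logger;

public class LoggerOwnershipSketch {
    // Class-local logger, rather than borrowing another class's (e.g. RpcServer.LOG):
    private static final Logger LOG = Logger.getLogger(LoggerOwnershipSketch.class.getName());

    // Exposed for illustration: the logger's name identifies this class as the source.
    static String loggerName() {
        return LOG.getName();
    }

    public static void main(String[] args) {
        System.out.println(loggerName()); // LoggerOwnershipSketch
    }
}
```

   With a borrowed logger, a reader grepping the logs would attribute the SASL 
failure to the wrong class.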








[GitHub] [hbase] nkalmar commented on pull request #3737: HBASE-26340 - fix RegionSizeCalculator getLEngth to bytes instead of …

2021-10-13 Thread GitBox


nkalmar commented on pull request #3737:
URL: https://github.com/apache/hbase/pull/3737#issuecomment-941196132






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] joshelser commented on a change in pull request #3721: HBASE-26326 CreateTableProcedure fails when FileBasedStoreFileTracker…

2021-10-13 Thread GitBox


joshelser commented on a change in pull request #3721:
URL: https://github.com/apache/hbase/pull/3721#discussion_r727605083



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/storefiletracker/StoreFileTracker.java
##
@@ -75,13 +76,13 @@ void replace(Collection compactedFiles, 
Collection
   StoreFileWriter createWriter(CreateStoreFileWriterParams params) throws 
IOException;
 
   /**
-   * Saves StoreFileTracker implementations specific configurations into the 
table descriptors.
+   * Adds StoreFileTracker implementations specific configurations into the 
table descriptor.
* 
* This is used to avoid accidental data loss when changing the cluster 
level store file tracker
* implementation, and also possible misconfiguration between master and 
region servers.
* 
* See HBASE-26246 for more details.
* @param builder The table descriptor builder for the given table.
*/
-  void persistConfiguration(TableDescriptorBuilder builder);
+  TableDescriptor updateWithTrackerConfigs(TableDescriptorBuilder builder);

Review comment:
   Maybe `buildWithTrackerConfigs` if you're returning the 
`TableDescriptor` instead of the `TableDescriptorBuilder`. IMO, I'd just return 
the Builder and let the caller `build()` when they're ready.
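
   The suggestion — return the builder and let the caller decide when to 
`build()` — can be sketched with a toy builder. This is not HBase's actual 
`TableDescriptorBuilder` API; the class names, config key, and value below are 
illustrative only:

```java
import java.util.HashMap;
import java.util.Map;

class ToyDescriptorBuilder {
    private final Map<String, String> values = new HashMap<>();

    ToyDescriptorBuilder setValue(String key, String value) {
        values.put(key, value);
        return this; // returning the builder keeps the call chain open
    }

    Map<String, String> build() {
        return new HashMap<>(values);
    }
}

public class TrackerConfigSketch {
    // Accepts and returns the builder, so callers keep chaining and build() when ready.
    static ToyDescriptorBuilder updateWithTrackerConfigs(ToyDescriptorBuilder builder) {
        return builder.setValue("store.file-tracker.impl", "FILE"); // illustrative key/value
    }

    public static void main(String[] args) {
        Map<String, String> descriptor = updateWithTrackerConfigs(new ToyDescriptorBuilder())
            .setValue("other.setting", "x") // caller continues chaining before build()
            .build();
        System.out.println(descriptor.get("store.file-tracker.impl")); // FILE
    }
}
```

   Returning the built `TableDescriptor` instead would force any caller who 
still wants to add values to start a new builder from the built descriptor.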

##
File path: 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestCreateTableProcedure.java
##
@@ -105,6 +106,21 @@ public void testCreateWithTrackImpl() throws Exception {
 assertEquals(trackerName, htd.getValue(TRACKER_IMPL));
   }
 
+  @Test
+  public void testCreateWithFileBasedStoreTrackerImpl() throws Exception {
+ProcedureExecutor procExec = 
getMasterProcedureExecutor();
+
procExec.getEnvironment().getMasterConfiguration().set(StoreFileTrackerFactory.TRACKER_IMPL,
+  StoreFileTrackerFactory.Trackers.FILE.name());

Review comment:
   Does this need to be unset for other test methods in this class? Or, if 
tests are executed in parallel, maybe it would affect the other methods in 
this class? We set `forkCount` in surefire-plugin, but we don't set `parallel` 
(which I guess means we don't do parallel tests?). Genuinely not sure :)

##
File path: 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestCreateTableProcedure.java
##
@@ -105,6 +106,21 @@ public void testCreateWithTrackImpl() throws Exception {
 assertEquals(trackerName, htd.getValue(TRACKER_IMPL));
   }
 
+  @Test
+  public void testCreateWithFileBasedStoreTrackerImpl() throws Exception {
+ProcedureExecutor procExec = 
getMasterProcedureExecutor();
+
procExec.getEnvironment().getMasterConfiguration().set(StoreFileTrackerFactory.TRACKER_IMPL,
+  StoreFileTrackerFactory.Trackers.FILE.name());

Review comment:
   ok, cool. I didn't _think_ we ran test methods concurrently, but was 
stymied when I tried to quickly figure that out from the pom.xml :P 








[GitHub] [hbase] leyangyueshan removed a comment on pull request #3524: HBASE-26117 RIT counts should be configured when cluster is balancing

2021-10-13 Thread GitBox


leyangyueshan removed a comment on pull request #3524:
URL: https://github.com/apache/hbase/pull/3524#issuecomment-941875427


   > This is hard coded in the past? Oh no.
   > 
   > And maybe it should be a percentage instead of a fixed value? Consider you 
have 10 regions, maybe 5 is too large but if you have 100k regions, 5 is too 
small?
   > 
   > WDYT?
   Thanks. It should be configurable! I think a percentage would be too 
complicated for the business side.
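The fixed-value-vs-percentage trade-off raised in the quoted reply can also be combined: use the percentage for scale and keep a fixed floor for small clusters. A minimal illustrative sketch; the class, method, and parameter names are hypothetical, not HBase APIs:

```java
// Illustrative only: combine a fixed floor with a percentage of the region count,
// so small clusters keep a sane minimum while large clusters scale the limit.
public class RitLimit {
  /**
   * @param regionCount total regions in the cluster
   * @param fixedFloor  minimum regions-in-transition allowed (e.g. the old hard-coded 5)
   * @param pct         fraction of regionCount allowed in transition
   */
  public static long effectiveLimit(long regionCount, long fixedFloor, double pct) {
    return Math.max(fixedFloor, (long) (regionCount * pct));
  }

  public static void main(String[] args) {
    System.out.println(effectiveLimit(10, 5, 0.01));      // floor wins: 5
    System.out.println(effectiveLimit(100_000, 5, 0.01)); // percentage wins: 1000
  }
}
```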






[GitHub] [hbase] Apache9 commented on a change in pull request #3748: HBASE-26353 Support loadable dictionaries in hbase-compression-zstd

2021-10-13 Thread GitBox


Apache9 commented on a change in pull request #3748:
URL: https://github.com/apache/hbase/pull/3748#discussion_r727647509



##
File path: 
hbase-compression/hbase-compression-zstd/src/main/java/org/apache/hadoop/hbase/io/compress/zstd/ZstdCodec.java
##
@@ -123,4 +137,42 @@ static int getBufferSize(Configuration conf) {
 return size > 0 ? size : 256 * 1024; // Don't change this default
   }
 
+  static LoadingCache<Configuration, byte[]> CACHE = CacheBuilder.newBuilder()

Review comment:
   Using Configuration as the key makes me a bit nervous; after checking the 
code, there are no hashCode and equals methods in Configuration, so it will 
behave like an IdentityHashMap...
   
   So is it possible to use the file name as the map key here? I suppose 
different tables could use the same dict.
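A sketch of the path-keyed alternative, using only the JDK (`ConcurrentHashMap` standing in for the Guava cache, and the loader function standing in for reading the dictionary from the FileSystem). Two Configurations pointing at the same dict file would then share one cached copy:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Illustrative sketch: keying the dictionary cache by file path rather than by
// Configuration. Since Configuration has no equals/hashCode, a Configuration key
// degenerates to identity semantics, so two tables sharing one dict file would
// load it twice; a String path key dedupes naturally.
public class DictCache {
  private static final Map<String, byte[]> CACHE = new ConcurrentHashMap<>();

  // loader stands in for the real "read the dict file from the FileSystem" call
  public static byte[] get(String dictPath, Function<String, byte[]> loader) {
    return CACHE.computeIfAbsent(dictPath, loader);
  }
}
```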

##
File path: 
hbase-compression/hbase-compression-zstd/src/main/java/org/apache/hadoop/hbase/io/compress/zstd/ZstdCodec.java
##
@@ -123,4 +137,42 @@ static int getBufferSize(Configuration conf) {
 return size > 0 ? size : 256 * 1024; // Don't change this default
   }
 
+  static LoadingCache<Configuration, byte[]> CACHE = CacheBuilder.newBuilder()
+    .maximumSize(100)
+    .expireAfterAccess(1, TimeUnit.HOURS)
+    .build(
+      new CacheLoader<Configuration, byte[]>() {
+        public byte[] load(Configuration conf) throws Exception {
+          final String s = conf.get(ZSTD_DICTIONARY_FILE_KEY);
+          if (s == null) {
+            throw new IllegalArgumentException(ZSTD_DICTIONARY_FILE_KEY + " is not set");
+          }
+          final Path p = new Path(s);
+          final ByteArrayOutputStream baos = new ByteArrayOutputStream();
+          final byte[] buffer = new byte[8192];
+          try (final FSDataInputStream in = FileSystem.get(p.toUri(), conf).open(p)) {

Review comment:
   Do we need to limit the max dict size here? If a user creates a table 
with a very large dict file, it could bring down the whole cluster if we do not 
truncate here?
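One way to enforce such a limit is to fail fast while reading rather than truncate, since a truncated dictionary would be unusable anyway. A stdlib-only sketch under that assumption; the `maxBytes` knob is a hypothetical new config, not an existing HBase setting:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

// Illustrative sketch: cap the dictionary size while reading and fail fast once
// the configured limit is exceeded. The 8 KiB buffer mirrors the snippet above.
public class BoundedRead {
  public static byte[] readFully(InputStream in, int maxBytes) throws IOException {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    byte[] buffer = new byte[8192];
    int n;
    while ((n = in.read(buffer)) > 0) {
      if (baos.size() + n > maxBytes) {
        throw new IOException("Dictionary exceeds configured limit of " + maxBytes + " bytes");
      }
      baos.write(buffer, 0, n);
    }
    return baos.toByteArray();
  }
}
```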








[GitHub] [hbase] z-york commented on pull request #3712: HBASE-26320 Implement a separate thread pool for the LogCleaner

2021-10-13 Thread GitBox


z-york commented on pull request #3712:
URL: https://github.com/apache/hbase/pull/3712#issuecomment-941634995


   The unit test failures don't seem related and only fail on Java 11 it seems. 
@Apache9 Do you have any further comments? Thanks






[GitHub] [hbase] nyl3532016 commented on a change in pull request #3690: HBASE-26045 Master control the global throughput of all compaction servers

2021-10-13 Thread GitBox


nyl3532016 commented on a change in pull request #3690:
URL: https://github.com/apache/hbase/pull/3690#discussion_r727759407



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/throttle/PressureAwareThroughputController.java
##
@@ -64,9 +64,9 @@
 }
   }
 
-  protected long maxThroughputUpperBound;
+  protected volatile long maxThroughputUpperBound;

Review comment:
   The old design recreated the controller when the configuration changed, which 
worked because such changes rarely happen.
   Once a new controller is created, the previous compaction tasks are still 
controlled by the old one, so the actual throughput becomes (number of 
controllers) * (throughput per controller).
   Here, the throughput of the compaction server changes frequently (driven by 
the status report response from the master), so we cannot recreate the 
controller each time.
   
https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/compactions/Compactor.java#L414
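The reasoning above reduces to a toy model: as long as all compaction tasks share one controller instance, an in-place update of a volatile field is immediately visible to every task, whereas swapping in a new controller object would leave in-flight tasks throttled by the old object's stale bound. An illustrative sketch, not the real PressureAwareThroughputController:

```java
// Illustrative sketch: update the shared controller's bound in place via a
// volatile field instead of replacing the controller object, so running
// compaction tasks that already hold a reference observe the new limit.
public class ThroughputController {
  private volatile long maxThroughputUpperBound;

  public ThroughputController(long initial) {
    this.maxThroughputUpperBound = initial;
  }

  // called from the status-report handler when the master pushes a new bound
  public void updateUpperBound(long newBound) {
    this.maxThroughputUpperBound = newBound;
  }

  // called from every running compaction task's throttling loop
  public long currentUpperBound() {
    return maxThroughputUpperBound;
  }
}
```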

##
File path: 
hbase-server/src/test/java/org/apache/hadoop/hbase/MiniHBaseCluster.java
##
@@ -107,6 +107,15 @@ public MiniHBaseCluster(Configuration conf, int 
numMasters, int numAlwaysStandBy
 regionserverClass);
   }
 
+  public MiniHBaseCluster(Configuration conf, int numMasters, int 
numAlwaysStandByMasters,

Review comment:
   OK, I will separate this issue

##
File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java
##
@@ -1459,6 +1459,16 @@
*/
   CompletableFuture switchCompactionOffload(boolean enable);
 
+  /**
+   * update compaction server total throughput bound
+   * @param upperBound the total throughput upper bound of all compaction 
servers
+   * @param lowerBound the total throughput lower bound of all compaction 
servers
+   * @param offPeak the total throughput offPeak bound of all compaction 
servers

Review comment:
   We have the config "hbase.hstore.compaction.throughput.offpeak"; the off-peak 
throughput is a single definite value, not a lower or upper bound.

##
File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java
##
@@ -1459,6 +1459,16 @@
*/
   CompletableFuture switchCompactionOffload(boolean enable);
 
+  /**
+   * update compaction server total throughput bound
+   * @param upperBound the total throughput upper bound of all compaction 
servers
+   * @param lowerBound the total throughput lower bound of all compaction 
servers
+   * @param offPeak the total throughput offPeak bound of all compaction 
servers

Review comment:
   I will modify this function's parameters, but some variables, like 
`maxThroughputOffPeak`, do not need to change.

##
File path: 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/RawAsyncHBaseAdmin.java
##
@@ -3822,6 +3824,26 @@ private void getProcedureResult(long procId, 
CompletableFuture future, int
 return future;
   }
 
+  @Override
+  public CompletableFuture<Map<String, Long>> updateCompactionServerTotalThroughput(long upperBound,
+      long lowerBound, long offPeak) {
+    CompletableFuture<Map<String, Long>> future = this.<Map<String, Long>> newMasterCaller().action(
+      (controller, stub) -> this.<UpdateCompactionServerTotalThroughputRequest,
+          UpdateCompactionServerTotalThroughputResponse, Map<String, Long>> call(
+        controller, stub, UpdateCompactionServerTotalThroughputRequest.newBuilder()
+          .setMaxThroughputUpperBound(upperBound).setMaxThroughputLowerBound(lowerBound)
+          .setMaxThroughputOffPeak(offPeak).build(),
+        (s, c, req, done) -> s.updateCompactionServerTotalThroughput(c, req, done), resp -> {
+          Map<String, Long> result = new HashMap<>();
+          result.put("UpperBound", resp.getMaxThroughputUpperBound());

Review comment:
   The return value of `updateCompactionServerTotalThroughput` is only used by 
the hbase shell, and returning this map makes it convenient to print the result 
on the console.
   
https://github.com/apache/hbase/blob/29c816be6c0c4a48934c57e4a38732b4f680120f/hbase-shell/src/main/ruby/shell/commands/set_compaction_server_throughput_upper_bound.rb#L39








[GitHub] [hbase] wchevreuil commented on a change in pull request #3721: HBASE-26326 CreateTableProcedure fails when FileBasedStoreFileTracker…

2021-10-13 Thread GitBox


wchevreuil commented on a change in pull request #3721:
URL: https://github.com/apache/hbase/pull/3721#discussion_r727458552



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/storefiletracker/MigrationStoreFileTracker.java
##
@@ -89,14 +90,16 @@ void set(List files) {
   }
 
   @Override
-  public void persistConfiguration(TableDescriptorBuilder builder) {
-    super.persistConfiguration(builder);
-    if (StringUtils.isEmpty(builder.getValue(SRC_IMPL))) {
+  public TableDescriptor updateWithTrackerConfigs(TableDescriptor descriptor) {

Review comment:
   Done.

##
File path: 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/procedure/TestCreateTableProcedure.java
##
@@ -105,6 +106,21 @@ public void testCreateWithTrackImpl() throws Exception {
 assertEquals(trackerName, htd.getValue(TRACKER_IMPL));
   }
 
+  @Test
+  public void testCreateWithFileBasedStoreTrackerImpl() throws Exception {
+    ProcedureExecutor<MasterProcedureEnv> procExec = getMasterProcedureExecutor();
+    procExec.getEnvironment().getMasterConfiguration().set(StoreFileTrackerFactory.TRACKER_IMPL,
+      StoreFileTrackerFactory.Trackers.FILE.name());

Review comment:
   Yes, just checked now and found out that this relies on a static 
HBaseTestingUtil variable declared in the parent TestTableDDLProcedureBase 
class, so any concurrently running subclass instances could end up picking up 
that config. 
   
   I don't think this would be a problem for other tests, though, since none of 
them are validating the store file tracking work itself, and whatever is being 
tested elsewhere shouldn't break because of this setting (if it does break some 
other test, it's probably a regression we are introducing here).
   
   The only potential problem is if both this 
testCreateWithFileBasedStoreTrackerImpl and testCreateWithTrackImpl test 
methods run concurrently, but I don't think that happens for methods within the 
same test class.
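The cleanup concern above — a key set on a static, shared configuration leaking into later test methods — is commonly handled with a save-and-restore pattern. A stdlib-only sketch, using `Properties` as a stand-in for HBase's `Configuration` (the helper name is illustrative):

```java
import java.util.Properties;

// Illustrative sketch: set a key on a shared config for the duration of one test
// and restore the previous value afterwards, so other test methods in the same
// class see the original state even if the test throws.
public class ConfGuard {
  public static void withSetting(Properties conf, String key, String value, Runnable test) {
    String old = conf.getProperty(key); // remember the prior value (may be null)
    conf.setProperty(key, value);
    try {
      test.run();
    } finally {
      if (old == null) {
        conf.remove(key);       // the key was originally unset
      } else {
        conf.setProperty(key, old);
      }
    }
  }
}
```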


##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/storefiletracker/StoreFileTracker.java
##
@@ -75,13 +76,13 @@ void replace(Collection compactedFiles, 
Collection
   StoreFileWriter createWriter(CreateStoreFileWriterParams params) throws 
IOException;
 
   /**
-   * Saves StoreFileTracker implementations specific configurations into the 
table descriptors.
+   * Adds StoreFileTracker implementations specific configurations into the 
table descriptor.
* 
* This is used to avoid accidentally data loss when changing the cluster 
level store file tracker
* implementation, and also possible misconfiguration between master and 
region servers.
* 
* See HBASE-26246 for more details.
* @param builder The table descriptor builder for the given table.
*/
-  void persistConfiguration(TableDescriptorBuilder builder);
+  TableDescriptor updateWithTrackerConfigs(TableDescriptorBuilder builder);

Review comment:
   > IMO, i'd just return the Builder back and let the caller build() when 
they're ready.
   
   Yeah, I guess that would be better. WDYT, @Apache9 ?
   
   








[GitHub] [hbase-connectors] Apache-HBase commented on pull request #86: HBASE-26354 [hbase-connectors] Added python client for HBase thrift service

2021-10-13 Thread GitBox


Apache-HBase commented on pull request #86:
URL: https://github.com/apache/hbase-connectors/pull/86#issuecomment-941841232










[GitHub] [hbase] Apache9 commented on a change in pull request #3721: HBASE-26326 CreateTableProcedure fails when FileBasedStoreFileTracker…

2021-10-13 Thread GitBox


Apache9 commented on a change in pull request #3721:
URL: https://github.com/apache/hbase/pull/3721#discussion_r728004188



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/storefiletracker/StoreFileTracker.java
##
@@ -75,13 +76,13 @@ void replace(Collection compactedFiles, 
Collection
   StoreFileWriter createWriter(CreateStoreFileWriterParams params) throws 
IOException;
 
   /**
-   * Saves StoreFileTracker implementations specific configurations into the 
table descriptors.
+   * Adds StoreFileTracker implementations specific configurations into the 
table descriptor.
* 
* This is used to avoid accidentally data loss when changing the cluster 
level store file tracker
* implementation, and also possible misconfiguration between master and 
region servers.
* 
* See HBASE-26246 for more details.
* @param builder The table descriptor builder for the given table.
*/
-  void persistConfiguration(TableDescriptorBuilder builder);
+  TableDescriptor updateWithTrackerConfigs(TableDescriptorBuilder builder);

Review comment:
   Just return void is enough. The upper layer could build it after calling 
this method.
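The suggested shape — the tracker mutates the builder in place and returns void, and the caller decides when to `build()` — can be sketched with toy classes. These are illustrative stand-ins, not the real `TableDescriptorBuilder`/`TableDescriptor`:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-ins for TableDescriptorBuilder / TableDescriptor.
public class BuilderDemo {
  public static class TableDescriptorBuilder {
    private final Map<String, String> values = new HashMap<>();
    public TableDescriptorBuilder setValue(String k, String v) { values.put(k, v); return this; }
    public Map<String, String> build() { return new HashMap<>(values); } // the "TableDescriptor"
  }

  // Tracker side: add implementation-specific config, return void.
  public static void updateWithTrackerConfigs(TableDescriptorBuilder builder) {
    builder.setValue("hbase.store.file-tracker.impl", "FILE"); // key name illustrative
  }

  public static void main(String[] args) {
    TableDescriptorBuilder builder = new TableDescriptorBuilder();
    updateWithTrackerConfigs(builder);                 // mutates in place
    Map<String, String> descriptor = builder.build();  // caller builds when ready
    System.out.println(descriptor.get("hbase.store.file-tracker.impl")); // FILE
  }
}
```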








[GitHub] [hbase] Apache-HBase commented on pull request #3721: HBASE-26326 CreateTableProcedure fails when FileBasedStoreFileTracker…

2021-10-13 Thread GitBox


Apache-HBase commented on pull request #3721:
URL: https://github.com/apache/hbase/pull/3721#issuecomment-941532157










[GitHub] [hbase] wchevreuil merged pull request #3721: HBASE-26326 CreateTableProcedure fails when FileBasedStoreFileTracker…

2021-10-13 Thread GitBox


wchevreuil merged pull request #3721:
URL: https://github.com/apache/hbase/pull/3721


   






[GitHub] [hbase] Apache9 commented on pull request #3748: HBASE-26353 Support loadable dictionaries in hbase-compression-zstd

2021-10-13 Thread GitBox


Apache9 commented on pull request #3748:
URL: https://github.com/apache/hbase/pull/3748#issuecomment-941845844


   > Note: Beware, if the dictionary is lost, the data will not be 
decompressable.
   
   Haven't read the code yet, but is it possible to copy the dict into the 
hbase storage so it is controlled by us?






[GitHub] [hbase] joshelser commented on a change in pull request #3743: HBASE-26350 Add a DEBUG when we fail the SASL handshake

2021-10-13 Thread GitBox


joshelser commented on a change in pull request #3743:
URL: https://github.com/apache/hbase/pull/3743#discussion_r727332522



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/ServerRpcConnection.java
##
@@ -372,6 +372,7 @@ public void saslReadAndProcess(ByteBuff saslToken) throws 
IOException,
 replyToken = saslServer.evaluateResponse(saslToken.hasArray()?
 saslToken.array() : saslToken.toBytes());
   } catch (IOException e) {
+RpcServer.LOG.debug("Failed to execute SASL handshake", e);

Review comment:
   Yup! slf4j largely prevents this from being an expensive call (the pain 
is often concatenating the strings together).
   
   Thanks for looking!
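The point about disabled-level logging being cheap also holds for supplier-based logging in the JDK, which makes it easy to demonstrate with the stdlib alone: the message is never built when the level is off. slf4j's parameterized `LOG.debug("...", e)` behaves analogously for message formatting; this sketch uses `java.util.logging` only because it is self-contained:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Illustrative sketch: with a Supplier-based log call, the message is only
// constructed when the level is enabled, so debug statements on hot paths
// cost little in production.
public class LazyLog {
  public static int buildCountAfterFineLog() {
    Logger log = Logger.getLogger("demo");
    log.setLevel(Level.INFO); // FINE ("debug") is disabled
    final int[] built = {0};
    log.fine(() -> {
      built[0]++; // runs only if FINE were enabled
      return "expensive " + "message";
    });
    return built[0];
  }

  public static void main(String[] args) {
    System.out.println(buildCountAfterFineLog()); // 0: the supplier never ran
  }
}
```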

##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/ServerRpcConnection.java
##
@@ -372,6 +372,7 @@ public void saslReadAndProcess(ByteBuff saslToken) throws 
IOException,
 replyToken = saslServer.evaluateResponse(saslToken.hasArray()?
 saslToken.array() : saslToken.toBytes());
   } catch (IOException e) {
+RpcServer.LOG.debug("Failed to execute SASL handshake", e);

Review comment:
   Good question. Only because that's what every other log message in this 
class does.
   
   I'm guessing this was an attempt to consolidate RPC debugging to a single 
class (when we had both NIO and Netty impls), but I'm not 100% sure.








[GitHub] [hbase] nkalmar edited a comment on pull request #3737: HBASE-26340 - fix RegionSizeCalculator getLEngth to bytes instead of …

2021-10-13 Thread GitBox


nkalmar edited a comment on pull request #3737:
URL: https://github.com/apache/hbase/pull/3737#issuecomment-941196132










[GitHub] [hbase] attilapiros commented on pull request #3737: HBASE-26340 - fix RegionSizeCalculator getLEngth to bytes instead of …

2021-10-13 Thread GitBox


attilapiros commented on pull request #3737:
URL: https://github.com/apache/hbase/pull/3737#issuecomment-941163910


   Sorry, I did not get this change. 
   
   I think the calculated values will be the same before and after this PR. 
Especially as the tests are not updated, i.e.:
   
https://github.com/apache/hbase/blob/6e6393350b54db486e103cf5bd2dd6869728aecc/hbase-mapreduce/src/test/java/org/apache/hadoop/hbase/mapreduce/TestRegionSizeCalculator.java#L70-L72
   
   Am I missing something?






[GitHub] [hbase] GeorryHuang commented on a change in pull request #3746: HBASE-26348 Implement a special procedure to migrate rs group informa…

2021-10-13 Thread GitBox


GeorryHuang commented on a change in pull request #3746:
URL: https://github.com/apache/hbase/pull/3746#discussion_r727654229



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/ModifyTableDescriptorProcedure.java
##
@@ -0,0 +1,161 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master.procedure;
+
+import java.io.IOException;
+import java.util.Optional;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.procedure2.ProcedureStateSerializer;
+import org.apache.hadoop.hbase.procedure2.ProcedureSuspendedException;
+import org.apache.hadoop.hbase.procedure2.ProcedureYieldException;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProcedureProtos.ModifyTableDescriptorState;
+import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProcedureProtos.ModifyTableDescriptorStateData;
+
+/**
+ * The procedure will only update the table descriptor without reopening all 
the regions.
+ * 
+ * It is usually used for migrating when upgrading, where we need to add 
something into the table
+ * descriptor, such as the rs group information.
+ */
+@InterfaceAudience.Private
+public abstract class ModifyTableDescriptorProcedure
+  extends AbstractStateMachineTableProcedure<ModifyTableDescriptorState> {
+
+  private static final Logger LOG = LoggerFactory.getLogger(ModifyTableProcedure.class);

Review comment:
   Shouldn't we use `ModifyTableDescriptorProcedure` as the Logger class?








[GitHub] [hbase] Apache-HBase commented on pull request #3747: [HBASE-26351] Implement a table-based region grouping strategy for Re…

2021-10-13 Thread GitBox


Apache-HBase commented on pull request #3747:
URL: https://github.com/apache/hbase/pull/3747#issuecomment-941203857










[GitHub] [hbase] Apache9 commented on a change in pull request #3746: HBASE-26348 Implement a special procedure to migrate rs group informa…

2021-10-13 Thread GitBox


Apache9 commented on a change in pull request #3746:
URL: https://github.com/apache/hbase/pull/3746#discussion_r727654807



##
File path: 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/ModifyTableDescriptorProcedure.java
##
@@ -0,0 +1,161 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.master.procedure;
+
+import java.io.IOException;
+import java.util.Optional;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.TableDescriptor;
+import org.apache.hadoop.hbase.procedure2.ProcedureStateSerializer;
+import org.apache.hadoop.hbase.procedure2.ProcedureSuspendedException;
+import org.apache.hadoop.hbase.procedure2.ProcedureYieldException;
+import org.apache.yetus.audience.InterfaceAudience;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
+import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProcedureProtos.ModifyTableDescriptorState;
+import 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProcedureProtos.ModifyTableDescriptorStateData;
+
+/**
+ * The procedure will only update the table descriptor without reopening all 
the regions.
+ * 
+ * It is usually used for migrating when upgrading, where we need to add 
something into the table
+ * descriptor, such as the rs group information.
+ */
+@InterfaceAudience.Private
+public abstract class ModifyTableDescriptorProcedure
+  extends AbstractStateMachineTableProcedure<ModifyTableDescriptorState> {
+
+  private static final Logger LOG = LoggerFactory.getLogger(ModifyTableProcedure.class);

Review comment:
   Ah, yes, this is a typo. Let me fix.








[GitHub] [hbase] shahrs87 commented on pull request #3577: HBASE-26185 Return mutable list in AssignmentManager#getExcludedServersForSystemTable

2021-10-13 Thread GitBox


shahrs87 commented on pull request #3577:
URL: https://github.com/apache/hbase/pull/3577#issuecomment-941144839










[GitHub] [hbase] Apache-HBase commented on pull request #3741: HBASE-26344 Fix Bug for MultiByteBuff.put(int, byte)

2021-10-13 Thread GitBox


Apache-HBase commented on pull request #3741:
URL: https://github.com/apache/hbase/pull/3741#issuecomment-941867866










[GitHub] [hbase] Apache-HBase commented on pull request #3748: HBASE-26353 Support loadable dictionaries in hbase-compression-zstd

2021-10-13 Thread GitBox


Apache-HBase commented on pull request #3748:
URL: https://github.com/apache/hbase/pull/3748#issuecomment-942585426


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | +0 :ok: |  reexec  |   0m 30s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 16s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m 20s |  master passed  |
   | +1 :green_heart: |  compile  |   1m 25s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   0m 41s |  master passed  |
   | +1 :green_heart: |  spotbugs  |   1m 21s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 16s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m 10s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 21s |  the patch passed  |
   | -0 :warning: |  javac  |   0m 27s |  
hbase-compression_hbase-compression-zstd generated 1 new + 0 unchanged - 0 
fixed = 1 total (was 0)  |
   | -0 :warning: |  checkstyle  |   0m 26s |  hbase-common: The patch 
generated 1 new + 5 unchanged - 0 fixed = 6 total (was 5)  |
   | -0 :warning: |  checkstyle  |   0m 12s |  
hbase-compression/hbase-compression-zstd: The patch generated 1 new + 2 
unchanged - 0 fixed = 3 total (was 2)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |  22m  0s |  Patch does not cause any 
errors with Hadoop 3.1.2 3.2.1 3.3.0.  |
   | +1 :green_heart: |  spotbugs  |   1m 42s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 27s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  48m 51s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3748/2/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3748 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
checkstyle compile |
   | uname | Linux 61c6a7457421 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / ede4d2715d |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | javac | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3748/2/artifact/yetus-general-check/output/diff-compile-javac-hbase-compression_hbase-compression-zstd.txt
 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3748/2/artifact/yetus-general-check/output/diff-checkstyle-hbase-common.txt
 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3748/2/artifact/yetus-general-check/output/diff-checkstyle-hbase-compression_hbase-compression-zstd.txt
 |
   | Max. process+thread count | 96 (vs. ulimit of 3) |
   | modules | C: hbase-common hbase-compression/hbase-compression-zstd U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3748/2/console
 |
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HBASE-26271) Cleanup the broken store files under data directory

2021-10-13 Thread Szabolcs Bukros (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17428394#comment-17428394
 ] 

Szabolcs Bukros commented on HBASE-26271:
-

[~zhangduo] I prepared a new PR, on the correct branch this time. I also added 
metrics for the chore and a REST endpoint to easily access those metrics. Could 
you please take a look? [Please find it 
here.|https://github.com/apache/hbase/pull/3751]

> Cleanup the broken store files under data directory
> ---
>
> Key: HBASE-26271
> URL: https://issues.apache.org/jira/browse/HBASE-26271
> Project: HBase
>  Issue Type: Sub-task
>  Components: HFile
>Reporter: Duo Zhang
>Assignee: Szabolcs Bukros
>Priority: Major
>
> As for some new store file tracker implementation, we allow flush/compaction 
> to write directly to data directory, so if we crash in the middle, there will 
> be broken store files left in the data directory.
> We should find a proper way to delete these broken files.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hbase] BukrosSzabolcs opened a new pull request #3751: HBASE-26271: Cleanup the broken store files under data directory

2021-10-13 Thread GitBox


BukrosSzabolcs opened a new pull request #3751:
URL: https://github.com/apache/hbase/pull/3751


   Add a new chore to delete leftover files when file-based storefile
   handling is used
   Expose the target files of currently running compactions for easier
   validation
   Store chore metrics on RS and push them for aggregation to Master
   Add a new REST endpoint to get chore metrics from Master
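
   The cleanup step in the first item above amounts to a set difference: any
   file in the store's data directory that is neither tracked by the store
   file tracker nor the target of a running compaction is a leftover from a
   crash and can be deleted. A minimal, language-agnostic sketch of that
   logic (the function name and parameters are illustrative, not the actual
   HBase API):

   ```python
   def find_broken_store_files(data_dir_files, tracked_files, compaction_targets):
       """Return files in the data directory that are neither tracked by the
       store file tracker nor being written by a currently running compaction."""
       live = set(tracked_files) | set(compaction_targets)
       return sorted(f for f in data_dir_files if f not in live)

   # Example: f3 was left behind by a crashed compaction and is safe to delete.
   broken = find_broken_store_files(
       data_dir_files=["f1", "f2", "f3"],
       tracked_files=["f1", "f2"],
       compaction_targets=[],
   )
   # broken == ["f3"]
   ```

   Exposing the target files of in-flight compactions (the second item) is
   what makes this safe: without them, a file mid-write would look untracked
   and be deleted out from under the compaction.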






[GitHub] [hbase] Apache-HBase commented on pull request #3748: HBASE-26353 Support loadable dictionaries in hbase-compression-zstd

2021-10-13 Thread GitBox


Apache-HBase commented on pull request #3748:
URL: https://github.com/apache/hbase/pull/3748#issuecomment-942575775


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  5s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 18s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   5m  9s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 46s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   9m  8s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 15s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   5m  3s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 45s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   9m 10s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   1m 36s |  hbase-common in the patch failed.  |
   | +1 :green_heart: |  unit  |   0m 47s |  hbase-compression-zstd in the 
patch passed.  |
   |  |   |  36m 49s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3748/2/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3748 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 4c1685836c68 4.15.0-143-generic #147-Ubuntu SMP Wed Apr 14 
16:10:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / ede4d2715d |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   | unit | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3748/2/artifact/yetus-jdk11-hadoop3-check/output/patch-unit-hbase-common.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3748/2/testReport/
 |
   | Max. process+thread count | 271 (vs. ulimit of 3) |
   | modules | C: hbase-common hbase-compression/hbase-compression-zstd U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3748/2/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hbase] Apache-HBase commented on pull request #3748: HBASE-26353 Support loadable dictionaries in hbase-compression-zstd

2021-10-13 Thread GitBox


Apache-HBase commented on pull request #3748:
URL: https://github.com/apache/hbase/pull/3748#issuecomment-942571771


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 26s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  4s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 16s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 44s |  master passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  master passed  |
   | +1 :green_heart: |  shadedjars  |   8m 17s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 18s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 51s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 45s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   8m 15s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   1m  5s |  hbase-common in the patch failed.  |
   | +1 :green_heart: |  unit  |   0m 48s |  hbase-compression-zstd in the 
patch passed.  |
   |  |   |  31m 23s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3748/2/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3748 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 7a564a7fca1a 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / ede4d2715d |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | unit | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3748/2/artifact/yetus-jdk8-hadoop3-check/output/patch-unit-hbase-common.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3748/2/testReport/
 |
   | Max. process+thread count | 341 (vs. ulimit of 3) |
   | modules | C: hbase-common hbase-compression/hbase-compression-zstd U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3748/2/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hbase] Apache-HBase commented on pull request #3750: HBASE-26359 Loosen Dockerfile pinned package versions for `create-release/mac-sshd-gpg-agent/Dockerfile`

2021-10-13 Thread GitBox


Apache-HBase commented on pull request #3750:
URL: https://github.com/apache/hbase/pull/3750#issuecomment-942567945


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 59s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  4s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 17s |  Maven dependency ordering for branch  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m  7s |  Maven dependency ordering for patch  |
   ||| _ Other Tests _ |
   |  |   |   2m 25s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3750/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3750 |
   | Optional Tests |  |
   | uname | Linux 11eaeb5a4eeb 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / ede4d2715d |
   | Max. process+thread count | 56 (vs. ulimit of 3) |
   | modules | C:  U:  |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3750/1/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hbase] Apache-HBase commented on pull request #3750: HBASE-26359 Loosen Dockerfile pinned package versions for `create-release/mac-sshd-gpg-agent/Dockerfile`

2021-10-13 Thread GitBox


Apache-HBase commented on pull request #3750:
URL: https://github.com/apache/hbase/pull/3750#issuecomment-942567766


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 27s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 16s |  Maven dependency ordering for branch  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m  7s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  hadolint  |   0m  1s |  There were no new hadolint 
issues.  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  There were no new shellcheck 
issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   ||| _ Other Tests _ |
   | +0 :ok: |  asflicense  |   0m  0s |  ASF License check generated no 
output?  |
   |  |   |   2m  7s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3750/1/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3750 |
   | Optional Tests | dupname asflicense hadolint shellcheck shelldocs |
   | uname | Linux 922282ad743f 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / ede4d2715d |
   | Max. process+thread count | 46 (vs. ulimit of 3) |
   | modules | C:  U:  |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3750/1/console
 |
   | versions | git=2.17.1 maven=3.6.3 shellcheck=0.4.6 
hadolint=1.17.5-0-g443423c |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hbase] Apache-HBase commented on pull request #3750: HBASE-26359 Loosen Dockerfile pinned package versions for `create-release/mac-sshd-gpg-agent/Dockerfile`

2021-10-13 Thread GitBox


Apache-HBase commented on pull request #3750:
URL: https://github.com/apache/hbase/pull/3750#issuecomment-942567561


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 28s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 16s |  Maven dependency ordering for branch  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m  7s |  Maven dependency ordering for patch  |
   ||| _ Other Tests _ |
   |  |   |   1m 45s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3750/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3750 |
   | Optional Tests |  |
   | uname | Linux 559b8efcb069 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 
23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / ede4d2715d |
   | Max. process+thread count | 50 (vs. ulimit of 3) |
   | modules | C:  U:  |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3750/1/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hbase] ndimiduk opened a new pull request #3750: HBASE-26359 Loosen Dockerfile pinned package versions for `create-release/mac-sshd-gpg-agent/Dockerfile`

2021-10-13 Thread GitBox


ndimiduk opened a new pull request #3750:
URL: https://github.com/apache/hbase/pull/3750


   Details are identical as on https://issues.apache.org/jira/browse/HBASE-24631






[jira] [Created] (HBASE-26359) Loosen Dockerfile pinned package versions for `create-release/mac-sshd-gpg-agent/Dockerfile`

2021-10-13 Thread Nick Dimiduk (Jira)
Nick Dimiduk created HBASE-26359:


 Summary: Loosen Dockerfile pinned package versions for 
`create-release/mac-sshd-gpg-agent/Dockerfile`
 Key: HBASE-26359
 URL: https://issues.apache.org/jira/browse/HBASE-26359
 Project: HBase
  Issue Type: Task
  Components: build, community
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk


We need to apply a similar fix as was done for other dockerfiles in HBASE-24631.





[GitHub] [hbase] Apache-HBase commented on pull request #3721: HBASE-26326 CreateTableProcedure fails when FileBasedStoreFileTracker…

2021-10-13 Thread GitBox


Apache-HBase commented on pull request #3721:
URL: https://github.com/apache/hbase/pull/3721#issuecomment-942561764


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 18s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  4s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ HBASE-26067 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 17s |  HBASE-26067 passed  |
   | +1 :green_heart: |  compile  |   1m  4s |  HBASE-26067 passed  |
   | +1 :green_heart: |  shadedjars  |   9m 16s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  HBASE-26067 passed  |
   | -0 :warning: |  patch  |  10m  2s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m  8s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  2s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  2s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   9m  8s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 220m 33s |  hbase-server in the patch passed.  
|
   |  |   | 253m 52s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3721/17/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3721 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux bafde08461d1 4.15.0-143-generic #147-Ubuntu SMP Wed Apr 14 
16:10:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-26067 / c4d7d28911 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3721/17/testReport/
 |
   | Max. process+thread count | 3316 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3721/17/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hbase] Apache-HBase commented on pull request #3721: HBASE-26326 CreateTableProcedure fails when FileBasedStoreFileTracker…

2021-10-13 Thread GitBox


Apache-HBase commented on pull request #3721:
URL: https://github.com/apache/hbase/pull/3721#issuecomment-942489990


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 39s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  4s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ HBASE-26067 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 41s |  HBASE-26067 passed  |
   | +1 :green_heart: |  compile  |   1m 34s |  HBASE-26067 passed  |
   | +1 :green_heart: |  shadedjars  |  10m 20s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  HBASE-26067 passed  |
   | -0 :warning: |  patch  |  11m 19s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 43s |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 37s |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 37s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |  11m  0s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 153m 22s |  hbase-server in the patch passed.  
|
   |  |   | 193m 38s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3721/17/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3721 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 9dc663f47ebb 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-26067 / c4d7d28911 |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3721/17/testReport/
 |
   | Max. process+thread count | 4119 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3721/17/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[jira] [Commented] (HBASE-21154) Remove hbase:namespace table; fold it into hbase:meta

2021-10-13 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-21154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17428331#comment-17428331
 ] 

Sean Busbey commented on HBASE-21154:
-

FWIW I think it would benefit us for many reasons to get this into branch-2 
releases. However, I would vote against a non-beta release that included this 
change currently due to the lack of documentation mentioned earlier in this 
jira and tracked by HBASE-21533. We're going on 3 years since that issue's 
creation, so I'm not sure it's likely we can get something together in time for 
cutting a branch for 2.5.0 RCs at the end of the month.

> Remove hbase:namespace table; fold it into hbase:meta
> -
>
> Key: HBASE-21154
> URL: https://issues.apache.org/jira/browse/HBASE-21154
> Project: HBase
>  Issue Type: Improvement
>  Components: meta
>Reporter: Michael Stack
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0-alpha-1
>
> Attachments: HBASE-21154-v1.patch, HBASE-21154-v2.patch, 
> HBASE-21154-v4.patch, HBASE-21154-v5.patch, HBASE-21154-v6.patch, 
> HBASE-21154-v7.patch, HBASE-21154.patch
>
>
> Namespace table is a small system table. Usually it has two rows. It must be 
> assigned before user tables but after hbase:meta goes out. Its presence 
> complicates our startup and is a constant source of grief when for whatever 
> reason, it is not up and available. In fact, master startup is predicated on 
> hbase:namespace being assigned and will not make progress unless it is up.
> Lets just add a new 'ns' column family to hbase:meta for namespace.
> Here is a default ns table content:
> {code}
> hbase(main):023:0* scan 'hbase:namespace'
> ROW                    COLUMN+CELL
>  default               column=info:d, timestamp=1526694059106, value=\x0A\x07default
>  hbase                 column=info:d, timestamp=1526694059461, value=\x0A\x05hbase
> {code}
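
The `info:d` values in the scan output above are protobuf-encoded namespace descriptors: `\x0A` is field 1 with wire type 2 (length-delimited), the next byte is the length, and the remaining bytes are the namespace name. A minimal sketch of decoding such a value (illustrative only, not the actual HBase parser, which uses the generated protobuf classes):

```python
def decode_namespace_value(raw: bytes) -> str:
    """Decode a minimal protobuf message whose field 1 is a
    length-delimited string holding the namespace name."""
    # 0x0A = (field_number 1 << 3) | wire_type 2 (length-delimited)
    assert raw[0] == 0x0A, "expected field 1, wire type 2"
    length = raw[1]  # a single-byte varint suffices for short names
    return raw[2:2 + length].decode("utf-8")

# decode_namespace_value(b"\x0A\x07default") == "default"
# decode_namespace_value(b"\x0A\x05hbase") == "hbase"
```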





[jira] [Commented] (HBASE-26286) Add support for specifying store file tracker when restoring or cloning snapshot

2021-10-13 Thread Josh Elser (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-26286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17428305#comment-17428305
 ] 

Josh Elser commented on HBASE-26286:


Just caught up with Wellington and Szabolcs; since Wellington is close to a 
patch in HBASE-26328, Szabolcs can try to pick this up. Sounds like a 
super-useful tool: a non-destructive way to play with the new SFT impl.

> Add support for specifying store file tracker when restoring or cloning 
> snapshot
> 
>
> Key: HBASE-26286
> URL: https://issues.apache.org/jira/browse/HBASE-26286
> Project: HBase
>  Issue Type: Sub-task
>  Components: HFile, snapshots
>Reporter: Duo Zhang
>Assignee: Szabolcs Bukros
>Priority: Major
>
> As discussed in HBASE-26280.
> https://issues.apache.org/jira/browse/HBASE-26280?focusedCommentId=17414894&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17414894





[jira] [Assigned] (HBASE-26286) Add support for specifying store file tracker when restoring or cloning snapshot

2021-10-13 Thread Josh Elser (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser reassigned HBASE-26286:
--

Assignee: Szabolcs Bukros

> Add support for specifying store file tracker when restoring or cloning 
> snapshot
> 
>
> Key: HBASE-26286
> URL: https://issues.apache.org/jira/browse/HBASE-26286
> Project: HBase
>  Issue Type: Sub-task
>  Components: HFile, snapshots
>Reporter: Duo Zhang
>Assignee: Szabolcs Bukros
>Priority: Major
>
> As discussed in HBASE-26280.
> https://issues.apache.org/jira/browse/HBASE-26280?focusedCommentId=17414894&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17414894





[GitHub] [hbase] Apache-HBase commented on pull request #3690: HBASE-26045 Master control the global throughput of all compaction servers

2021-10-13 Thread GitBox


Apache-HBase commented on pull request #3690:
URL: https://github.com/apache/hbase/pull/3690#issuecomment-942426290


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 31s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  5s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ HBASE-25714 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 32s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   3m 38s |  HBASE-25714 passed  |
   | +1 :green_heart: |  compile  |   3m  8s |  HBASE-25714 passed  |
   | +1 :green_heart: |  shadedjars  |   7m 51s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 58s |  HBASE-25714 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 16s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 38s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 10s |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 10s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   7m 41s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  1s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 46s |  hbase-protocol-shaded in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m 13s |  hbase-client in the patch passed.  
|
   | +1 :green_heart: |  unit  | 149m 42s |  hbase-server in the patch passed.  
|
   | +1 :green_heart: |  unit  |   4m 44s |  hbase-thrift in the patch passed.  
|
   | +1 :green_heart: |  unit  |   7m 14s |  hbase-shell in the patch passed.  |
   |  |   | 200m 58s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3690/5/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3690 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux 19189177da74 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 
23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-25714 / e3553ad5d9 |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3690/5/testReport/
 |
   | Max. process+thread count | 4465 (vs. ulimit of 3) |
   | modules | C: hbase-protocol-shaded hbase-client hbase-server hbase-thrift 
hbase-shell U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3690/5/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hbase] Apache-HBase commented on pull request #3690: HBASE-26045 Master control the global throughput of all compaction servers

2021-10-13 Thread GitBox


Apache-HBase commented on pull request #3690:
URL: https://github.com/apache/hbase/pull/3690#issuecomment-942425224


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 34s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  5s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ HBASE-25714 Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   4m 11s |  HBASE-25714 passed  |
   | +1 :green_heart: |  compile  |   3m 41s |  HBASE-25714 passed  |
   | +1 :green_heart: |  shadedjars  |   7m 45s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 28s |  HBASE-25714 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 16s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m 11s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 43s |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m 43s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   7m 47s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 24s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m  1s |  hbase-protocol-shaded in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m 18s |  hbase-client in the patch passed.  
|
   | +1 :green_heart: |  unit  | 144m 36s |  hbase-server in the patch passed.  
|
   | +1 :green_heart: |  unit  |   5m  8s |  hbase-thrift in the patch passed.  
|
   | +1 :green_heart: |  unit  |   7m 19s |  hbase-shell in the patch passed.  |
   |  |   | 199m 44s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3690/5/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/3690 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux cbf084001495 4.15.0-156-generic #163-Ubuntu SMP Thu Aug 19 
23:31:58 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-25714 / e3553ad5d9 |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3690/5/testReport/
 |
   | Max. process+thread count | 3964 (vs. ulimit of 3) |
   | modules | C: hbase-protocol-shaded hbase-client hbase-server hbase-thrift 
hbase-shell U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/HBase/job/HBase-PreCommit-GitHub-PR/job/PR-3690/5/console
 |
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[jira] [Work started] (HBASE-26328) Clone snapshot doesn't load reference files into FILE SFT impl

2021-10-13 Thread Wellington Chevreuil (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-26328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-26328 started by Wellington Chevreuil.

> Clone snapshot doesn't load reference files into FILE SFT impl
> --
>
> Key: HBASE-26328
> URL: https://issues.apache.org/jira/browse/HBASE-26328
> Project: HBase
>  Issue Type: Sub-task
>  Components: HFile, snapshots
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Major
>
> After cloning a snapshot from a FILE SFT enabled table, noticed that none of 
> the cloned table files were added into the FILE SFT meta files; in fact, the 
> FILE SFT meta dir didn't even get created. Scanning this cloned table gives no 
> results, as none of the files are tracked. I believe we need to call 
> StoreFileTracker.add during the snapshot cloning.
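A minimal standalone sketch of the failure mode described above and the proposed fix. The names here (FileTracker, cloneSnapshot) are illustrative stand-ins, not the real HBase StoreFileTracker API; only the idea — that cloned reference files must be registered with the tracker, or scans see an empty store — comes from the report.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative stand-in for a store file tracker: scans only "see"
// files that have been registered via add().
class FileTracker {
    private final Set<String> tracked = new HashSet<>();

    void add(List<String> files) {
        tracked.addAll(files);
    }

    boolean isTracked(String file) {
        return tracked.contains(file);
    }
}

public class CloneSketch {
    // Models the clone path. Before the fix (applyFix=false), the clone
    // materializes reference files but never hands them to the tracker,
    // which matches the reported symptom: files exist, scans return nothing.
    static List<String> cloneSnapshot(List<String> snapshotFiles,
                                      FileTracker tracker, boolean applyFix) {
        List<String> cloned = new ArrayList<>();
        for (String f : snapshotFiles) {
            // Copy/reference the snapshot HFile into the cloned table.
            cloned.add("clone-of-" + f);
        }
        if (applyFix) {
            // The proposed fix: register the cloned files with the tracker
            // as part of the clone, so they become visible to scans.
            tracker.add(cloned);
        }
        return cloned;
    }

    public static void main(String[] args) {
        FileTracker tracker = new FileTracker();
        List<String> cloned =
            cloneSnapshot(List.of("hfile-1", "hfile-2"), tracker, true);
        for (String f : cloned) {
            System.out.println(f + " tracked=" + tracker.isTracked(f));
        }
    }
}
```

With applyFix=false the clone still produces the files, but isTracked() reports false for all of them — the cloned table is effectively empty to readers, which is the behavior the issue describes.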



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

