[jira] [Commented] (HBASE-21521) Expose master startup status via JMX and web UI

2022-09-06 Thread Xiaolin Ha (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-21521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17600662#comment-17600662
 ] 

Xiaolin Ha commented on HBASE-21521:


This is a very helpful issue. We can use the information it exposes to improve the 
startup progress reporting of the Master, especially after a whole-cluster failure 
where a lot of regionservers have crashed. [~shahrs87], is there any progress?

> Expose master startup status via JMX and web UI
> ---
>
> Key: HBASE-21521
> URL: https://issues.apache.org/jira/browse/HBASE-21521
> Project: HBase
>  Issue Type: Improvement
>  Components: master, UI
>Reporter: Andrew Kyle Purtell
>Assignee: Rushabh Shah
>Priority: Major
> Attachments: hbase-21521-1.png, hbase-21521-2.png, hbase-21521-3.png, 
> hbase-21521-4.png, hbase-21521-revised-1.png, hbase-21521-revised-2.png
>
>
> Add an internal API to the master for tracking startup progress. Expose this 
> information via JMX.
> Modify the master to bring the web UI up sooner. This will require tweaks to 
> various views to prevent attempts to retrieve state before the master is fully 
> up (or else expect NPEs). Currently, before the master has fully initialized, 
> an attempt to use the web UI will return a 500 error code and display an 
> error page.
> Finally, update the web UI to display startup progress, like HDFS-4249. 
> Filing this for branch-1. Need to check what, if anything, is available or 
> improved in branch-2 and master.
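
As a rough illustration of the kind of internal API described above, a startup-progress tracker exposed over JMX might look like the sketch below. The names (MasterStartupProgressMBean, advance, the ObjectName) are hypothetical and this is not the actual HBase implementation; it only shows the shape of "track phases internally, publish them as read-only JMX attributes".

{code:java}
import java.lang.management.ManagementFactory;
import javax.management.ObjectName;

// MasterStartupProgressMBean.java -- hypothetical management interface.
public interface MasterStartupProgressMBean {
  String getCurrentPhase();
  int getPercentComplete();
}

// MasterStartupProgress.java -- hypothetical standard MBean implementation.
public class MasterStartupProgress implements MasterStartupProgressMBean {
  private volatile String currentPhase = "INITIALIZING";
  private volatile int percentComplete = 0;

  @Override
  public String getCurrentPhase() {
    return currentPhase;
  }

  @Override
  public int getPercentComplete() {
    return percentComplete;
  }

  // Called by the master as it moves through its startup phases.
  public void advance(String phase, int percent) {
    this.currentPhase = phase;
    this.percentComplete = percent;
  }

  public static void main(String[] args) throws Exception {
    MasterStartupProgress progress = new MasterStartupProgress();
    // Register under an illustrative ObjectName so JMX clients (e.g. jconsole) can poll it.
    ManagementFactory.getPlatformMBeanServer().registerMBean(progress,
      new ObjectName("Hadoop:service=HBase,name=MasterStartupProgress"));
    progress.advance("ASSIGNING_META", 40);
  }
}
{code}

A web UI view could poll the same bean to render a progress page, in the spirit of HDFS-4249.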



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [hbase] Apache-HBase commented on pull request #4766: Update README.txt

2022-09-06 Thread GitBox


Apache-HBase commented on PR #4766:
URL: https://github.com/apache/hbase/pull/4766#issuecomment-1237989005

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m  1s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   ||| _ Patch Compile Tests _ |
   ||| _ Other Tests _ |
   |  |   |   2m 11s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4766/1/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4766 |
   | Optional Tests |  |
   | uname | Linux 2a52d39a1370 5.4.0-124-generic #140-Ubuntu SMP Thu Aug 4 
02:23:37 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 72c1a84750 |
   | Max. process+thread count | 37 (vs. ulimit of 3) |
   | modules | C: . U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4766/1/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #4766: Update README.txt

2022-09-06 Thread GitBox


Apache-HBase commented on PR #4766:
URL: https://github.com/apache/hbase/pull/4766#issuecomment-1237989158

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 37s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  3s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ master Compile Tests _ |
   ||| _ Patch Compile Tests _ |
   ||| _ Other Tests _ |
   |  |   |   2m 15s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4766/1/artifact/yetus-jdk8-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4766 |
   | Optional Tests |  |
   | uname | Linux 369eb7926c04 5.4.0-1081-aws #88~18.04.1-Ubuntu SMP Thu Jun 
23 16:29:17 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 72c1a84750 |
   | Max. process+thread count | 33 (vs. ulimit of 3) |
   | modules | C: . U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4766/1/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #4766: Update README.txt

2022-09-06 Thread GitBox


Apache-HBase commented on PR #4766:
URL: https://github.com/apache/hbase/pull/4766#issuecomment-1237990836

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 54s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  spotless  |   0m 49s |  branch has no errors when 
running spotless:check.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  spotless  |   0m 43s |  patch has no errors when 
running spotless:check.  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 14s |  The patch does not generate 
ASF License warnings.  |
   |  |   |   3m 53s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4766/1/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4766 |
   | Optional Tests | dupname asflicense spotless |
   | uname | Linux 8a432e323281 5.4.0-1081-aws #88~18.04.1-Ubuntu SMP Thu Jun 
23 16:29:17 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 72c1a84750 |
   | Max. process+thread count | 33 (vs. ulimit of 3) |
   | modules | C: . U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4766/1/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase-filesystem] steveloughran commented on pull request #35: HBASE-26483. [HBOSS] add support for createFile() and openFile(path)

2022-09-06 Thread GitBox


steveloughran commented on PR #35:
URL: https://github.com/apache/hbase-filesystem/pull/35#issuecomment-1237998614

   @apurtell can you take another look at this? I made sure it is working when 
tested against S3 through various Hadoop releases.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] bbeaudreault commented on pull request #4743: HBASE-27238 Backport backup restore to 2.x

2022-09-06 Thread GitBox


bbeaudreault commented on PR #4743:
URL: https://github.com/apache/hbase/pull/4743#issuecomment-1238019595

   Hey @rda3mon, thanks for the heads up but actually we need to submit this 
against branch-2. This feature might be too large to land in 2.5 now that it's 
released. Instead it will probably land in 2.6. 
   
   Can you close this and submit against branch-2 instead?
   
   Sorry I didn't see this last week because the iOS app doesn't make it easy 
to see the target branch. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] bbeaudreault commented on pull request #4743: HBASE-27238 Backport backup restore to 2.x

2022-09-06 Thread GitBox


bbeaudreault commented on PR #4743:
URL: https://github.com/apache/hbase/pull/4743#issuecomment-1238022542

   Generally I always submit PRs against master/branch-2 first. Then we can do 
release branches like branch-2.5 afterwards, if we decide to backport further. But 
it needs to be in branch-2, otherwise the feature would disappear from future 2.x 
releases. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (HBASE-27332) Remove RejectedExecutionHandler for long/short compaction thread pools

2022-09-06 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17600738#comment-17600738
 ] 

Hudson commented on HBASE-27332:


Results for branch branch-2
[build #637 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/637/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/637/General_20Nightly_20Build_20Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/637/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/637/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(x) {color:red}-1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/637/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Remove RejectedExecutionHandler for long/short compaction thread pools
> --
>
> Key: HBASE-27332
> URL: https://issues.apache.org/jira/browse/HBASE-27332
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction
>Affects Versions: 3.0.0-alpha-3, 2.4.13
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Minor
> Fix For: 2.6.0, 2.5.1, 3.0.0-alpha-4, 2.4.15
>
>
> As discussed in https://github.com/apache/hbase/pull/4725
> the max size of StealJobQueue is actually bounded only by the VM limit on array 
> size, and an OOM exception would occur before the rejection handler is ever 
> invoked. So StealJobQueue is effectively unbounded. I think the RejectionHandler 
> may cause some confusion and make the code a little puzzling.
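
To make the reasoning above concrete, here is a small self-contained sketch (a plain PriorityBlockingQueue stands in for HBase's StealJobQueue, and all names are illustrative) of why a rejection handler on an effectively unbounded work queue is dead code:

{code:java}
import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class UnboundedQueueRejectionDemo {
  public static void main(String[] args) {
    // Stand-in for StealJobQueue: an unbounded priority queue. Real compaction runners
    // are ordered by compaction priority; ordering by hashCode is only for this sketch.
    PriorityBlockingQueue<Runnable> queue =
      new PriorityBlockingQueue<>(11, Comparator.comparingInt(Object::hashCode));

    ThreadPoolExecutor pool = new ThreadPoolExecutor(1, 1, 60, TimeUnit.SECONDS, queue,
      // Rejection handler: never invoked, because an unbounded queue never reports "full".
      (r, executor) -> System.err.println("rejected: " + r));

    for (int i = 0; i < 10_000; i++) {
      pool.execute(() -> sleepQuietly(10)); // every task is queued, none is ever rejected
    }
    // Long before the queue could reach the VM array-length limit, the JVM would run
    // out of heap holding the queued tasks, so the handler cannot fire in practice.
    System.out.println("tasks waiting in queue: " + queue.size());
    pool.shutdownNow();
  }

  private static void sleepQuietly(long millis) {
    try {
      Thread.sleep(millis);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
  }
}
{code}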



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-27332) Remove RejectedExecutionHandler for long/short compaction thread pools

2022-09-06 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17600746#comment-17600746
 ] 

Hudson commented on HBASE-27332:


Results for branch branch-2.4
[build #421 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/421/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/421/General_20Nightly_20Build_20Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/421/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/421/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(x) {color:red}-1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/421/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Remove RejectedExecutionHandler for long/short compaction thread pools
> --
>
> Key: HBASE-27332
> URL: https://issues.apache.org/jira/browse/HBASE-27332
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction
>Affects Versions: 3.0.0-alpha-3, 2.4.13
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Minor
> Fix For: 2.6.0, 2.5.1, 3.0.0-alpha-4, 2.4.15
>
>
> As discussed in https://github.com/apache/hbase/pull/4725
> the max size of StealJobQueue is actually bounded only by the VM limit on array 
> size, and an OOM exception would occur before the rejection handler is ever 
> invoked. So StealJobQueue is effectively unbounded. I think the RejectionHandler 
> may cause some confusion and make the code a little puzzling.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [hbase] wchevreuil commented on a diff in pull request #4756: HBASE-27354 EOF thrown by WALEntryStream causes replication blocking

2022-09-06 Thread GitBox


wchevreuil commented on code in PR #4756:
URL: https://github.com/apache/hbase/pull/4756#discussion_r963639465


##
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/WALEntryStream.java:
##
@@ -255,15 +255,21 @@ private void dequeueCurrentLog() throws IOException {
* Returns whether the file is opened for writing.
*/
   private boolean readNextEntryAndRecordReaderPosition() throws IOException {
+long prePos = reader.getPosition();

Review Comment:
   > It's not a big problem if it just throws EOF as we'll retry. The big 
problem is that if we have read some entries into the WALEntryBatch and 
increased the totalBufferUsed, and the totalBufferUsed is not subtracted after 
throwing EOF, all peers will eventually block completely.
   
   One of our customers seems to be consistently reaching this EOF problem, per 
below exception trace. 
   
   `WARN org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader: 
Encountered a malformed edit, seeking back to last good position in file, from 
31912404 to 31912265
   java.io.EOFException: EOF while reading 106 WAL KVs; started reading at 
31912338 and read up to 31912404
   at 
org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:397)
   at 
org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:98)
   at 
org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:86)
   at 
org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.readNextEntryAndRecordReaderPosition(WALEntryStream.java:262)
   at 
org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.tryAdvanceEntry(WALEntryStream.java:176)
   at 
org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.hasNext(WALEntryStream.java:101)`
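
   As a simplified sketch of the accounting invariant under discussion (the names are 
illustrative and this is not the real ReplicationSourceWALReader/WALEntryStream code), 
the quota bookkeeping needs a release step on the failure path as well as on the 
normal shipping path:

   ```java
   import java.io.IOException;
   import java.util.ArrayList;
   import java.util.List;
   import java.util.concurrent.atomic.AtomicLong;

   /** Hypothetical entry source standing in for WALEntryStream in this sketch. */
   interface EntrySource {
     /** Returns the next WAL edit, or null when the stream is exhausted. */
     byte[] next() throws IOException;
   }

   /** Illustrative only; not the actual ReplicationSourceWALReader code. */
   public class WalBatchQuotaSketch {
     private final AtomicLong totalBufferUsed = new AtomicLong();
     private final long quota;

     public WalBatchQuotaSketch(long quota) {
       this.quota = quota;
     }

     public List<byte[]> readBatch(EntrySource source) throws IOException {
       List<byte[]> batch = new ArrayList<>();
       long acquired = 0;
       try {
         byte[] edit;
         while ((edit = source.next()) != null) {
           acquired += edit.length;
           totalBufferUsed.addAndGet(edit.length);
           batch.add(edit);
           if (totalBufferUsed.get() >= quota) {
             break; // quota reached, ship what we have so far
           }
         }
         return batch;
       } catch (IOException e) { // e.g. an EOFException from a partially written WAL
         // Release what this batch acquired before retrying; if this step is skipped,
         // the leaked bytes accumulate until every peer blocks on the shared quota.
         totalBufferUsed.addAndGet(-acquired);
         throw e;
       }
     }

     /** Normal release path, called once the batch has been shipped to the peer. */
     public void onBatchShipped(long shippedBytes) {
       totalBufferUsed.addAndGet(-shippedBytes);
     }
   }
   ```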
   
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache9 commented on a diff in pull request #4756: HBASE-27354 EOF thrown by WALEntryStream causes replication blocking

2022-09-06 Thread GitBox


Apache9 commented on code in PR #4756:
URL: https://github.com/apache/hbase/pull/4756#discussion_r963691316


##
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/WALEntryStream.java:
##
@@ -255,15 +255,21 @@ private void dequeueCurrentLog() throws IOException {
* Returns whether the file is opened for writing.
*/
   private boolean readNextEntryAndRecordReaderPosition() throws IOException {
+long prePos = reader.getPosition();

Review Comment:
   Then I do not think the fix here is enough? We should decrease the 
totalBufferUsed when calling resetReader?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache9 commented on a diff in pull request #4756: HBASE-27354 EOF thrown by WALEntryStream causes replication blocking

2022-09-06 Thread GitBox


Apache9 commented on code in PR #4756:
URL: https://github.com/apache/hbase/pull/4756#discussion_r963799161


##
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/WALEntryStream.java:
##
@@ -255,15 +255,21 @@ private void dequeueCurrentLog() throws IOException {
* Returns whether the file is opened for writing.
*/
   private boolean readNextEntryAndRecordReaderPosition() throws IOException {
+long prePos = reader.getPosition();

Review Comment:
   Checking the code, we have a special logic in ReplicationSourceWALReader for 
dealing with EOFException. Please see 
ReplicationSourceWALReader.handleEofException.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (HBASE-27328) Enforcer phase EvaluateBeanShell fails occasionally

2022-09-06 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17600813#comment-17600813
 ] 

Sean Busbey commented on HBASE-27328:
-

The license validator failing to run should fail the build, because it means 
our failsafe against obviously bad license info couldn't run.

The logging situation with multithreaded builds is pretty bad; sorry, this 
particular issue is kind of a nightmare with it. Are we sure both the enforcer 
plugin and the plugin we use to run Beanshell are threadsafe?

> Enforcer phase EvaluateBeanShell fails occasionally
> ---
>
> Key: HBASE-27328
> URL: https://issues.apache.org/jira/browse/HBASE-27328
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: Nick Dimiduk
>Priority: Major
>
> For example, from 
> https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/412/General_20Nightly_20Build_20Report/
> {noformat}
> [INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @ 
> hbase-shaded-client-byo-hadoop ---
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.EvaluateBeanshell failed 
> with message:
> Couldn't evaluate condition: File license = new 
> File("/home/jenkins/jenkins-home/workspace/HBase_Nightly_branch-2.4/component/hbase-shaded/hbase-shaded-client/target/maven-shared-archive-resources/META-INF/LICENSE");
> // Beanshell does not support try-with-resources,
> // so we must close this scanner manually
> Scanner scanner = new Scanner(license);
> while (scanner.hasNextLine()) {
>   if (scanner.nextLine().startsWith("ERROR:")) {
> scanner.close();
> return false;
>   }
> }
> scanner.close();
> return true;
> [INFO] No sources to compile
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-27328) Enforcer phase EvaluateBeanShell fails occasionally

2022-09-06 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17600815#comment-17600815
 ] 

Sean Busbey commented on HBASE-27328:
-

Can we reproduce this with a `-X` on Maven? The source for the Beanshell 
evaluation should include the exception that caused the failure:

https://github.com/apache/maven-enforcer/blob/master/enforcer-rules/src/main/java/org/apache/maven/plugins/enforcer/EvaluateBeanshell.java#L114

> Enforcer phase EvaluateBeanShell fails occasionally
> ---
>
> Key: HBASE-27328
> URL: https://issues.apache.org/jira/browse/HBASE-27328
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: Nick Dimiduk
>Priority: Major
>
> For example, from 
> https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/412/General_20Nightly_20Build_20Report/
> {noformat}
> [INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @ 
> hbase-shaded-client-byo-hadoop ---
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.EvaluateBeanshell failed 
> with message:
> Couldn't evaluate condition: File license = new 
> File("/home/jenkins/jenkins-home/workspace/HBase_Nightly_branch-2.4/component/hbase-shaded/hbase-shaded-client/target/maven-shared-archive-resources/META-INF/LICENSE");
> // Beanshell does not support try-with-resources,
> // so we must close this scanner manually
> Scanner scanner = new Scanner(license);
> while (scanner.hasNextLine()) {
>   if (scanner.nextLine().startsWith("ERROR:")) {
> scanner.close();
> return false;
>   }
> }
> scanner.close();
> return true;
> [INFO] No sources to compile
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [hbase] Apache9 commented on pull request #4763: HBASE-27314 Make index block be customized and configured

2022-09-06 Thread GitBox


Apache9 commented on PR #4763:
URL: https://github.com/apache/hbase/pull/4763#issuecomment-1238273077

   What is the difference compared to the PR for master?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (HBASE-27267) Delete causes timestamp to be negative

2022-09-06 Thread zhengsicheng (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17600816#comment-17600816
 ] 

zhengsicheng commented on HBASE-27267:
--

Merging https://issues.apache.org/jira/browse/HBASE-26036 into HBase 2.3.4 solves 
the problem.

> Delete causes timestamp to be negative
> --
>
> Key: HBASE-27267
> URL: https://issues.apache.org/jira/browse/HBASE-27267
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.3.4
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Major
> Attachments: image-2022-08-11-17-10-00-389.png, screenshot-1.png
>
>
> With client 1.1.6 and server 2.3.4, there is a case where the batch delete 
> timestamp is negative
> #  1. RegionServer log message:
> {code:java}
> 2022-07-19 12:13:29,324 WARN  
> [RS_OPEN_REGION-regionserver/HBASE-HOSTNAME1:16020-1.replicationSource.wal-reader.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.regiongroup-2,clusterB]
>  hbase.KeyValueUtil: Timestamp cannot be negative, ts=-4323977095312258207, 
> KeyValueBytesHex=\x00\x00\x00, offset=0, length=40
> 2022-07-19 12:13:29,324 WARN  
> [RS_OPEN_REGION-regionserver/HBASE-HOSTNAME1:16020-1.replicationSource.wal-reader.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.regiongroup-2,clusterB]
>  wal.ProtobufLogReader: Encountered a malformed edit, seeking back to last 
> good position in file, from 1099261 to 1078224
> java.io.EOFException: EOF  while reading 660 WAL KVs; started reading at 
> 1078317 and read up to 1099261
> at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:403)
> at 
> org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:97)
> at 
> org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:85)
> at 
> org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.readNextEntryAndRecordReaderPosition(WALEntryStream.java:264)
> at 
> org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.tryAdvanceEntry(WALEntryStream.java:178)
> at 
> org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.hasNext(WALEntryStream.java:103)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.readWALEntries(ReplicationSourceWALReader.java:230)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:145)
> Caused by: java.lang.IllegalArgumentException: Timestamp cannot be negative, 
> ts=-4323977095312258207, KeyValueBytesHex=\x00\x00\x00, offset=0, length=40
> at 
> org.apache.hadoop.hbase.KeyValueUtil.checkKeyValueBytes(KeyValueUtil.java:612)
> at org.apache.hadoop.hbase.KeyValue.<init>(KeyValue.java:346)
> at 
> org.apache.hadoop.hbase.KeyValueUtil.createKeyValueFromInputStream(KeyValueUtil.java:717)
> at 
> org.apache.hadoop.hbase.codec.KeyValueCodecWithTags$KeyValueDecoder.parseCell(KeyValueCodecWithTags.java:81)
> at org.apache.hadoop.hbase.codec.BaseDecoder.advance(BaseDecoder.java:68)
> at org.apache.hadoop.hbase.wal.WALEdit.readFromCells(WALEdit.java:276)
> at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:387)
> ... 7 more
> {code}
> # 2. Debugging the WAL file, we found that a delete operation is the cause
> {code:java}
> Sequence=365693989, table=tableA, region=148cedb7b8ca3145690800fd650e084d, at 
> write timestamp=Sat Jul 16 00:50:01 CST 2022
> 2022-07-22 22:09:43,244 ERROR [main] wal.WALPrettyPrinter: Timestamp is 
> negative row=rowkey1, column=d:act, timestamp=-4323977095312258207, 
> type=Delete
> {code}
> # 3. The user uses Spark to read/write HBase
> batchsize is 1
> {code:scala}
> def dataDeleteFromHbase(rdd: RDD[(String, String)], hbase_table: String, 
> hbase_instance: String, hbase_accesskey: String, accumulator: 
> LongAccumulator, buffersize: String, batchsize: Int): Unit = {
> rdd.foreachPartition(iterator => {
>   val partitionId = TaskContext.getPartitionId()
>   val conf = HBaseConfiguration.create()
>   val connection = SparkHbaseUtils.getconnection(conf)
>   val table = connection.getTable(TableName.valueOf(hbase_table))
>   var deleteList = new util.LinkedList[Delete]()
>   var count = 0
>   var batchCount = 0
>   while (iterator.hasNext) {
> val element = iterator.next
> val crc32 = new CRC32()
> crc32.update(s"${element._1}_${element._2}".getBytes())
> val crcArr = convertLow4bit2SmallEndan(crc32.getValue)
> val key = concat(DigestUtils.md5(s"${element._1}_${element._2}"), 
> crcArr)
> val delete = new Delete(key)
> deleteList.add(delete)
> count += 1
> if

[jira] [Commented] (HBASE-27328) Enforcer phase EvaluateBeanShell fails occasionally

2022-09-06 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17600817#comment-17600817
 ] 

Sean Busbey commented on HBASE-27328:
-

(the enforcer plugin has been marked threadsafe since 1.0.1 and it looks like 
the beanshell evaluator is threadsafe.)

> Enforcer phase EvaluateBeanShell fails occasionally
> ---
>
> Key: HBASE-27328
> URL: https://issues.apache.org/jira/browse/HBASE-27328
> Project: HBase
>  Issue Type: Task
>  Components: build
>Reporter: Nick Dimiduk
>Priority: Major
>
> For example, from 
> https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.4/412/General_20Nightly_20Build_20Report/
> {noformat}
> [INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @ 
> hbase-shaded-client-byo-hadoop ---
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.EvaluateBeanshell failed 
> with message:
> Couldn't evaluate condition: File license = new 
> File("/home/jenkins/jenkins-home/workspace/HBase_Nightly_branch-2.4/component/hbase-shaded/hbase-shaded-client/target/maven-shared-archive-resources/META-INF/LICENSE");
> // Beanshell does not support try-with-resources,
> // so we must close this scanner manually
> Scanner scanner = new Scanner(license);
> while (scanner.hasNextLine()) {
>   if (scanner.nextLine().startsWith("ERROR:")) {
> scanner.close();
> return false;
>   }
> }
> scanner.close();
> return true;
> [INFO] No sources to compile
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (HBASE-27267) Delete causes timestamp to be negative

2022-09-06 Thread zhengsicheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhengsicheng resolved HBASE-27267.
--
Fix Version/s: 2.4.5
   2.5.0
   3.0.0-alpha-1
   Resolution: Resolved

> Delete causes timestamp to be negative
> --
>
> Key: HBASE-27267
> URL: https://issues.apache.org/jira/browse/HBASE-27267
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.3.4
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Major
> Fix For: 2.4.5, 2.5.0, 3.0.0-alpha-1
>
> Attachments: image-2022-08-11-17-10-00-389.png, screenshot-1.png
>
>
> With client 1.1.6 and server 2.3.4, there is a case where the batch delete 
> timestamp is negative
> #  1. RegionServer log message:
> {code:java}
> 2022-07-19 12:13:29,324 WARN  
> [RS_OPEN_REGION-regionserver/HBASE-HOSTNAME1:16020-1.replicationSource.wal-reader.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.regiongroup-2,clusterB]
>  hbase.KeyValueUtil: Timestamp cannot be negative, ts=-4323977095312258207, 
> KeyValueBytesHex=\x00\x00\x00, offset=0, length=40
> 2022-07-19 12:13:29,324 WARN  
> [RS_OPEN_REGION-regionserver/HBASE-HOSTNAME1:16020-1.replicationSource.wal-reader.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.regiongroup-2,clusterB]
>  wal.ProtobufLogReader: Encountered a malformed edit, seeking back to last 
> good position in file, from 1099261 to 1078224
> java.io.EOFException: EOF  while reading 660 WAL KVs; started reading at 
> 1078317 and read up to 1099261
> at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:403)
> at 
> org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:97)
> at 
> org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:85)
> at 
> org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.readNextEntryAndRecordReaderPosition(WALEntryStream.java:264)
> at 
> org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.tryAdvanceEntry(WALEntryStream.java:178)
> at 
> org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.hasNext(WALEntryStream.java:103)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.readWALEntries(ReplicationSourceWALReader.java:230)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:145)
> Caused by: java.lang.IllegalArgumentException: Timestamp cannot be negative, 
> ts=-4323977095312258207, KeyValueBytesHex=\x00\x00\x00, offset=0, length=40
> at 
> org.apache.hadoop.hbase.KeyValueUtil.checkKeyValueBytes(KeyValueUtil.java:612)
> at org.apache.hadoop.hbase.KeyValue.<init>(KeyValue.java:346)
> at 
> org.apache.hadoop.hbase.KeyValueUtil.createKeyValueFromInputStream(KeyValueUtil.java:717)
> at 
> org.apache.hadoop.hbase.codec.KeyValueCodecWithTags$KeyValueDecoder.parseCell(KeyValueCodecWithTags.java:81)
> at org.apache.hadoop.hbase.codec.BaseDecoder.advance(BaseDecoder.java:68)
> at org.apache.hadoop.hbase.wal.WALEdit.readFromCells(WALEdit.java:276)
> at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:387)
> ... 7 more
> {code}
> # 2. Debugging the WAL file, we found that a delete operation is the cause
> {code:java}
> Sequence=365693989, table=tableA, region=148cedb7b8ca3145690800fd650e084d, at 
> write timestamp=Sat Jul 16 00:50:01 CST 2022
> 2022-07-22 22:09:43,244 ERROR [main] wal.WALPrettyPrinter: Timestamp is 
> negative row=rowkey1, column=d:act, timestamp=-4323977095312258207, 
> type=Delete
> {code}
> # 3. The user uses Spark to read/write HBase
> batchsize is 1
> {code:scala}
> def dataDeleteFromHbase(rdd: RDD[(String, String)], hbase_table: String, 
> hbase_instance: String, hbase_accesskey: String, accumulator: 
> LongAccumulator, buffersize: String, batchsize: Int): Unit = {
> rdd.foreachPartition(iterator => {
>   val partitionId = TaskContext.getPartitionId()
>   val conf = HBaseConfiguration.create()
>   val connection = SparkHbaseUtils.getconnection(conf)
>   val table = connection.getTable(TableName.valueOf(hbase_table))
>   var deleteList = new util.LinkedList[Delete]()
>   var count = 0
>   var batchCount = 0
>   while (iterator.hasNext) {
> val element = iterator.next
> val crc32 = new CRC32()
> crc32.update(s"${element._1}_${element._2}".getBytes())
> val crcArr = convertLow4bit2SmallEndan(crc32.getValue)
> val key = concat(DigestUtils.md5(s"${element._1}_${element._2}"), 
> crcArr)
> val delete = new Delete(key)
> deleteList.add(delete)
> count 

[GitHub] [hbase] Apache9 merged pull request #4761: HBASE-27340 Artifacts with resolved profiles (#4740)

2022-09-06 Thread GitBox


Apache9 merged PR #4761:
URL: https://github.com/apache/hbase/pull/4761


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache9 commented on pull request #4761: HBASE-27340 Artifacts with resolved profiles (#4740)

2022-09-06 Thread GitBox


Apache9 commented on PR #4761:
URL: https://github.com/apache/hbase/pull/4761#issuecomment-1238282066

   Let's get this done :)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #4762: HBASE-27215 Add support for sync replication

2022-09-06 Thread GitBox


Apache-HBase commented on PR #4762:
URL: https://github.com/apache/hbase/pull/4762#issuecomment-1238282432

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 51s |  Docker mode activated.  |
   | -0 :warning: |  yetus  |   0m  2s |  Unprocessed flag(s): 
--brief-report-file --spotbugs-strict-precheck --whitespace-eol-ignore-list 
--whitespace-tabs-ignore-list --quick-hadoopcheck  |
   ||| _ Prechecks _ |
   ||| _ HBASE-27109/table_based_rqs Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 56s |  HBASE-27109/table_based_rqs 
passed  |
   | +1 :green_heart: |  compile  |   0m 47s |  HBASE-27109/table_based_rqs 
passed  |
   | +1 :green_heart: |  shadedjars  |   4m  7s |  branch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  HBASE-27109/table_based_rqs 
passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m 31s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 43s |  the patch passed  |
   | +1 :green_heart: |  shadedjars  |   3m 56s |  patch has no errors when 
building our shaded downstream artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 206m  2s |  hbase-server in the patch passed.  
|
   |  |   | 225m 17s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4762/4/artifact/yetus-jdk11-hadoop3-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4762 |
   | Optional Tests | javac javadoc unit shadedjars compile |
   | uname | Linux a4b8f3869e18 5.4.0-1071-aws #76~18.04.1-Ubuntu SMP Mon Mar 
28 17:49:57 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | HBASE-27109/table_based_rqs / 49a9e82051 |
   | Default Java | AdoptOpenJDK-11.0.10+9 |
   |  Test Results | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4762/4/testReport/
 |
   | Max. process+thread count | 2806 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4762/4/console 
|
   | versions | git=2.17.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (HBASE-27340) Artifacts with resolved profiles

2022-09-06 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang resolved HBASE-27340.
---
Fix Version/s: 2.6.0
   2.5.1
   3.0.0-alpha-4
 Hadoop Flags: Reviewed
 Assignee: Michael Stack
   Resolution: Fixed

Merged to branch-2.5+.

Thanks [~stack]! Please fill in the release note for this great change.

> Artifacts with resolved profiles
> 
>
> Key: HBASE-27340
> URL: https://issues.apache.org/jira/browse/HBASE-27340
> Project: HBase
>  Issue Type: Brainstorming
>  Components: build, pom
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Minor
> Fix For: 2.6.0, 2.5.1, 3.0.0-alpha-4
>
>
> Brainstorming/Discussion. The maven-flatten-plugin makes it so published poms 
> are 'flattened'. The poms contain the runtime-necessary dependencies only, 
> 'build' and 'test' dependencies and plugins are dropped, versions are 
> resolved out of properties, and so on. The published poms are the barebones 
> minimum needed to run.
> With a switch, the plugin can also make it so the produced poms have all 
> profiles 'resolved' – making it so the produced poms have all resolved 
> hadoop2 or hadoop3 dependencies baked-in – based off which profile we used 
> building.
> (I've been interested in this flattening technique since I ran into a 
> downstreamer using hbase from a gradle build. Gradle does not respect 
> profiles. You can't specify that the gradle build pull in hbase with hadoop3 
> dependencies using 'profiles'. I notice too our [~gjacoby] , [~apurtell] et 
> al. up on the dev list talking about making a hadoop3 set of artifacts...who 
> might be interested in this direction).
> The attached patch adds the flatten plugin so folks can take a look-see. It 
> uncovers some locations where our versioning on dependencies is not explicit. 
> The workaround practiced here was adding hadoop2/hadoop3 profiles into 
> sub-modules that were missing them or moving problematic dependencies that 
> were outside of profiles under profiles in sub-modules that had them already. 
> For the latter, if the dependency specified excludes, the excludes were moved 
> up to the parent pom profile (parent pom profiles have dependencyManagement 
> sections... sub-modules have explicit dependency mentions... checks with 
> dependency:tree seem to show excludes continue to be effective).
> This is the switch that flattens profiles:   
> true
> This is the sort of complaint we had when the flatten plugin was having 
> trouble figuring out dependency versions – particularly hadoop versions
> {{[ERROR] Failed to execute goal 
> org.codehaus.mojo:flatten-maven-plugin:1.3.0:flatten (flatten) on project 
> hbase-hadoop2-compat: 3 problems were encountered while building the 
> effective model for org.apache.hbase:hbase-hadoop2-compat:2.5.1-SNAPSHOT}}
> {{[ERROR] [WARNING] 'build.plugins.plugin.version' for 
> org.codehaus.mojo:flatten-maven-plugin is missing. @}}
> {{[ERROR] [ERROR] 'dependencies.dependency.version' for 
> org.apache.hadoop:hadoop-mapreduce-client-core:jar is missing. @}}
> {{[ERROR] [ERROR] 'dependencies.dependency.version' for 
> javax.activation:javax.activation-api:jar is missing. @}}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Reopened] (HBASE-27267) Delete causes timestamp to be negative

2022-09-06 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang reopened HBASE-27267:
---

> Delete causes timestamp to be negative
> --
>
> Key: HBASE-27267
> URL: https://issues.apache.org/jira/browse/HBASE-27267
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.3.4
>Reporter: zhengsicheng
>Assignee: zhengsicheng
>Priority: Major
> Fix For: 3.0.0-alpha-1, 2.5.0, 2.4.5
>
> Attachments: image-2022-08-11-17-10-00-389.png, screenshot-1.png
>
>
> With client 1.1.6 and server 2.3.4, there is a case where the batch delete 
> timestamp is negative
> #  1. RegionServer log message:
> {code:java}
> 2022-07-19 12:13:29,324 WARN  
> [RS_OPEN_REGION-regionserver/HBASE-HOSTNAME1:16020-1.replicationSource.wal-reader.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.regiongroup-2,clusterB]
>  hbase.KeyValueUtil: Timestamp cannot be negative, ts=-4323977095312258207, 
> KeyValueBytesHex=\x00\x00\x00, offset=0, length=40
> 2022-07-19 12:13:29,324 WARN  
> [RS_OPEN_REGION-regionserver/HBASE-HOSTNAME1:16020-1.replicationSource.wal-reader.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.regiongroup-2,clusterB]
>  wal.ProtobufLogReader: Encountered a malformed edit, seeking back to last 
> good position in file, from 1099261 to 1078224
> java.io.EOFException: EOF  while reading 660 WAL KVs; started reading at 
> 1078317 and read up to 1099261
> at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:403)
> at 
> org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:97)
> at 
> org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:85)
> at 
> org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.readNextEntryAndRecordReaderPosition(WALEntryStream.java:264)
> at 
> org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.tryAdvanceEntry(WALEntryStream.java:178)
> at 
> org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.hasNext(WALEntryStream.java:103)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.readWALEntries(ReplicationSourceWALReader.java:230)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:145)
> Caused by: java.lang.IllegalArgumentException: Timestamp cannot be negative, 
> ts=-4323977095312258207, KeyValueBytesHex=\x00\x00\x00, offset=0, length=40
> at 
> org.apache.hadoop.hbase.KeyValueUtil.checkKeyValueBytes(KeyValueUtil.java:612)
> at org.apache.hadoop.hbase.KeyValue.<init>(KeyValue.java:346)
> at 
> org.apache.hadoop.hbase.KeyValueUtil.createKeyValueFromInputStream(KeyValueUtil.java:717)
> at 
> org.apache.hadoop.hbase.codec.KeyValueCodecWithTags$KeyValueDecoder.parseCell(KeyValueCodecWithTags.java:81)
> at org.apache.hadoop.hbase.codec.BaseDecoder.advance(BaseDecoder.java:68)
> at org.apache.hadoop.hbase.wal.WALEdit.readFromCells(WALEdit.java:276)
> at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:387)
> ... 7 more
> {code}
> # 2. Debugging the WAL file, we found that a delete operation is the cause
> {code:java}
> Sequence=365693989, table=tableA, region=148cedb7b8ca3145690800fd650e084d, at 
> write timestamp=Sat Jul 16 00:50:01 CST 2022
> 2022-07-22 22:09:43,244 ERROR [main] wal.WALPrettyPrinter: Timestamp is 
> negative row=rowkey1, column=d:act, timestamp=-4323977095312258207, 
> type=Delete
> {code}
> # 3. The user uses Spark to read/write HBase
> batchsize is 1
> {code:scala}
> def dataDeleteFromHbase(rdd: RDD[(String, String)], hbase_table: String, 
> hbase_instance: String, hbase_accesskey: String, accumulator: 
> LongAccumulator, buffersize: String, batchsize: Int): Unit = {
> rdd.foreachPartition(iterator => {
>   val partitionId = TaskContext.getPartitionId()
>   val conf = HBaseConfiguration.create()
>   val connection = SparkHbaseUtils.getconnection(conf)
>   val table = connection.getTable(TableName.valueOf(hbase_table))
>   var deleteList = new util.LinkedList[Delete]()
>   var count = 0
>   var batchCount = 0
>   while (iterator.hasNext) {
> val element = iterator.next
> val crc32 = new CRC32()
> crc32.update(s"${element._1}_${element._2}".getBytes())
> val crcArr = convertLow4bit2SmallEndan(crc32.getValue)
> val key = concat(DigestUtils.md5(s"${element._1}_${element._2}"), 
> crcArr)
> val delete = new Delete(key)
> deleteList.add(delete)
> count += 1
> if (count % batchsize.toInt == 0) {
>   batchCount = batchCount + 1
>   try {
>   

[jira] [Resolved] (HBASE-27267) Delete causes timestamp to be negative

2022-09-06 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang resolved HBASE-27267.
---
Fix Version/s: (was: 3.0.0-alpha-1)
   (was: 2.5.0)
   (was: 2.4.5)
 Assignee: (was: zhengsicheng)
   Resolution: Not A Problem

> Delete causes timestamp to be negative
> --
>
> Key: HBASE-27267
> URL: https://issues.apache.org/jira/browse/HBASE-27267
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.3.4
>Reporter: zhengsicheng
>Priority: Major
> Attachments: image-2022-08-11-17-10-00-389.png, screenshot-1.png
>
>
> With client 1.1.6 and server 2.3.4, there is a case where the batch delete 
> timestamp is negative
> #  1. RegionServer log message:
> {code:java}
> 2022-07-19 12:13:29,324 WARN  
> [RS_OPEN_REGION-regionserver/HBASE-HOSTNAME1:16020-1.replicationSource.wal-reader.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.regiongroup-2,clusterB]
>  hbase.KeyValueUtil: Timestamp cannot be negative, ts=-4323977095312258207, 
> KeyValueBytesHex=\x00\x00\x00, offset=0, length=40
> 2022-07-19 12:13:29,324 WARN  
> [RS_OPEN_REGION-regionserver/HBASE-HOSTNAME1:16020-1.replicationSource.wal-reader.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.HBASE-HOSTNAME1.local%2C16020%2C1657184880284.regiongroup-2,clusterB]
>  wal.ProtobufLogReader: Encountered a malformed edit, seeking back to last 
> good position in file, from 1099261 to 1078224
> java.io.EOFException: EOF  while reading 660 WAL KVs; started reading at 
> 1078317 and read up to 1099261
> at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:403)
> at 
> org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:97)
> at 
> org.apache.hadoop.hbase.regionserver.wal.ReaderBase.next(ReaderBase.java:85)
> at 
> org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.readNextEntryAndRecordReaderPosition(WALEntryStream.java:264)
> at 
> org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.tryAdvanceEntry(WALEntryStream.java:178)
> at 
> org.apache.hadoop.hbase.replication.regionserver.WALEntryStream.hasNext(WALEntryStream.java:103)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.readWALEntries(ReplicationSourceWALReader.java:230)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceWALReader.run(ReplicationSourceWALReader.java:145)
> Caused by: java.lang.IllegalArgumentException: Timestamp cannot be negative, 
> ts=-4323977095312258207, KeyValueBytesHex=\x00\x00\x00, offset=0, length=40
> at 
> org.apache.hadoop.hbase.KeyValueUtil.checkKeyValueBytes(KeyValueUtil.java:612)
> at org.apache.hadoop.hbase.KeyValue.<init>(KeyValue.java:346)
> at 
> org.apache.hadoop.hbase.KeyValueUtil.createKeyValueFromInputStream(KeyValueUtil.java:717)
> at 
> org.apache.hadoop.hbase.codec.KeyValueCodecWithTags$KeyValueDecoder.parseCell(KeyValueCodecWithTags.java:81)
> at org.apache.hadoop.hbase.codec.BaseDecoder.advance(BaseDecoder.java:68)
> at org.apache.hadoop.hbase.wal.WALEdit.readFromCells(WALEdit.java:276)
> at 
> org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader.readNext(ProtobufLogReader.java:387)
> ... 7 more
> {code}
> # 2. Debugging the WAL file, we found that a delete operation is the cause
> {code:java}
> Sequence=365693989, table=tableA, region=148cedb7b8ca3145690800fd650e084d, at 
> write timestamp=Sat Jul 16 00:50:01 CST 2022
> 2022-07-22 22:09:43,244 ERROR [main] wal.WALPrettyPrinter: Timestamp is 
> negative row=rowkey1, column=d:act, timestamp=-4323977095312258207, 
> type=Delete
> {code}
> # 3. The user uses Spark to read/write HBase
> batchsize is 1
> {code:scala}
> def dataDeleteFromHbase(rdd: RDD[(String, String)], hbase_table: String, 
> hbase_instance: String, hbase_accesskey: String, accumulator: 
> LongAccumulator, buffersize: String, batchsize: Int): Unit = {
> rdd.foreachPartition(iterator => {
>   val partitionId = TaskContext.getPartitionId()
>   val conf = HBaseConfiguration.create()
>   val connection = SparkHbaseUtils.getconnection(conf)
>   val table = connection.getTable(TableName.valueOf(hbase_table))
>   var deleteList = new util.LinkedList[Delete]()
>   var count = 0
>   var batchCount = 0
>   while (iterator.hasNext) {
> val element = iterator.next
> val crc32 = new CRC32()
> crc32.update(s"${element._1}_${element._2}".getBytes())
> val crcArr = convertLow4bit2SmallEndan(crc32.getValue)
> val key = concat(DigestUtils.md5(s"${element._1}_${element._2}"), 
> crcArr)
> val delete = new Delete(key)
> deleteList.add(delete)
> count += 1
>

[jira] [Updated] (HBASE-27340) Artifacts with resolved profiles

2022-09-06 Thread Michael Stack (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Stack updated HBASE-27340:
--
Environment: Published poms now contain runtime dependencies only; build 
and test time dependencies are stripped. Profiles are also now resolved and 
in-lined at publish time. This removes the need/ability of downstreamers 
shaping hbase dependencies via enable/disable of hbase profile settings 
(Implication is that now the hbase project publishes artifacts for hadoop2 and 
for hadoop3, and so on).

> Artifacts with resolved profiles
> 
>
> Key: HBASE-27340
> URL: https://issues.apache.org/jira/browse/HBASE-27340
> Project: HBase
>  Issue Type: Brainstorming
>  Components: build, pom
> Environment: Published poms now contain runtime dependencies only; 
> build and test time dependencies are stripped. Profiles are also now resolved 
> and in-lined at publish time. This removes the need/ability of downstreamers 
> shaping hbase dependencies via enable/disable of hbase profile settings 
> (Implication is that now the hbase project publishes artifacts for hadoop2 
> and for hadoop3, and so on).
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Minor
> Fix For: 2.6.0, 2.5.1, 3.0.0-alpha-4
>
>
> Brainstorming/Discussion. The maven-flatten-plugin makes it so published poms 
> are 'flattened'. The poms contain the runtime-necessary dependencies only, 
> 'build' and 'test' dependencies and plugins are dropped, versions are 
> resolved out of properties, and so on. The published poms are the barebones 
> minimum needed to run.
> With a switch, the plugin can also make it so the produced poms have all 
> profiles 'resolved' – making it so the produced poms have all resolved 
> hadoop2 or hadoop3 dependencies baked-in – based off which profile we used 
> building.
> (I've been interested in this flattening technique since I ran into a 
> downstreamer using hbase from a gradle build. Gradle does not respect 
> profiles. You can't specify that the gradle build pull in hbase with hadoop3 
> dependencies using 'profiles'. I notice too our [~gjacoby] , [~apurtell] et 
> al. up on the dev list talking about making a hadoop3 set of artifacts...who 
> might be interested in this direction).
> The attached patch adds the flatten plugin so folks can take a look-see. It 
> uncovers some locations where our versioning on dependencies is not explicit. 
> The workaround practiced here was adding hadoop2/hadoop3 profiles into 
> sub-modules that were missing them or moving problematic dependencies that 
> were outside of profiles under profiles in sub-modules that had them already. 
> For the latter, if the dependency specified excludes, the excludes were moved 
> up to the parent pom profile (parent pom profiles have dependencyManagement 
> sections... sub-modules have explicit dependency mentions... checks with 
> dependency:tree seem to show excludes continue to be effective).
> This is the switch that flattens profiles:   
> true
> This is the sort of complaint we had when the flatten plugin was having 
> trouble figuring out dependency versions – particularly hadoop versions
> {{[ERROR] Failed to execute goal 
> org.codehaus.mojo:flatten-maven-plugin:1.3.0:flatten (flatten) on project 
> hbase-hadoop2-compat: 3 problems were encountered while building the 
> effective model for org.apache.hbase:hbase-hadoop2-compat:2.5.1-SNAPSHOT}}
> {{[ERROR] [WARNING] 'build.plugins.plugin.version' for 
> org.codehaus.mojo:flatten-maven-plugin is missing. @}}
> {{[ERROR] [ERROR] 'dependencies.dependency.version' for 
> org.apache.hadoop:hadoop-mapreduce-client-core:jar is missing. @}}
> {{[ERROR] [ERROR] 'dependencies.dependency.version' for 
> javax.activation:javax.activation-api:jar is missing. @}}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-27359) Publish binaries and maven artifacts for both hadoop2 and hadoop3

2022-09-06 Thread Duo Zhang (Jira)
Duo Zhang created HBASE-27359:
-

 Summary: Publish binaries and maven artifacts for both hadoop2 and 
hadoop3
 Key: HBASE-27359
 URL: https://issues.apache.org/jira/browse/HBASE-27359
 Project: HBase
  Issue Type: Improvement
  Components: build, community, pom
Reporter: Duo Zhang


As per the discussion in this thread

https://lists.apache.org/thread/y05gspk4mnxsz6nk7hc5ots8wt50366b

And this is possible after we land HBASE-27340.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HBASE-27359) Publish binaries and maven artifacts for both hadoop2 and hadoop3

2022-09-06 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-27359:
--
Component/s: scripts

> Publish binaries and maven artifacts for both hadoop2 and hadoop3
> -
>
> Key: HBASE-27359
> URL: https://issues.apache.org/jira/browse/HBASE-27359
> Project: HBase
>  Issue Type: Improvement
>  Components: build, community, pom, scripts
>Reporter: Duo Zhang
>Priority: Major
>
> As per the discussion in this thread
> https://lists.apache.org/thread/y05gspk4mnxsz6nk7hc5ots8wt50366b
> And this is possible after we land HBASE-27340.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [hbase] huaxiangsun commented on a diff in pull request #4664: HBASE-27250 MasterRpcService#setRegionStateInMeta does not support re…

2022-09-06 Thread GitBox


huaxiangsun commented on code in PR #4664:
URL: https://github.com/apache/hbase/pull/4664#discussion_r963900209


##
hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java:
##
@@ -2479,32 +2481,42 @@ public SetRegionStateInMetaResponse setRegionStateInMeta(RpcController controlle
     for (RegionSpecifierAndState s : request.getStatesList()) {
       RegionSpecifier spec = s.getRegionSpecifier();
       String encodedName;
+      RegionInfo info;
+      int replicaId;
       if (spec.getType() == RegionSpecifierType.ENCODED_REGION_NAME) {
-        encodedName = spec.getValue().toStringUtf8();
+        info = this.server.getAssignmentManager()
+          .getRegionInfoFromEncodedRegionName(spec.getValue().toStringUtf8());
       } else {
         // TODO: actually, a full region name can save a lot on meta scan, improve later.
-        encodedName = RegionInfo.encodeRegionName(spec.getValue().toByteArray());
+        info = CatalogFamilyFormat.parseRegionInfoFromRegionName(spec.getValue().toByteArray());
       }
-      RegionInfo info = this.server.getAssignmentManager().loadRegionFromMeta(encodedName);
-      LOG.trace("region info loaded from meta table: {}", info);
+      replicaId = info.getReplicaId();
+      LOG.trace("region info", info);
       RegionState prevState =
         this.server.getAssignmentManager().getRegionStates().getRegionState(info);
       RegionState.State newState = RegionState.State.convert(s.getState());
       LOG.info("{} set region={} state from {} to {}", server.getClientIdAuditPrefix(), info,
         prevState.getState(), newState);
-      Put metaPut =
-        MetaTableAccessor.makePutFromRegionInfo(info, EnvironmentEdgeManager.currentTime());
-      metaPut.addColumn(HConstants.CATALOG_FAMILY, HConstants.STATE_QUALIFIER,
-        Bytes.toBytes(newState.name()));
-      List<Put> putList = new ArrayList<>();
-      putList.add(metaPut);
-      MetaTableAccessor.putsToMetaTable(this.server.getConnection(), putList);
-      // Loads from meta again to refresh AM cache with the new region state
-      this.server.getAssignmentManager().loadRegionFromMeta(encodedName);
-      builder.addStates(RegionSpecifierAndState.newBuilder().setRegionSpecifier(spec)
-        .setState(prevState.getState().convert()));
+      // If state does not change, no need to set.
+      if (prevState.getState() != newState) {
+        if (replicaId > RegionInfo.DEFAULT_REPLICA_ID) {
+          // If it is a non-primary replica region, use primary region as the key.
+          info = RegionInfoBuilder.newBuilder(info).setReplicaId(RegionInfo.DEFAULT_REPLICA_ID)

Review Comment:
   Sorry for coming back late. prevState is from AM. I added a sanity check to 
make sure that the passed in regionName or encodedName is valid.
   
   ```
   if (prevState == null) {
     throw new ServiceException("Region " + info + " does not exist");
   }
   
   ``` 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (HBASE-27340) Artifacts with resolved profiles

2022-09-06 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-27340:
--
Release Note: Published poms now contain runtime dependencies only; build 
and test time dependencies are stripped. Profiles are also now resolved and 
in-lined at publish time. This removes the need/ability of downstreamers 
shaping hbase dependencies via enable/disable of hbase profile settings 
(Implication is that now the hbase project publishes artifacts for hadoop2 and 
for hadoop3, and so on).
 Environment: (was: Published poms now contain runtime dependencies 
only; build and test time dependencies are stripped. Profiles are also now 
resolved and in-lined at publish time. This removes the need/ability of 
downstreamers shaping hbase dependencies via enable/disable of hbase profile 
settings (Implication is that now the hbase project publishes artifacts for 
hadoop2 and for hadoop3, and so on).)

> Artifacts with resolved profiles
> 
>
> Key: HBASE-27340
> URL: https://issues.apache.org/jira/browse/HBASE-27340
> Project: HBase
>  Issue Type: Brainstorming
>  Components: build, pom
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Minor
> Fix For: 2.6.0, 2.5.1, 3.0.0-alpha-4
>
>
> Brainstorming/Discussion. The maven-flatten-plugin makes it so published poms 
> are 'flattened'. The poms contain the runtime-necessary dependencies only, 
> 'build' and 'test' dependencies and plugins are dropped, versions are 
> resolved out of properties, and so on. The published poms are the barebones 
> minimum needed to run.
> With a switch, the plugin can also make it so the produced poms have all 
> profiles 'resolved' – making it so the produced poms have all resolved 
> hadoop2 or hadoop3 dependencies baked-in – based off which profile we used 
> building.
> (I've been interested in this flattening technique since I ran into a 
> downstreamer using hbase from a gradle build. Gradle does not respect 
> profiles. You can't specify that the gradle build pull in hbase with hadoop3 
> dependencies using 'profiles'. I notice too our [~gjacoby] , [~apurtell] et 
> al. up on the dev list talking about making a hadoop3 set of artifacts...who 
> might be interested in this direction).
> The attached patch adds the flatten plugin so folks can take a look-see. It 
> uncovers some locations where our versioning on dependencies is not explicit. 
> The workaround practiced here was adding hadoop2/hadoop3 profiles into 
> sub-modules that were missing them or moving problematic dependencies that 
> were outside of profiles under profiles in sub-modules that had them already. 
> For the latter, if the dependency specified excludes, the excludes were moved 
> up to the parent pom profile (parent pom profiles have dependencyManagement 
> sections... sub-modules have explicit dependency mentions... checks with 
> dependency:tree seem to show excludes continue to be effective).
> This is the switch that flattens profiles:   
> true
> This is the sort of complaint we had when the flatten plugin was having 
> trouble figuring out dependency versions – particularly hadoop versions:
> {{[ERROR] Failed to execute goal 
> org.codehaus.mojo:flatten-maven-plugin:1.3.0:flatten (flatten) on project 
> hbase-hadoop2-compat: 3 problems were encountered while building the 
> effective model for org.apache.hbase:hbase-hadoop2-compat:2.5.1-SNAPSHOT}}
> {{[ERROR] [WARNING] 'build.plugins.plugin.version' for 
> org.codehaus.mojo:flatten-maven-plugin is missing. @}}
> {{[ERROR] [ERROR] 'dependencies.dependency.version' for 
> org.apache.hadoop:hadoop-mapreduce-client-core:jar is missing. @}}
> {{[ERROR] [ERROR] 'dependencies.dependency.version' for 
> javax.activation:javax.activation-api:jar is missing. @}}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-27340) Artifacts with resolved profiles

2022-09-06 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17600870#comment-17600870
 ] 

Michael Stack commented on HBASE-27340:
---

(Thanks for fixing 'Environment' vs 'Release Note' [~zhangduo] )

> Artifacts with resolved profiles
> 
>
> Key: HBASE-27340
> URL: https://issues.apache.org/jira/browse/HBASE-27340
> Project: HBase
>  Issue Type: Brainstorming
>  Components: build, pom
>Reporter: Michael Stack
>Assignee: Michael Stack
>Priority: Minor
> Fix For: 2.6.0, 2.5.1, 3.0.0-alpha-4
>
>
> Brainstorming/Discussion. The maven-flatten-plugin makes it so published poms 
> are 'flattened'. The poms contain the runtime-necessary dependencies only, 
> 'build' and 'test' dependencies and plugins are dropped, versions are 
> resolved out of properties, and so on. The published poms are the barebones 
> minimum needed to run.
> With a switch, the plugin can also make it so the produced poms have all 
> profiles 'resolved' – making it so the produced poms have all resolved 
> hadoop2 or hadoop3 dependencies baked-in – based off which profile we used 
> building.
> (I've been interested in this flattening technique since I ran into a 
> downstreamer using hbase from a gradle build. Gradle does not respect 
> profiles. You can't specify that the gradle build pull in hbase with hadoop3 
> dependencies using 'profiles'. I notice too our [~gjacoby] , [~apurtell] et 
> al. up on the dev list talking about making a hadoop3 set of artifacts...who 
> might be interested in this direction).
> The attached patch adds the flatten plugin so folks can take a look-see. It 
> uncovers some locations where our versioning on dependencies is not explicit. 
> The workaround practiced here was adding hadoop2/hadoop3 profiles into 
> sub-modules that were missing them or moving problematic dependencies that 
> were outside of profiles under profiles in sub-modules that had them already. 
> For the latter, if the dependency specified excludes, the excludes were moved 
> up to the parent pom profile (parent pom profiles have dependencyManagement 
> sections... sub-modules have explicit dependency mentions... checks with 
> dependency:tree seem to show excludes continue to be effective).
> This is the switch that flattens profiles:   
> true
> This is the sort of complaint we had when the flatten plugin was having 
> trouble figuring out dependency versions – particularly hadoop versions:
> {{[ERROR] Failed to execute goal 
> org.codehaus.mojo:flatten-maven-plugin:1.3.0:flatten (flatten) on project 
> hbase-hadoop2-compat: 3 problems were encountered while building the 
> effective model for org.apache.hbase:hbase-hadoop2-compat:2.5.1-SNAPSHOT}}
> {{[ERROR] [WARNING] 'build.plugins.plugin.version' for 
> org.codehaus.mojo:flatten-maven-plugin is missing. @}}
> {{[ERROR] [ERROR] 'dependencies.dependency.version' for 
> org.apache.hadoop:hadoop-mapreduce-client-core:jar is missing. @}}
> {{[ERROR] [ERROR] 'dependencies.dependency.version' for 
> javax.activation:javax.activation-api:jar is missing. @}}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [hbase] Apache-HBase commented on pull request #4664: HBASE-27250 MasterRpcService#setRegionStateInMeta does not support re…

2022-09-06 Thread GitBox


Apache-HBase commented on PR #4664:
URL: https://github.com/apache/hbase/pull/4664#issuecomment-1238399656

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 50s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   4m 53s |  master passed  |
   | +1 :green_heart: |  compile  |   2m 23s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  master passed  |
   | +1 :green_heart: |  spotless  |   0m 38s |  branch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   1m 16s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   2m  8s |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 14s |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 14s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |   8m  4s |  Patch does not cause any 
errors with Hadoop 3.2.4 3.3.4.  |
   | +1 :green_heart: |  spotless  |   0m 36s |  patch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   1m 19s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m  9s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  30m 22s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4664/3/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4664 |
   | Optional Tests | dupname asflicense javac spotbugs hadoopcheck hbaseanti 
spotless checkstyle compile |
   | uname | Linux 45b566247882 5.4.0-1081-aws #88~18.04.1-Ubuntu SMP Thu Jun 
23 16:29:17 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 1b5403cf7d |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | Max. process+thread count | 64 (vs. ulimit of 3) |
   | modules | C: hbase-server U: hbase-server |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4664/3/console 
|
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] Apache-HBase commented on pull request #4726: HBASE-27313: Persist list of Hfiles names for which prefetch is done

2022-09-06 Thread GitBox


Apache-HBase commented on PR #4726:
URL: https://github.com/apache/hbase/pull/4726#issuecomment-1238485127

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 41s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +0 :ok: |  prototool  |   0m  0s |  prototool was not available.  |
   | +1 :green_heart: |  hbaseanti  |   0m  0s |  Patch does not have any 
anti-patterns.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   ||| _ master Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 49s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |   2m 14s |  master passed  |
   | +1 :green_heart: |  compile  |   3m  5s |  master passed  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  master passed  |
   | +1 :green_heart: |  spotless  |   0m 36s |  branch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   3m 53s |  master passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m  9s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  4s |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m  8s |  the patch passed  |
   | +1 :green_heart: |  cc  |   3m  8s |  the patch passed  |
   | +1 :green_heart: |  javac  |   3m  8s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  hadoopcheck  |   7m 51s |  Patch does not cause any 
errors with Hadoop 3.2.4 3.3.4.  |
   | +1 :green_heart: |  hbaseprotoc  |   1m  5s |  the patch passed  |
   | +1 :green_heart: |  spotless  |   0m 36s |  patch has no errors when 
running spotless:check.  |
   | +1 :green_heart: |  spotbugs  |   4m  7s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 14s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  37m  9s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4726/6/artifact/yetus-general-check/output/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/4726 |
   | JIRA Issue | HBASE-27313 |
   | Optional Tests | dupname asflicense cc hbaseprotoc spotless prototool 
javac spotbugs hadoopcheck hbaseanti checkstyle compile |
   | uname | Linux 282e1f17db19 5.4.0-1081-aws #88~18.04.1-Ubuntu SMP Thu Jun 
23 16:29:17 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/hbase-personality.sh |
   | git revision | master / 1b5403cf7d |
   | Default Java | AdoptOpenJDK-1.8.0_282-b08 |
   | Max. process+thread count | 64 (vs. ulimit of 3) |
   | modules | C: hbase-protocol-shaded hbase-server U: . |
   | Console output | 
https://ci-hbase.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-4726/6/console 
|
   | versions | git=2.17.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (HBASE-27280) Add mutual authentication support to TLS

2022-09-06 Thread Bryan Beaudreault (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Beaudreault updated HBASE-27280:
--
Labels: patch-available security ssl tls  (was: patch-available)

> Add mutual authentication support to TLS
> 
>
> Key: HBASE-27280
> URL: https://issues.apache.org/jira/browse/HBASE-27280
> Project: HBase
>  Issue Type: Improvement
>Reporter: Bryan Beaudreault
>Assignee: Bryan Beaudreault
>Priority: Major
>  Labels: patch-available, security, ssl, tls
> Fix For: 2.6.0
>
>
> With HBASE-2 we now have native TLS on server and client. By default, 
> clients validate the server certificate on handshake. This issue adds server 
> authentication of clients. We can also add support for custom rules, such as 
> cert CommonName validation.
> I've already got a POC running of this, so assigning to me.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HBASE-27326) Add validation of request user and groups from TLS certificate

2022-09-06 Thread Bryan Beaudreault (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Beaudreault updated HBASE-27326:
--
Labels: security ssl tls  (was: )

> Add validation of request user and groups from TLS certificate
> --
>
> Key: HBASE-27326
> URL: https://issues.apache.org/jira/browse/HBASE-27326
> Project: HBase
>  Issue Type: Improvement
>Reporter: Bryan Beaudreault
>Assignee: Bryan Beaudreault
>Priority: Major
>  Labels: security, ssl, tls
> Fix For: 2.6.0
>
>
> When using mTLS for client authentication, we can allow the user to configure 
> certain certificate fields as a means for validating the passed username on 
> the ConnectionHeader. We can further look to inject groups for the user into 
> the request context, which can be used for downstream authz in (for example) 
> AuthManager/AccessChecker/etc.
> I would propose two new configs:
> {code:java}
> <property>
>   <name>hbase.rpc.tls.certificate.username.oid</name>
>   <description>
>     When specified and TLS enabled, the client's SSL certificate will be
>     inspected for an OID of this value. A value must be found and the value
>     must match the username passed in the ConnectionHeader. For example, can
>     be set to "CN" and we will use the CommonName of the certificate to
>     validate the username.
>   </description>
> </property>
> <property>
>   <name>hbase.rpc.tls.certificate.group.oid</name>
>   <description>
>     When specified and TLS enabled, the client's SSL certificate will be
>     inspected for OIDs of this value. If one or more values are found, they
>     will be used as the user's groups for use in hbase authz.
>   </description>
> </property>
> {code}
> I think this would only apply when AuthenticationMethod is SIMPLE (no 
> kerberos).
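
For illustration only, below is a minimal sketch of how a username could be derived from the client certificate along the lines described above, using only standard JDK APIs. The class name, method names, and the choice of CN as the username field are assumptions for the example, not the actual patch:

{code:java}
import java.security.cert.X509Certificate;
import javax.naming.InvalidNameException;
import javax.naming.ldap.LdapName;
import javax.naming.ldap.Rdn;

/** Illustration: derive a username from the client certificate subject and check it. */
public final class CertificateUsernameSketch {

  /** Returns the first CN value found in the certificate subject DN, or null if absent. */
  public static String extractCommonName(X509Certificate clientCert) throws InvalidNameException {
    LdapName subject = new LdapName(clientCert.getSubjectX500Principal().getName());
    for (Rdn rdn : subject.getRdns()) {
      if ("CN".equalsIgnoreCase(rdn.getType())) {
        return rdn.getValue().toString();
      }
    }
    return null;
  }

  /** Rejects the request if the ConnectionHeader username does not match the certificate CN. */
  public static void validate(X509Certificate clientCert, String headerUsername)
      throws InvalidNameException {
    String cn = extractCommonName(clientCert);
    if (cn == null || !cn.equals(headerUsername)) {
      throw new SecurityException("ConnectionHeader username '" + headerUsername
        + "' does not match certificate CN '" + cn + "'");
    }
  }

  private CertificateUsernameSketch() {
  }
}
{code}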



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [hbase-filesystem] apurtell commented on a diff in pull request #35: HBASE-26483. [HBOSS] add support for createFile() and openFile(path)

2022-09-06 Thread GitBox


apurtell commented on code in PR #35:
URL: https://github.com/apache/hbase-filesystem/pull/35#discussion_r964056519


##
hbase-oss/src/test/resources/log4j.properties:
##
@@ -18,4 +18,7 @@ log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
 log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} [%t] %-5p %c{2} (%F:%M(%L)) - %m%n
 log4j.logger.org.apache.hadoop=DEBUG
 log4j.logger.org.apache.hadoop.metrics2=WARN
-log4j.logger.org.apache.hadoop.fs=WARN
\ No newline at end of file
+log4j.logger.org.apache.hadoop.fs=WARN
+log4j.org.apache.hadoop.util=WARN
+log4j.logger.org.apache.hadoop.fs.s3a=INFO
+log4j.logger.org.apache.hadoop.hbase.oss=DEBUG

Review Comment:
   The DEBUG setting for log4j.logger.org.apache.hadoop encompasses this 
already, but it's fine.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase-filesystem] apurtell commented on pull request #35: HBASE-26483. [HBOSS] add support for createFile() and openFile(path)

2022-09-06 Thread GitBox


apurtell commented on PR #35:
URL: https://github.com/apache/hbase-filesystem/pull/35#issuecomment-1238533005

   lgtm
   
   Some new implementation complexity but it provides a useful wrapper.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hbase] bbeaudreault merged pull request #4638: HBASE-27224 HFile tool statistic sampling produces misleading results

2022-09-06 Thread GitBox


bbeaudreault merged PR #4638:
URL: https://github.com/apache/hbase/pull/4638


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (HBASE-27224) HFile tool statistic sampling produces misleading results

2022-09-06 Thread Bryan Beaudreault (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Beaudreault updated HBASE-27224:
--
Fix Version/s: 2.5.1
   3.0.0-alpha-4
 Release Note: Fixes HFilePrettyPrinter's calculation of min and max size 
for an HFile so that it will truly be the min and max for the whole file. 
Previously it was based on just a sampling, as with the histograms. Additionally 
adds a new argument to the tool '-d' which prints detailed range counts for 
each summary. The range counts give you the exact count of rows/cells that fall 
within the pre-defined ranges, useful for giving more detailed insight into 
outliers.
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Thanks for review [~clayb] and [~zhangduo]!

> HFile tool statistic sampling produces misleading results
> -
>
> Key: HBASE-27224
> URL: https://issues.apache.org/jira/browse/HBASE-27224
> Project: HBase
>  Issue Type: Improvement
>Reporter: Bryan Beaudreault
>Assignee: Bryan Beaudreault
>Priority: Major
>  Labels: patch-available
> Fix For: 2.5.1, 3.0.0-alpha-4
>
>
> HFile tool uses codahale metrics for collecting statistics about key/values 
> in an HFile. We recently had a case where the statistics printed out that the 
> max row size was only 25k. This was confusing because I was seeing bucket 
> cache allocation failures for blocks as large as 1.5mb. 
> Digging in, I was able to find the large row using the "-p" argument (which 
> was obviously very verbose). Once I found the row, I saw the vlen was listed 
> as ~1.5mb which made much more sense.
> First thing I notice here is that default codahale metrics histogram is using 
> ExponentiallyDecayingReservoir. This probably makes sense for a long-lived 
> histogram, but the HFile tool is run at a point in time. It might be best to 
> use UniformReservoir instead.
> Secondly, we do not need sampling for min/max. Let's supplement the histogram 
> with our own calculation which is guaranteed to be accurate for the entirety 
> of the file.
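
As a rough sketch of the reservoir point above, assuming the Dropwizard/codahale metrics API (the class below is illustrative, not the actual HFilePrettyPrinter code), the histogram can keep using a reservoir for percentiles while min/max are tracked exactly:

{code:java}
import com.codahale.metrics.Histogram;
import com.codahale.metrics.UniformReservoir;

public class KeyValueStatsSketch {
  // UniformReservoir samples uniformly over all updates, which suits a
  // point-in-time tool better than the default ExponentiallyDecayingReservoir.
  private final Histogram valueSizes = new Histogram(new UniformReservoir());
  private long min = Long.MAX_VALUE;
  private long max = Long.MIN_VALUE;

  public void update(long valueLength) {
    valueSizes.update(valueLength);   // sampled, used for percentiles
    min = Math.min(min, valueLength); // exact, never subject to sampling
    max = Math.max(max, valueLength);
  }

  public long getMin() {
    return min;
  }

  public long getMax() {
    return max;
  }
}
{code}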



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [hbase] bbeaudreault merged pull request #4757: HBASE-27346 Autodetect key/truststore file type from file extension

2022-09-06 Thread GitBox


bbeaudreault merged PR #4757:
URL: https://github.com/apache/hbase/pull/4757


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (HBASE-27346) Autodetect key/truststore file type from file extension

2022-09-06 Thread Bryan Beaudreault (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bryan Beaudreault resolved HBASE-27346.
---
Fix Version/s: 2.6.0
   3.0.0-alpha-4
   Resolution: Fixed

Pushed to master and branch-2. Thanks [~andor]!

> Autodetect key/truststore file type from file extension
> ---
>
> Key: HBASE-27346
> URL: https://issues.apache.org/jira/browse/HBASE-27346
> Project: HBase
>  Issue Type: Improvement
>  Components: security
>Reporter: Andor Molnar
>Assignee: Andor Molnar
>Priority: Major
>  Labels: SSL, TLS, security
> Fix For: 2.6.0, 3.0.0-alpha-4
>
>
> Noticed that file type autodetection hasn't been properly ported from 
> ZooKeeper although the comment says otherwise.
> Instead of defaulting to JKS we should check the file extension.
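
For context, a minimal sketch of extension-based detection with the standard java.security.KeyStore API; the class and method names are illustrative, and the actual HBase change may differ in details such as the set of recognized extensions:

{code:java}
import java.io.FileInputStream;
import java.io.IOException;
import java.security.GeneralSecurityException;
import java.security.KeyStore;
import java.util.Locale;

public final class StoreTypeDetectorSketch {

  /** Guesses the store type from the file extension, falling back to the JVM default. */
  public static String detectType(String path) {
    String lower = path.toLowerCase(Locale.ROOT);
    if (lower.endsWith(".jks")) {
      return "JKS";
    } else if (lower.endsWith(".p12") || lower.endsWith(".pfx")) {
      return "PKCS12";
    }
    return KeyStore.getDefaultType();
  }

  /** Loads the key/truststore using the detected type. */
  public static KeyStore load(String path, char[] password)
      throws IOException, GeneralSecurityException {
    KeyStore store = KeyStore.getInstance(detectType(path));
    try (FileInputStream in = new FileInputStream(path)) {
      store.load(in, password);
    }
    return store;
  }

  private StoreTypeDetectorSketch() {
  }
}
{code}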



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HBASE-27181) Change HBCK2's setRegionState() to use HBCK's setRegionStateInMeta()

2022-09-06 Thread Huaxiang Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huaxiang Sun updated HBASE-27181:
-
Summary: Change HBCK2's setRegionState() to use HBCK's 
setRegionStateInMeta()  (was: Replica region support in HBCK2 setRegionState 
option)

> Change HBCK2's setRegionState() to use HBCK's setRegionStateInMeta()
> 
>
> Key: HBASE-27181
> URL: https://issues.apache.org/jira/browse/HBASE-27181
> Project: HBase
>  Issue Type: Improvement
>  Components: hbck2
>Affects Versions: 2.4.13
>Reporter: Huaxiang Sun
>Assignee: Huaxiang Sun
>Priority: Minor
>
> Replica region id is not recognized by hbck2's setRegionState as it does not 
> show up in meta. We ran into cases where we needed to set the region state in 
> meta for replica regions in order to fix inconsistencies. We ended up writing 
> the state manually into the meta table and did a master failover to sync the 
> state from the meta table. 
>  
> hbck2's setRegionState needs to support replica region ids and handle them 
> nicely.
> Currently, setRegionState does not use 
> MasterRpcServices#setRegionStateInMeta. There is an issue with 
> setRegionStateInMeta's support for replica regions. After that is fixed, and 
> setRegionState uses setRegionStateInMeta to set the region state, it will 
> support replica ids.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-27181) Change HBCK2's setRegionState() to use HBCK's setRegionStateInMeta()

2022-09-06 Thread Huaxiang Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17601036#comment-17601036
 ] 

Huaxiang Sun commented on HBASE-27181:
--

Instead of the current implementation's writing to the meta table directly, this 
changes it to use HBCK's setRegionStateInMeta(). There are a couple of 
advantages; one of them is that HBCK's setRegionStateInMeta() sets the Master's 
in-memory region state in addition to the state in the meta table, which saves 
an active master switchover to bring the in-memory state into agreement with 
the region state in meta.

> Change HBCK2's setRegionState() to use HBCK's setRegionStateInMeta()
> 
>
> Key: HBASE-27181
> URL: https://issues.apache.org/jira/browse/HBASE-27181
> Project: HBase
>  Issue Type: Improvement
>  Components: hbck2
>Affects Versions: 2.4.13
>Reporter: Huaxiang Sun
>Assignee: Huaxiang Sun
>Priority: Minor
>
> Replica region id is not recognized by hbck2's setRegionState as it does not 
> show up in meta. We ran into cases where we needed to set the region state in 
> meta for replica regions in order to fix inconsistencies. We ended up writing 
> the state manually into the meta table and did a master failover to sync the 
> state from the meta table. 
>  
> hbck2's setRegionState needs to support replica region ids and handle them 
> nicely.
> Currently, setRegionState does not use 
> MasterRpcServices#setRegionStateInMeta. There is an issue with 
> setRegionStateInMeta's support for replica regions. After that is fixed, and 
> setRegionState uses setRegionStateInMeta to set the region state, it will 
> support replica ids.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-27332) Remove RejectedExecutionHandler for long/short compaction thread pools

2022-09-06 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17601071#comment-17601071
 ] 

Hudson commented on HBASE-27332:


Results for branch master
[build #675 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/675/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/675/General_20Nightly_20Build_20Report/]




(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/675/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(x) {color:red}-1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/master/675/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Remove RejectedExecutionHandler for long/short compaction thread pools
> --
>
> Key: HBASE-27332
> URL: https://issues.apache.org/jira/browse/HBASE-27332
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction
>Affects Versions: 3.0.0-alpha-3, 2.4.13
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Minor
> Fix For: 2.6.0, 2.5.1, 3.0.0-alpha-4, 2.4.15
>
>
> As discussed in https://github.com/apache/hbase/pull/4725
> actually the max size of StealJobQueue is bounded by the VM limit of an array, 
> and the OOM exception occurs before the rejection handler. So StealJobQueue 
> is effectively unbounded. I think the RejectionHandler may cause some 
> confusion and make the code a little puzzling.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [hbase] binlijin commented on pull request #4763: HBASE-27314 Make index block be customized and configured

2022-09-06 Thread GitBox


binlijin commented on PR #4763:
URL: https://github.com/apache/hbase/pull/4763#issuecomment-1238815159

   Master does not have the class "HColumnDescriptor.java"; branch-2 does, and 
that class needs to be changed there. That is the only difference.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (HBASE-27332) Remove RejectedExecutionHandler for long/short compaction thread pools

2022-09-06 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17601103#comment-17601103
 ] 

Hudson commented on HBASE-27332:


Results for branch branch-2.5
[build #202 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/202/]:
 (x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/202/General_20Nightly_20Build_20Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/202/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/202/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(x) {color:red}-1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2.5/202/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Remove RejectedExecutionHandler for long/short compaction thread pools
> --
>
> Key: HBASE-27332
> URL: https://issues.apache.org/jira/browse/HBASE-27332
> Project: HBase
>  Issue Type: Improvement
>  Components: Compaction
>Affects Versions: 3.0.0-alpha-3, 2.4.13
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Minor
> Fix For: 2.6.0, 2.5.1, 3.0.0-alpha-4, 2.4.15
>
>
> As discussed in https://github.com/apache/hbase/pull/4725
> actually the max size of StealJobQueue is bounded by the VM limit of an array, 
> and the OOM exception occurs before the rejection handler. So StealJobQueue 
> is effectively unbounded. I think the RejectionHandler may cause some 
> confusion and make the code a little puzzling.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [hbase] thangTang commented on pull request #4764: HBASE-27356 [JDK17] Add-opens java.util

2022-09-06 Thread GitBox


thangTang commented on PR #4764:
URL: https://github.com/apache/hbase/pull/4764#issuecomment-1238840129

   The failed UT seems unrelated.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@hbase.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (HBASE-27224) HFile tool statistic sampling produces misleading results

2022-09-06 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17601133#comment-17601133
 ] 

Hudson commented on HBASE-27224:


Results for branch branch-2
[build #638 on 
builds.a.o|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/638/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/638/General_20Nightly_20Build_20Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/638/JDK8_20Nightly_20Build_20Report_20_28Hadoop2_29/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/638/JDK8_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(x) {color:red}-1 jdk11 hadoop3 checks{color}
-- For more information [see jdk11 
report|https://ci-hbase.apache.org/job/HBase%20Nightly/job/branch-2/638/JDK11_20Nightly_20Build_20Report_20_28Hadoop3_29/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> HFile tool statistic sampling produces misleading results
> -
>
> Key: HBASE-27224
> URL: https://issues.apache.org/jira/browse/HBASE-27224
> Project: HBase
>  Issue Type: Improvement
>Reporter: Bryan Beaudreault
>Assignee: Bryan Beaudreault
>Priority: Major
>  Labels: patch-available
> Fix For: 2.5.1, 3.0.0-alpha-4
>
>
> HFile tool uses codahale metrics for collecting statistics about key/values 
> in an HFile. We recently had a case where the statistics printed out that the 
> max row size was only 25k. This was confusing because I was seeing bucket 
> cache allocation failures for blocks as large as 1.5mb. 
> Digging in, I was able to find the large row using the "-p" argument (which 
> was obviously very verbose). Once I found the row, I saw the vlen was listed 
> as ~1.5mb which made much more sense.
> First thing I notice here is that default codahale metrics histogram is using 
> ExponentiallyDecayingReservoir. This probably makes sense for a long-lived 
> histogram, but the HFile tool is run at a point in time. It might be best to 
> use UniformReservoir instead.
> Secondly, we do not need sampling for min/max. Let's supplement the histogram 
> with our own calculation which is guaranteed to be accurate for the entirety 
> of the file.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-27360) The trace related assertion is flaky for async client tests

2022-09-06 Thread Duo Zhang (Jira)
Duo Zhang created HBASE-27360:
-

 Summary: The trace related assertion is flaky for async client 
tests
 Key: HBASE-27360
 URL: https://issues.apache.org/jira/browse/HBASE-27360
 Project: HBase
  Issue Type: Bug
Reporter: Duo Zhang
Assignee: Duo Zhang


https://ci-hbase.apache.org/job/HBase-Flaky-Tests/job/master/4167/testReport/junit/org.apache.hadoop.hbase.client/TestAsyncTableScanner/testScanWrongColumnFamily_0__table_raw__scan_normal_/

The failure message is kinda unreadable... I guess the problem is that we do not 
wait long enough, as the completion of the span can be executed concurrently 
with the normal scan operation.

And also I saw this in the test code

{code}
// RawAsyncTableImpl never invokes the callback to `onScanMetricsCreated` 
-- bug?
{code}

This is not a bug as you need to manually enable scan metrics by calling 
Scan.setScanMetricsEnabled(true).

Let me also fix this.
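
For readers following along, a small client-side sketch of the scan metrics point, assuming the 2.x client API and an existing Table handle (illustrative only, not part of the fix):

{code:java}
import java.io.IOException;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.client.metrics.ScanMetrics;

public final class ScanMetricsSketch {
  public static void scanWithMetrics(Table table) throws IOException {
    Scan scan = new Scan();
    scan.setScanMetricsEnabled(true); // opt in; without this no scan metrics are collected
    try (ResultScanner scanner = table.getScanner(scan)) {
      for (Result result : scanner) {
        // consume results
      }
      ScanMetrics metrics = scanner.getScanMetrics();
      System.out.println("RPC calls: " + metrics.countOfRPCcalls.get());
    }
  }

  private ScanMetricsSketch() {
  }
}
{code}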





--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (HBASE-27360) The trace related assertion is flaky for async client tests

2022-09-06 Thread Duo Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HBASE-27360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17601148#comment-17601148
 ] 

Duo Zhang commented on HBASE-27360:
---

[~ndimiduk] FYI.

> The trace related assertion is flaky for async client tests
> ---
>
> Key: HBASE-27360
> URL: https://issues.apache.org/jira/browse/HBASE-27360
> Project: HBase
>  Issue Type: Bug
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
>
> https://ci-hbase.apache.org/job/HBase-Flaky-Tests/job/master/4167/testReport/junit/org.apache.hadoop.hbase.client/TestAsyncTableScanner/testScanWrongColumnFamily_0__table_raw__scan_normal_/
> The failure message is kinda unreadable... I guess the problem is that we do not 
> wait long enough, as the completion of the span can be executed concurrently 
> with the normal scan operation.
> And also I saw this in the test code
> {code}
> // RawAsyncTableImpl never invokes the callback to `onScanMetricsCreated` 
> -- bug?
> {code}
> This is not a bug as you need to manually enable scan metrics by calling 
> Scan.setScanMetricsEnabled(true).
> Let me also fix this.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Work started] (HBASE-27360) The trace related assertion is flaky for async client tests

2022-09-06 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-27360 started by Duo Zhang.
-
> The trace related assertion is flaky for async client tests
> ---
>
> Key: HBASE-27360
> URL: https://issues.apache.org/jira/browse/HBASE-27360
> Project: HBase
>  Issue Type: Bug
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
>
> https://ci-hbase.apache.org/job/HBase-Flaky-Tests/job/master/4167/testReport/junit/org.apache.hadoop.hbase.client/TestAsyncTableScanner/testScanWrongColumnFamily_0__table_raw__scan_normal_/
> The failure message is kinda unreadable... I guess the problem is that we do not 
> wait long enough, as the completion of the span can be executed concurrently 
> with the normal scan operation.
> And also I saw this in the test code
> {code}
> // RawAsyncTableImpl never invokes the callback to `onScanMetricsCreated` 
> -- bug?
> {code}
> This is not a bug as you need to manually enable scan metrics by calling 
> Scan.setScanMetricsEnabled(true).
> Let me also fix this.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HBASE-27360) The trace related assertion is flaky for async client tests

2022-09-06 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-27360:
--
Component/s: test
 tracing

> The trace related assertion is flaky for async client tests
> ---
>
> Key: HBASE-27360
> URL: https://issues.apache.org/jira/browse/HBASE-27360
> Project: HBase
>  Issue Type: Bug
>  Components: test, tracing
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
>
> https://ci-hbase.apache.org/job/HBase-Flaky-Tests/job/master/4167/testReport/junit/org.apache.hadoop.hbase.client/TestAsyncTableScanner/testScanWrongColumnFamily_0__table_raw__scan_normal_/
> The failure message is kinda unreadable... I guess the problem is that we do not 
> wait long enough, as the completion of the span can be executed concurrently 
> with the normal scan operation.
> And also I saw this in the test code
> {code}
> // RawAsyncTableImpl never invokes the callback to `onScanMetricsCreated` 
> -- bug?
> {code}
> This is not a bug as you need to manually enable scan metrics by calling 
> Scan.setScanMetricsEnabled(true).
> Let me also fix this.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (HBASE-27361) Add .flattened-pom.xml to .gitignore

2022-09-06 Thread Duo Zhang (Jira)
Duo Zhang created HBASE-27361:
-

 Summary: Add .flattened-pom.xml to .gitignore
 Key: HBASE-27361
 URL: https://issues.apache.org/jira/browse/HBASE-27361
 Project: HBase
  Issue Type: Improvement
  Components: build
Reporter: Duo Zhang


The flatten plugin will create a .flattened-pom.xml file in each module and 
also in the root, so we should ignore them.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (HBASE-27360) The trace related assertions are flaky for async client tests

2022-09-06 Thread Duo Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HBASE-27360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-27360:
--
Summary: The trace related assertions are flaky for async client tests  
(was: The trace related assertion is flaky for async client tests)

> The trace related assertions are flaky for async client tests
> -
>
> Key: HBASE-27360
> URL: https://issues.apache.org/jira/browse/HBASE-27360
> Project: HBase
>  Issue Type: Bug
>  Components: test, tracing
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
>
> https://ci-hbase.apache.org/job/HBase-Flaky-Tests/job/master/4167/testReport/junit/org.apache.hadoop.hbase.client/TestAsyncTableScanner/testScanWrongColumnFamily_0__table_raw__scan_normal_/
> The failure message is kinda unreadable... I guess the problem is that we do not 
> wait long enough, as the completion of the span can be executed concurrently 
> with the normal scan operation.
> And also I saw this in the test code
> {code}
> // RawAsyncTableImpl never invokes the callback to `onScanMetricsCreated` 
> -- bug?
> {code}
> This is not a bug as you need to manually enable scan metrics by calling 
> Scan.setScanMetricsEnabled(true).
> Let me also fix this.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)