[jira] [Commented] (HADOOP-18688) s3a audit info to include #of items in a DeleteObjects request

2023-05-04 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17719636#comment-17719636
 ] 

Viraj Jasani commented on HADOOP-18688:
---

Thanks!

PR is ready for review [https://github.com/apache/hadoop/pull/5621]

> s3a audit info to include #of items in a DeleteObjects request
> --
>
> Key: HADOOP-18688
> URL: https://issues.apache.org/jira/browse/HADOOP-18688
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.5
>Reporter: Steve Loughran
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> it would be good to find out how many files were deleted in a DeleteObjects 
> call
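
A minimal sketch of the count in question, assuming the AWS SDK v1 `DeleteObjectsRequest` API that S3A uses on the 3.3 line; the class and method names are invented for illustration and this is not the change in PR #5621:

```java
import java.util.Arrays;
import java.util.List;

import com.amazonaws.services.s3.model.DeleteObjectsRequest;
import com.amazonaws.services.s3.model.DeleteObjectsRequest.KeyVersion;

/**
 * Illustration only: count the keys named in a bulk delete so an audit span
 * could record the number. Hypothetical class, not the code in PR #5621.
 */
public final class DeleteObjectsAuditSketch {

  /** Number of objects named in a DeleteObjects request; 0 if none. */
  static int itemCount(DeleteObjectsRequest request) {
    List<KeyVersion> keys = request.getKeys();
    return keys == null ? 0 : keys.size();
  }

  public static void main(String[] args) {
    DeleteObjectsRequest request = new DeleteObjectsRequest("example-bucket");
    request.setKeys(Arrays.asList(
        new KeyVersion("dir/file1"), new KeyVersion("dir/file2")));
    // The auditing code could attach this value to the active audit span.
    System.out.println("DeleteObjects items: " + itemCount(request));
  }
}
```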






[jira] [Updated] (HADOOP-18688) s3a audit info to include #of items in a DeleteObjects request

2023-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-18688:

Labels: pull-request-available  (was: )

> s3a audit info to include #of items in a DeleteObjects request
> --
>
> Key: HADOOP-18688
> URL: https://issues.apache.org/jira/browse/HADOOP-18688
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.5
>Reporter: Steve Loughran
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> it would be good to find out how many files were deleted in a DeleteObjects 
> call






[jira] [Commented] (HADOOP-18688) s3a audit info to include #of items in a DeleteObjects request

2023-05-04 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17719635#comment-17719635
 ] 

ASF GitHub Bot commented on HADOOP-18688:
-

virajjasani commented on PR #5621:
URL: https://github.com/apache/hadoop/pull/5621#issuecomment-1535711181

   `us-west-2`
   
   ```
   mvn clean verify -Dparallel-tests -DtestsThreadCount=8 -Dscale
   
   mvn clean verify -Dparallel-tests -DtestsThreadCount=8 -Dscale -Dprefetch
   ```




> s3a audit info to include #of items in a DeleteObjects request
> --
>
> Key: HADOOP-18688
> URL: https://issues.apache.org/jira/browse/HADOOP-18688
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.5
>Reporter: Steve Loughran
>Assignee: Viraj Jasani
>Priority: Major
>
> it would be good to find out how many files were deleted in a DeleteObjects 
> call






[GitHub] [hadoop] virajjasani commented on pull request #5621: HADOOP-18688. S3A audit info to include num of items in delete ops

2023-05-04 Thread via GitHub


virajjasani commented on PR #5621:
URL: https://github.com/apache/hadoop/pull/5621#issuecomment-1535711181

   `us-west-2`
   
   ```
   mvn clean verify -Dparallel-tests -DtestsThreadCount=8 -Dscale
   
   mvn clean verify -Dparallel-tests -DtestsThreadCount=8 -Dscale -Dprefetch
   ```





[jira] [Commented] (HADOOP-18732) Exclude Jettison from jersery-json artifact in hadoop-yarn-common's pom.xml

2023-05-04 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17719625#comment-17719625
 ] 

ASF GitHub Bot commented on HADOOP-18732:
-

devaspatikrishnatri opened a new pull request, #5623:
URL: https://github.com/apache/hadoop/pull/5623

   …version is being pulled
   
   
   
   ### Description of PR
   
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> Exclude Jettison from jersery-json artifact in hadoop-yarn-common's pom.xml
> ---
>
> Key: HADOOP-18732
> URL: https://issues.apache.org/jira/browse/HADOOP-18732
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Devaspati Krishnatri
>Priority: Major
>







[jira] [Commented] (HADOOP-18732) Exclude Jettison from jersery-json artifact in hadoop-yarn-common's pom.xml

2023-05-04 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17719626#comment-17719626
 ] 

ASF GitHub Bot commented on HADOOP-18732:
-

devaspatikrishnatri commented on PR #5623:
URL: https://github.com/apache/hadoop/pull/5623#issuecomment-1535690226

   @szilard-nemeth Could you please check whether this is okay?




> Exclude Jettison from jersery-json artifact in hadoop-yarn-common's pom.xml
> ---
>
> Key: HADOOP-18732
> URL: https://issues.apache.org/jira/browse/HADOOP-18732
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Devaspati Krishnatri
>Priority: Major
>  Labels: pull-request-available
>







[jira] [Updated] (HADOOP-18732) Exclude Jettison from jersery-json artifact in hadoop-yarn-common's pom.xml

2023-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-18732:

Labels: pull-request-available  (was: )

> Exclude Jettison from jersery-json artifact in hadoop-yarn-common's pom.xml
> ---
>
> Key: HADOOP-18732
> URL: https://issues.apache.org/jira/browse/HADOOP-18732
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Devaspati Krishnatri
>Priority: Major
>  Labels: pull-request-available
>







[GitHub] [hadoop] devaspatikrishnatri commented on pull request #5623: HADOOP-18732:Exclude jettison from jersery-json artifact as on older …

2023-05-04 Thread via GitHub


devaspatikrishnatri commented on PR #5623:
URL: https://github.com/apache/hadoop/pull/5623#issuecomment-1535690226

   @szilard-nemeth Could you please check whether this is okay?





[GitHub] [hadoop] devaspatikrishnatri opened a new pull request, #5623: HADOOP-18732:Exclude jettison from jersery-json artifact as on older …

2023-05-04 Thread via GitHub


devaspatikrishnatri opened a new pull request, #5623:
URL: https://github.com/apache/hadoop/pull/5623

   …version is being pulled
   
   
   
   ### Description of PR
   
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   





[jira] [Created] (HADOOP-18732) Exclude Jettison from jersery-json artifact in hadoop-yarn-common's pom.xml

2023-05-04 Thread Devaspati Krishnatri (Jira)
Devaspati Krishnatri created HADOOP-18732:
-

 Summary: Exclude Jettison from jersery-json artifact in 
hadoop-yarn-common's pom.xml
 Key: HADOOP-18732
 URL: https://issues.apache.org/jira/browse/HADOOP-18732
 Project: Hadoop Common
  Issue Type: Task
Reporter: Devaspati Krishnatri









[GitHub] [hadoop] Hexiaoqiao commented on pull request #5583: HDFS-16987. [BugFix] MarkBlockAsCorrupt should not mark a replica as corrupted if the DN has a newest replica

2023-05-04 Thread via GitHub


Hexiaoqiao commented on PR #5583:
URL: https://github.com/apache/hadoop/pull/5583#issuecomment-1535684657

   @ZanderXu thanks for your detailed comments. IIUC, the replica's GS is 
monotonically increasing on the DataNode side, right? If so, I think we could 
improve this by relying on a basic rule on the NameNode side:
   a. ignore a reported replica whose GS is older than the one the NameNode 
already records for the same DataNode.
   b. mark a replica as corrupt when its GS matches what the NameNode maintains 
for that DataNode but is older than the GS of the other replicas.
   If this is true, improving `markBlockAsCorrupt` could fix this issue. Please 
correct me if I am missing some information. Thanks.
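
A self-contained toy model of rules (a) and (b) above, with invented class, method, and parameter names; this is only an illustration, not the actual `markBlockAsCorrupt` code:

```java
/** Toy model of the generation-stamp rule sketched above; hypothetical names. */
public final class ReplicaReportRule {

  enum Action { IGNORE_STALE_REPORT, MARK_CORRUPT, ACCEPT }

  /**
   * @param reportedGs    generation stamp carried in the (possibly postponed) report
   * @param recordedGs    generation stamp the NameNode currently records for this DataNode's replica
   * @param latestBlockGs newest generation stamp among all replicas of the block
   */
  static Action decide(long reportedGs, long recordedGs, long latestBlockGs) {
    if (reportedGs < recordedGs) {
      // rule (a): the report is older than what the NameNode already knows about
      return Action.IGNORE_STALE_REPORT;
    }
    if (reportedGs == recordedGs && reportedGs < latestBlockGs) {
      // rule (b): the replica really is behind the rest of the block
      return Action.MARK_CORRUPT;
    }
    return Action.ACCEPT;
  }

  public static void main(String[] args) {
    // Postponed report blk_1024_1001 while the DN actually holds blk_1024_1002.
    System.out.println(decide(1001, 1002, 1002)); // IGNORE_STALE_REPORT
    // Report matches the NameNode's record but is behind the other replicas.
    System.out.println(decide(1001, 1001, 1002)); // MARK_CORRUPT
  }
}
```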





[GitHub] [hadoop] Hexiaoqiao commented on pull request #5622: HDFS-16999. Fix wrong use of processFirstBlockReport().

2023-05-04 Thread via GitHub


Hexiaoqiao commented on PR #5622:
URL: https://github.com/apache/hadoop/pull/5622#issuecomment-1535670070

   @zhangshuyan0 Thanks for your proposal and for trying to fix this issue. 
Having just glanced at the PR, it proposes switching from 
`processFirstBlockReport` to `processReport` only when a DataNode restarts, 
right? I am concerned about the performance impact if we do that.
   a. Is it possible to improve `processFirstBlockReport` to solve this issue?
   b. Will it affect the processing logic or performance when adding a new 
DataNode to the cluster?
   Thanks again.





[GitHub] [hadoop] ZanderXu commented on pull request #5583: HDFS-16987. [BugFix] MarkBlockAsCorrupt should not mark a replica as corrupted if the DN has a newest replica

2023-05-04 Thread via GitHub


ZanderXu commented on PR #5583:
URL: https://github.com/apache/hadoop/pull/5583#issuecomment-1535657249

   @Hexiaoqiao @ayushtkn After thinking about this more deeply, maybe we can 
only fix this problem in `processAllPendingDNMessages`, because the namenode 
doesn't know whether a report is consistent with the actual replica storage 
information of the DataNode.
   
   **Case 1: The report with the smaller GS is a postponed report, which differs 
from the actual replica on the datanode.**
   For example:
   
   - The actual replica on the DN is: blk_1024_1002
   - The postponed report is: blk_1024_1001
   
   For this case, the namenode can ignore the postponed report and not mark it 
as a corrupted replica. 
   
   **Case 2: The report with the smaller GS is the newest report, which matches 
the actual replica on the datanode.**
   For example:
   
   - The actual replica on the DN is: blk_1024_1001
   - The report is: blk_1024_1001
   - The storages of this block in the namenode already contain this DN
   
   For this case, the namenode shouldn't ignore the report; it should mark this 
replica as a corrupted replica. Manually modifying block storage files on the 
DataNode may cause this problem.
   
   At present, the namenode can only assume that each report is the newest one 
and then update the status of the block in its memory, because the datanode 
reports its state to the NN only through block reports or 
blockReceivedAndDeleted.
   
   If we modify the logic of `markBlockAsCorrupt`, the namenode will not be able 
to mark the replica as a corrupted replica for case 2.
   If we modify the logic of `processAllPendingDNMessages`, the postponed 
message will be temporarily ignored for case 2, and the active namenode will 
mark it as a corrupted replica on the next block report from the corresponding 
DN.





[GitHub] [hadoop] hadoop-yetus commented on pull request #5591: HDFS-16991. fix testMkdirsRaceWithObserverRead

2023-05-04 Thread via GitHub


hadoop-yetus commented on PR #5591:
URL: https://github.com/apache/hadoop/pull/5591#issuecomment-1535655249

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  35m 47s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 52s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 11s |  |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   1m 10s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 18s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 43s |  |  trunk passed  |
   | +1 :green_heart: |  spotbugs  |   3m 21s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 54s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  1s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  1s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m  7s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   3m  4s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 28s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 204m 39s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5591/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 52s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 334m 25s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5591/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5591 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux fcb3894283a7 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 44945bf9dc678a43026dac92d350900f5a677136 |
   | Default Java | Red Hat, Inc.-1.8.0_362-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5591/4/testReport/ |
   | Max. process+thread count | 4181 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5591/4/console |
   | versions | git=2.9.5 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





[GitHub] [hadoop] zhangshuyan0 opened a new pull request, #5622: HDFS-16999. Fix wrong use of processFirstBlockReport().

2023-05-04 Thread via GitHub


zhangshuyan0 opened a new pull request, #5622:
URL: https://github.com/apache/hadoop/pull/5622

   ### Description of PR
   `processFirstBlockReport()` is used to process the first block report from a 
datanode. It does not calculate the `toRemove` list because it assumes that 
there is no metadata about the datanode in the namenode. However, if a datanode 
is re-registered after restarting, its `blockReportCount` will be reset to 0. 
   
https://github.com/apache/hadoop/blob/c7699d3dcd4f8feaf2c5ae5943b8a4cec738e95d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeDescriptor.java#L1007-L1012
   That is to say, the first block report after a datanode restarts will be 
processed by `processFirstBlockReport()`. 
   
https://github.com/apache/hadoop/blob/c7699d3dcd4f8feaf2c5ae5943b8a4cec738e95d/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java#L2916-L2925
   This is unreasonable because metadata for the datanode already exists in the 
namenode at this point, and if redundant replica metadata is not removed in 
time, blocks with insufficient replicas cannot be reconstructed in time, 
which increases the risk of missing blocks. In summary, 
`processFirstBlockReport()` should only be used when the namenode restarts, not 
when a datanode restarts.
   ### How was this patch tested?
   Add a new unit test.
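
A toy model of the dispatch problem described above, using invented names; it only illustrates why resetting `blockReportCount` on re-registration makes the "first report" fast path unsafe, and is not the actual `BlockManager` code:

```java
/** Toy model of the block-report dispatch question; hypothetical names. */
public final class BlockReportDispatchSketch {

  /** Minimal stand-in for the per-storage state the NameNode keeps. */
  static final class StorageState {
    final int blockReportCount;   // reset to 0 when the DataNode re-registers
    final int blocksOnStorage;    // replicas the NameNode still records for this storage

    StorageState(int blockReportCount, int blocksOnStorage) {
      this.blockReportCount = blockReportCount;
      this.blocksOnStorage = blocksOnStorage;
    }
  }

  /**
   * The fast "first report" path skips the toRemove diff, so it is only safe
   * when the NameNode really has nothing recorded for the storage yet.
   */
  static boolean useFirstReportFastPath(StorageState s) {
    return s.blockReportCount == 0 && s.blocksOnStorage == 0;
  }

  public static void main(String[] args) {
    // NameNode restart: nothing recorded yet -> fast path is fine.
    System.out.println(useFirstReportFastPath(new StorageState(0, 0)));      // true
    // DataNode restart: count was reset but metadata still exists -> full diff.
    System.out.println(useFirstReportFastPath(new StorageState(0, 1_000))); // false
  }
}
```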
   





[GitHub] [hadoop] szilard-nemeth closed pull request #5380: YARN-11079. Make an AbstractParentQueue to store common ParentQueue and ManagedParentQueue functionality

2023-05-04 Thread via GitHub


szilard-nemeth closed pull request #5380: YARN-11079. Make an 
AbstractParentQueue to store common ParentQueue and ManagedParentQueue 
functionality
URL: https://github.com/apache/hadoop/pull/5380





[GitHub] [hadoop] szilard-nemeth commented on pull request #5380: YARN-11079. Make an AbstractParentQueue to store common ParentQueue and ManagedParentQueue functionality

2023-05-04 Thread via GitHub


szilard-nemeth commented on PR #5380:
URL: https://github.com/apache/hadoop/pull/5380#issuecomment-1535613449

   Thanks @susheel-gupta for the explanation,
   Makes sense.
   Latest patch LGTM, committed to trunk.
   





[jira] [Commented] (HADOOP-18723) Add detail logs if distcp checksum mismatch

2023-05-04 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17719588#comment-17719588
 ] 

ASF GitHub Bot commented on HADOOP-18723:
-

symious commented on code in PR #5603:
URL: https://github.com/apache/hadoop/pull/5603#discussion_r1185646826


##
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java:
##
@@ -596,6 +596,8 @@ public static CopyMapper.ChecksumComparison 
checksumsAreEqual(
 } else if (sourceChecksum.equals(targetChecksum)) {
   return CopyMapper.ChecksumComparison.TRUE;
 }
+LOG.info("Checksum not equal. Source checksum: {}, target checksum: {}",
+sourceChecksum, targetChecksum);

Review Comment:
   In our failed jobs, the mismatched source and target paths were printed by 
https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java#L633.
   
   Thanks for pointing that out. The paths have also been added to DistCpUtils.



##
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/TestCopyCommitter.java:
##
@@ -569,6 +572,7 @@ private void testCommitWithChecksumMismatch(boolean skipCrc)
 fs, new Path(sourceBase + srcFilename), null,
 fs, new Path(targetBase + srcFilename),
 sourceCurrStatus.getLen()));
+assertThat(log.getOutput(), containsString("Checksum not equal"));

Review Comment:
   Copied the hamcrest version from 
"org.apache.hadoop.conf.TestReconfiguration", but changed it to the 
Assertions.assertThat one.
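
For reference, a minimal self-contained sketch of the AssertJ-style assertion mentioned above; the captured log text is hard-coded here, whereas the real test reads it from the quoted `log.getOutput()`:

```java
import static org.assertj.core.api.Assertions.assertThat;

/** Illustration only; the log text is hard-coded instead of captured. */
public class ChecksumLogAssertionSketch {
  public static void main(String[] args) {
    String capturedLog =
        "INFO DistCpUtils: Checksum not equal. Source checksum: ..., target checksum: ...";
    // Hamcrest style: assertThat(capturedLog, containsString("Checksum not equal"));
    // AssertJ style:
    assertThat(capturedLog).contains("Checksum not equal");
  }
}
```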
   
   





> Add detail logs if distcp checksum mismatch
> ---
>
> Key: HADOOP-18723
> URL: https://issues.apache.org/jira/browse/HADOOP-18723
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Janus Chow
>Assignee: Janus Chow
>Priority: Major
>  Labels: pull-request-available
>
> We encountered some checksum mismatch errors during DistCp jobs. It took 
> us some time to figure out that the checksum type was different.
> Adding error logs should help us figure out such problems faster.






[GitHub] [hadoop] symious commented on a diff in pull request #5603: HADOOP-18723. Add detail logs if distcp checksum mismatch

2023-05-04 Thread via GitHub


symious commented on code in PR #5603:
URL: https://github.com/apache/hadoop/pull/5603#discussion_r1185646826


##
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java:
##
@@ -596,6 +596,8 @@ public static CopyMapper.ChecksumComparison 
checksumsAreEqual(
 } else if (sourceChecksum.equals(targetChecksum)) {
   return CopyMapper.ChecksumComparison.TRUE;
 }
+LOG.info("Checksum not equal. Source checksum: {}, target checksum: {}",
+sourceChecksum, targetChecksum);

Review Comment:
   In our failed jobs, the mismatched source and target paths were printed by 
https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java#L633.
   
   Thanks for pointing that out. The paths have also been added to DistCpUtils.



##
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/TestCopyCommitter.java:
##
@@ -569,6 +572,7 @@ private void testCommitWithChecksumMismatch(boolean skipCrc)
 fs, new Path(sourceBase + srcFilename), null,
 fs, new Path(targetBase + srcFilename),
 sourceCurrStatus.getLen()));
+assertThat(log.getOutput(), containsString("Checksum not equal"));

Review Comment:
   Copied the hamcrest version from 
"org.apache.hadoop.conf.TestReconfiguration", but changed it to the 
Assertions.assertThat one.
   
   






[jira] [Assigned] (HADOOP-18688) s3a audit info to include #of items in a DeleteObjects request

2023-05-04 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reassigned HADOOP-18688:
-

Assignee: Viraj Jasani

> s3a audit info to include #of items in a DeleteObjects request
> --
>
> Key: HADOOP-18688
> URL: https://issues.apache.org/jira/browse/HADOOP-18688
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.5
>Reporter: Steve Loughran
>Assignee: Viraj Jasani
>Priority: Major
>
> it would be good to find out how many files were deleted in a DeleteObjects 
> call






[jira] [Commented] (HADOOP-18671) Add recoverLease(), setSafeMode(), isFileClosed() APIs to FileSystem

2023-05-04 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17719566#comment-17719566
 ] 

ASF GitHub Bot commented on HADOOP-18671:
-

jojochuang opened a new pull request, #5620:
URL: https://github.com/apache/hadoop/pull/5620

   The HDFS lease APIs have been replicated as interfaces in hadoop-common so 
other filesystems can also implement them.  Applications which use the leasing 
APIs should migrate to the new interface where possible.
   
   Contributed by Stephen Wu
   
   (cherry picked from commit 0e46388474f52cd38578d946c26ba762be05ef1f)
   
Conflicts:

hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java

hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUpgrade.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestViewDistributedFileSystem.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImageWithAcl.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImageWithSnapshot.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeRetryCacheMetrics.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestFSImageWithOrderedSnapshotDeletion.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestOrderedSnapshotDeletion.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDeletion.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewer.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewerForErasureCodingPolicy.java
   
   Change-Id: I2ccd0b6780a86610df61d8528e681db0451e2e4c (cherry picked from 
commit 207972692aebd02adda0458fd85fc7e813396fc8)
   
Conflicts:

hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonPathCapabilities.java

hadoop-common-project/hadoop-common/src/site/markdown/filesystem/index.md

hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java

hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ViewDistributedFileSystem.java

hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java

hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpcSingleNS.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestRollingUpgrade.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImage.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImageWithAcl.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestNNHealthCheck.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestXAttrWithSnapshot.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewer.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewerForStoragePolicy.java
   
   Change-Id: Ieaa669eb43eb6c1b79b48ff862b438ae9611c08e
   
   
   
   ### Description of PR
   
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> Add recoverLease(), setSafeMode(), isFileClosed() APIs to FileSystem
> 
>
> Key: HADOOP-18671
> URL: https://issues.apache.org/jira/browse/HADOOP-18671
>   

[GitHub] [hadoop] jojochuang opened a new pull request, #5620: HADOOP-18671. Add recoverLease(), setSafeMode(), isFileClosed() as interfaces to hadoop-common (#5553)

2023-05-04 Thread via GitHub


jojochuang opened a new pull request, #5620:
URL: https://github.com/apache/hadoop/pull/5620

   The HDFS lease APIs have been replicated as interfaces in hadoop-common so 
other filesystems can also implement them.  Applications which use the leasing 
APIs should migrate to the new interface where possible.
   
   Contributed by Stephen Wu
   
   (cherry picked from commit 0e46388474f52cd38578d946c26ba762be05ef1f)
   
Conflicts:

hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java

hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUpgrade.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestViewDistributedFileSystem.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImageWithAcl.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImageWithSnapshot.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeRetryCacheMetrics.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestFSImageWithOrderedSnapshotDeletion.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestOrderedSnapshotDeletion.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDeletion.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewer.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewerForErasureCodingPolicy.java
   
   Change-Id: I2ccd0b6780a86610df61d8528e681db0451e2e4c (cherry picked from 
commit 207972692aebd02adda0458fd85fc7e813396fc8)
   
Conflicts:

hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonPathCapabilities.java

hadoop-common-project/hadoop-common/src/site/markdown/filesystem/index.md

hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java

hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/ViewDistributedFileSystem.java

hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java

hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpcSingleNS.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestRollingUpgrade.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImage.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImageWithAcl.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestNNHealthCheck.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestXAttrWithSnapshot.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewer.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewerForStoragePolicy.java
   
   Change-Id: Ieaa669eb43eb6c1b79b48ff862b438ae9611c08e
   
   
   
   ### Description of PR
   
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   



[jira] [Assigned] (HADOOP-18731) Backport HADOOP-18671 to branch-3.2

2023-05-04 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HADOOP-18731:


Assignee: Wei-Chiu Chuang

> Backport HADOOP-18671 to branch-3.2
> ---
>
> Key: HADOOP-18731
> URL: https://issues.apache.org/jira/browse/HADOOP-18731
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>







[jira] [Commented] (HADOOP-18671) Add recoverLease(), setSafeMode(), isFileClosed() APIs to FileSystem

2023-05-04 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17719542#comment-17719542
 ] 

ASF GitHub Bot commented on HADOOP-18671:
-

jojochuang opened a new pull request, #5619:
URL: https://github.com/apache/hadoop/pull/5619

   The HDFS lease APIs have been replicated as interfaces in hadoop-common so 
other filesystems can also implement them.  Applications which use the leasing 
APIs should migrate to the new interface where possible.
   
   Contributed by Stephen Wu
   
   (cherry picked from commit 0e46388474f52cd38578d946c26ba762be05ef1f)
   
Conflicts:

hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java

hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUpgrade.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestViewDistributedFileSystem.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImageWithAcl.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImageWithSnapshot.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeRetryCacheMetrics.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestFSImageWithOrderedSnapshotDeletion.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestOrderedSnapshotDeletion.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDeletion.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewer.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewerForErasureCodingPolicy.java
   
   Change-Id: I2ccd0b6780a86610df61d8528e681db0451e2e4c
   
   
   
   ### Description of PR
   
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> Add recoverLease(), setSafeMode(), isFileClosed() APIs to FileSystem
> 
>
> Key: HADOOP-18671
> URL: https://issues.apache.org/jira/browse/HADOOP-18671
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Wei-Chiu Chuang
>Assignee: Tak-Lon (Stephen) Wu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> We are in the midst of enabling HBase and Solr to run on Ozone.
> An obstacle is that HBase relies heavily on HDFS APIs and semantics for its 
> Write Ahead Log (WAL) file (similarly, for Solr's transaction log). We 
> propose to push up these HDFS APIs, i.e. recoverLease(), setSafeMode(), 
> isFileClosed() to FileSystem abstraction so that HBase and other applications 
> do not need to take on Ozone dependency at compile time. This work will 
> (hopefully) enable HBase to run on other storage system implementations in 
> the future.
> There are other HDFS features that HBase uses, including hedged read and 
> favored nodes. Those are FS-specific optimizations and are not critical to 
> enable HBase on Ozone.
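
A sketch of how a client such as HBase could call lease recovery through a filesystem-neutral interface instead of casting to `DistributedFileSystem`; the nested `LeaseRecoverable` interface below is only a stand-in for whatever hadoop-common ships for this, so treat the names as placeholders rather than the actual API:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Illustrative sketch; not the interfaces added by HADOOP-18671. */
public final class LeaseRecoverySketch {

  /** Stand-in for the filesystem-neutral interface; placeholder names. */
  public interface LeaseRecoverable {
    boolean recoverLease(Path file) throws IOException;
    boolean isFileClosed(Path file) throws IOException;
  }

  static boolean tryRecoverLease(FileSystem fs, Path walFile) throws IOException {
    if (fs instanceof LeaseRecoverable) {
      // Works for any filesystem that implements the interface (HDFS, Ozone, ...).
      return ((LeaseRecoverable) fs).recoverLease(walFile);
    }
    // Filesystems without leases (e.g. the local FS) have nothing to recover.
    return true;
  }

  public static void main(String[] args) throws IOException {
    FileSystem local = FileSystem.getLocal(new Configuration());
    // The local FS does not implement the stand-in interface, so this prints true.
    System.out.println(tryRecoverLease(local, new Path("/tmp/example-wal")));
  }
}
```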






[GitHub] [hadoop] jojochuang opened a new pull request, #5619: HADOOP-18671. Add recoverLease(), setSafeMode(), isFileClosed() as interfaces to hadoop-common (#5553)

2023-05-04 Thread via GitHub


jojochuang opened a new pull request, #5619:
URL: https://github.com/apache/hadoop/pull/5619

   The HDFS lease APIs have been replicated as interfaces in hadoop-common so 
other filesystems can also implement them.  Applications which use the leasing 
APIs should migrate to the new interface where possible.
   
   Contributed by Stephen Wu
   
   (cherry picked from commit 0e46388474f52cd38578d946c26ba762be05ef1f)
   
Conflicts:

hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java

hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSUpgrade.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestEncryptionZones.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestViewDistributedFileSystem.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImageWithAcl.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSImageWithSnapshot.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNameNodeRetryCacheMetrics.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestFSImageWithOrderedSnapshotDeletion.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestOrderedSnapshotDeletion.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDeletion.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewer.java

hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/TestOfflineImageViewerForErasureCodingPolicy.java
   
   Change-Id: I2ccd0b6780a86610df61d8528e681db0451e2e4c
   
   
   
   ### Description of PR
   
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   





[jira] [Commented] (HADOOP-18724) Open file fails with NumberFormatException for S3AFileSystem

2023-05-04 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17719530#comment-17719530
 ] 

ASF GitHub Bot commented on HADOOP-18724:
-

hadoop-yetus commented on PR #5611:
URL: https://github.com/apache/hadoop/pull/5611#issuecomment-1535415371

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 36s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 5 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  16m 14s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  19m 35s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  15m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |  14m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   3m 45s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   5m  8s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   4m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   4m  2s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   8m  7s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 12s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 43s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  14m 56s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |  14m 56s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  14m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |  14m 21s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5611/2/artifact/out/blanks-eol.txt)
 |  The patch has 2 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   3m 36s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5611/2/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 2 new + 351 unchanged - 0 fixed = 353 total (was 
351)  |
   | +1 :green_heart: |  mvnsite  |   5m  5s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   4m  6s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   4m  1s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   8m 53s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 33s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 20s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   5m 42s |  |  hadoop-yarn-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   7m  0s |  |  hadoop-mapreduce-client-core in 
the patch passed.  |
   | +1 :green_heart: |  unit  |   1m  8s |  |  hadoop-mapreduce-examples in 
the patch passed.  |
   | +1 :green_heart: |  unit  |   2m 40s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   1m  3s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 234m 30s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5611/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5611 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | Linux 0d7490c92bcb 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 

[GitHub] [hadoop] hadoop-yetus commented on pull request #5611: HADOOP-18724. Open file fails with NumberFormatException for S3AFileSystem

2023-05-04 Thread via GitHub


hadoop-yetus commented on PR #5611:
URL: https://github.com/apache/hadoop/pull/5611#issuecomment-1535415371

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 36s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 5 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  16m 14s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  19m 35s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  15m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |  14m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   3m 45s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   5m  8s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   4m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   4m  2s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   8m  7s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 12s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 43s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  14m 56s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |  14m 56s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  14m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |  14m 21s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5611/2/artifact/out/blanks-eol.txt)
 |  The patch has 2 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   3m 36s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5611/2/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 2 new + 351 unchanged - 0 fixed = 353 total (was 
351)  |
   | +1 :green_heart: |  mvnsite  |   5m  5s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   4m  6s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   4m  1s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   8m 53s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 33s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 20s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   5m 42s |  |  hadoop-yarn-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   7m  0s |  |  hadoop-mapreduce-client-core in 
the patch passed.  |
   | +1 :green_heart: |  unit  |   1m  8s |  |  hadoop-mapreduce-examples in 
the patch passed.  |
   | +1 :green_heart: |  unit  |   2m 40s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   1m  3s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 234m 30s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5611/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5611 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | Linux 0d7490c92bcb 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / aaf35ac212f0c048849ab9d97ed70c220b9afbc3 |
   | Default Java | Private 

[jira] [Commented] (HADOOP-18706) The temporary files for disk-block buffer aren't unique enough to recover partial uploads. 

2023-05-04 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17719529#comment-17719529
 ] 

ASF GitHub Bot commented on HADOOP-18706:
-

hadoop-yetus commented on PR #5563:
URL: https://github.com/apache/hadoop/pull/5563#issuecomment-1535407973

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 49s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m  8s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 36s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 11s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 32s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |   0m 23s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   1m  3s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 39s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 19s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 33s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 101m 11s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5563/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5563 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 5f590dc76341 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 7b5f48a240105c6f8e203780d00e31852a11c92b |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
 /usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5563/10/testReport/ |
   | Max. process+thread count | 530 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5563/10/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> The temporary files for 

[GitHub] [hadoop] hadoop-yetus commented on pull request #5563: HADOOP-18706: Improve S3ABlockOutputStream recovery

2023-05-04 Thread via GitHub


hadoop-yetus commented on PR #5563:
URL: https://github.com/apache/hadoop/pull/5563#issuecomment-1535407973

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 49s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m  8s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 36s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 11s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 32s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |   0m 23s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   1m  3s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 39s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 19s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 33s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 101m 11s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5563/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5563 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 5f590dc76341 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 7b5f48a240105c6f8e203780d00e31852a11c92b |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
 /usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5563/10/testReport/ |
   | Max. process+thread count | 530 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5563/10/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please 

[jira] [Commented] (HADOOP-18706) The temporary files for disk-block buffer aren't unique enough to recover partial uploads. 

2023-05-04 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17719516#comment-17719516
 ] 

ASF GitHub Bot commented on HADOOP-18706:
-

cbevard1 commented on PR #5563:
URL: https://github.com/apache/hadoop/pull/5563#issuecomment-1535323312

   Shoot. That was in the last checkstyle report, and I thought I had bumped it out 
to 113. My IDE says it was indented to 113, but it wasn't. I should have run 
checkstyle before committing. Anyway, my mistake. I reviewed the checkstyle report 
before committing this time, and it should be good now.




> The temporary files for disk-block buffer aren't unique enough to recover 
> partial uploads. 
> ---
>
> Key: HADOOP-18706
> URL: https://issues.apache.org/jira/browse/HADOOP-18706
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Chris Bevard
>Priority: Minor
>  Labels: pull-request-available
>
> If an application crashes during an S3ABlockOutputStream upload and 
> fast.upload.buffer is set to disk, it's possible to complete the upload by 
> uploading the s3ablock file with putObject as the final part of the multipart 
> upload. If the application has multiple uploads running in parallel, though, 
> and they're on the same part number when the application fails, then there is 
> no way to determine which file belongs to which object, and recovery of 
> either upload is impossible.
> If the temporary file name for disk buffering included the s3 key, then every 
> partial upload would be recoverable.
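
A minimal sketch of the naming idea described above, assuming the buffer directory, 
span id and part number are already available when the block file is created; the 
class, method and file-name pattern below are hypothetical illustrations, not the 
names used in the actual change:

```java
// Illustrative sketch only: one way to make disk-buffer temp file names unique
// per object. All names here are hypothetical.
import java.io.File;
import java.io.IOException;

public final class BlockTempFileNames {

  private BlockTempFileNames() {
  }

  /** Strip characters that are unsafe in local file names. */
  static String sanitize(String s3Key) {
    return s3Key.replaceAll("[^A-Za-z0-9._-]", "_");
  }

  /**
   * Create a temp file whose name carries the span id, the sanitized S3 key and
   * the part number, so a partial upload found after a crash can be matched back
   * to its object and part.
   */
  static File createBlockTempFile(File bufferDir, String spanId, String s3Key,
      int partNumber) throws IOException {
    String prefix = String.format("s3ablock-%s-%s-%04d-",
        spanId, sanitize(s3Key), partNumber);
    return File.createTempFile(prefix, ".tmp", bufferDir);
  }
}
```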



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18723) Add detail logs if distcp checksum mismatch

2023-05-04 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17719515#comment-17719515
 ] 

ASF GitHub Bot commented on HADOOP-18723:
-

ayushtkn commented on code in PR #5603:
URL: https://github.com/apache/hadoop/pull/5603#discussion_r1185326966


##
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java:
##
@@ -596,6 +596,8 @@ public static CopyMapper.ChecksumComparison 
checksumsAreEqual(
 } else if (sourceChecksum.equals(targetChecksum)) {
   return CopyMapper.ChecksumComparison.TRUE;
 }
+LOG.info("Checksum not equal. Source checksum: {}, target checksum: {}",
+sourceChecksum, targetChecksum);

Review Comment:
   You aren't including the paths for which the checksum mismatch happened. If there 
are thousands of files being copied and a bunch of them log this,
   how would you figure out whose checksum didn't match?
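
A sketch of what the suggestion might look like, assuming the source and target Path 
values are in scope at this point in checksumsAreEqual; the variable names are 
illustrative rather than taken from the patch:

```java
// Illustrative only: carry the paths so a mismatch can be traced back to a file.
LOG.info("Checksum mismatch between {} ({}) and {} ({})",
    source, sourceChecksum, target, targetChecksum);
```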





> Add detail logs if distcp checksum mismatch
> ---
>
> Key: HADOOP-18723
> URL: https://issues.apache.org/jira/browse/HADOOP-18723
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Janus Chow
>Assignee: Janus Chow
>Priority: Major
>  Labels: pull-request-available
>
> We encountered some checksum mismatch errors during DistCp jobs. It took 
> us some time to figure out that the checksum types were different.
> Adding error logs will help us figure out such problems faster.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] cbevard1 commented on pull request #5563: HADOOP-18706: Improve S3ABlockOutputStream recovery

2023-05-04 Thread via GitHub


cbevard1 commented on PR #5563:
URL: https://github.com/apache/hadoop/pull/5563#issuecomment-1535323312

   Shoot. That was in the last checkstyle report, and I thought I had bumped it out 
to 113. My IDE says it was indented to 113, but it wasn't. I should have run 
checkstyle before committing. Anyway, my mistake. I reviewed the checkstyle report 
before committing this time, and it should be good now.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ayushtkn commented on a diff in pull request #5603: HADOOP-18723. Add detail logs if distcp checksum mismatch

2023-05-04 Thread via GitHub


ayushtkn commented on code in PR #5603:
URL: https://github.com/apache/hadoop/pull/5603#discussion_r1185326966


##
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java:
##
@@ -596,6 +596,8 @@ public static CopyMapper.ChecksumComparison 
checksumsAreEqual(
 } else if (sourceChecksum.equals(targetChecksum)) {
   return CopyMapper.ChecksumComparison.TRUE;
 }
+LOG.info("Checksum not equal. Source checksum: {}, target checksum: {}",
+sourceChecksum, targetChecksum);

Review Comment:
   You aren't including the paths for which the checksum mismatch happened. If there 
are thousands of files being copied and a bunch of them log this,
   how would you figure out whose checksum didn't match?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18706) The temporary files for disk-block buffer aren't unique enough to recover partial uploads. 

2023-05-04 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17719487#comment-17719487
 ] 

ASF GitHub Bot commented on HADOOP-18706:
-

steveloughran commented on PR #5563:
URL: https://github.com/apache/hadoop/pull/5563#issuecomment-1535241489

   Thanks for the run. One final bit of outstanding style: I know there are lots 
already, but new code should always start well, whenever possible.
   ```
   
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ABlockOutputArray.java:114:
diskBlockFactory.create("spanId", s3Key, 1, blockSize, null);: 
'diskBlockFactory' has incorrect indentation level 12, expected level should be 
13. [Indentation]
   ```
   




> The temporary files for disk-block buffer aren't unique enough to recover 
> partial uploads. 
> ---
>
> Key: HADOOP-18706
> URL: https://issues.apache.org/jira/browse/HADOOP-18706
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Chris Bevard
>Priority: Minor
>  Labels: pull-request-available
>
> If an application crashes during an S3ABlockOutputStream upload and 
> fast.upload.buffer is set to disk, it's possible to complete the upload by 
> uploading the s3ablock file with putObject as the final part of the multipart 
> upload. If the application has multiple uploads running in parallel, though, 
> and they're on the same part number when the application fails, then there is 
> no way to determine which file belongs to which object, and recovery of 
> either upload is impossible.
> If the temporary file name for disk buffering included the s3 key, then every 
> partial upload would be recoverable.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #5563: HADOOP-18706: Improve S3ABlockOutputStream recovery

2023-05-04 Thread via GitHub


steveloughran commented on PR #5563:
URL: https://github.com/apache/hadoop/pull/5563#issuecomment-1535241489

   Thanks for the run. One final bit of outstanding style: I know there are lots 
already, but new code should always start well, whenever possible.
   ```
   
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ABlockOutputArray.java:114:
diskBlockFactory.create("spanId", s3Key, 1, blockSize, null);: 
'diskBlockFactory' has incorrect indentation level 12, expected level should be 
13. [Indentation]
   ```
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18727) Fix WriteOperations.listMultipartUploads function description

2023-05-04 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17719485#comment-17719485
 ] 

ASF GitHub Bot commented on HADOOP-18727:
-

steveloughran commented on PR #5613:
URL: https://github.com/apache/hadoop/pull/5613#issuecomment-1535237726

   +in branch-3.3; recompiled first as a safety check




> Fix WriteOperations.listMultipartUploads function description
> -
>
> Key: HADOOP-18727
> URL: https://issues.apache.org/jira/browse/HADOOP-18727
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs/s3
>Affects Versions: 3.3.2, 3.3.3, 3.3.4
>Reporter: Dongjoon Hyun
>Assignee: Dongjoon Hyun
>Priority: Trivial
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>
> {code:java}
>    /**
> -   * Abort multipart uploads under a path: limited to the first
> -   * few hundred.
> -   * @param prefix prefix for uploads to abort
> -   * @return a count of aborts
> -   * @throws IOException trouble; FileNotFoundExceptions are swallowed.
> +   * List in-progress multipart uploads under a path
> +   * @param prefix prefix for uploads to list
> +   * @return a list of in-progress multipart uploads
> +   * @throws IOException on problems
>     */
>    List listMultipartUploads(String prefix) {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #5613: HADOOP-18727. Fix `WriteOperations.listMultipartUploads` function description

2023-05-04 Thread via GitHub


steveloughran commented on PR #5613:
URL: https://github.com/apache/hadoop/pull/5613#issuecomment-1535237726

   +in branch-3.3; recompiled first as a safety check


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18727) Fix WriteOperations.listMultipartUploads function description

2023-05-04 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18727.
-
Fix Version/s: 3.4.0
   3.3.9
 Assignee: Dongjoon Hyun
   Resolution: Fixed

> Fix WriteOperations.listMultipartUploads function description
> -
>
> Key: HADOOP-18727
> URL: https://issues.apache.org/jira/browse/HADOOP-18727
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs/s3
>Affects Versions: 3.3.2, 3.3.3, 3.3.4
>Reporter: Dongjoon Hyun
>Assignee: Dongjoon Hyun
>Priority: Trivial
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>
> {code:java}
>    /**
> -   * Abort multipart uploads under a path: limited to the first
> -   * few hundred.
> -   * @param prefix prefix for uploads to abort
> -   * @return a count of aborts
> -   * @throws IOException trouble; FileNotFoundExceptions are swallowed.
> +   * List in-progress multipart uploads under a path
> +   * @param prefix prefix for uploads to list
> +   * @return a list of in-progress multipart uploads
> +   * @throws IOException on problems
>     */
>    List listMultipartUploads(String prefix) {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18028) High performance S3A input stream with prefetching & caching

2023-05-04 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17719484#comment-17719484
 ] 

ASF GitHub Bot commented on HADOOP-18028:
-

steveloughran closed pull request #5605: HADOOP-18028. High performance S3A 
input stream (#4752)
URL: https://github.com/apache/hadoop/pull/5605




> High performance S3A input stream with prefetching & caching
> 
>
> Key: HADOOP-18028
> URL: https://issues.apache.org/jira/browse/HADOOP-18028
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Bhalchandra Pandit
>Assignee: Bhalchandra Pandit
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>  Time Spent: 14.5h
>  Remaining Estimate: 0h
>
> I work for Pinterest. I developed a technique for vastly improving read 
> throughput when reading from the S3 file system. It not only helps the 
> sequential read case (like reading a SequenceFile) but also significantly 
> improves read throughput of a random access case (like reading Parquet). This 
> technique has been very useful in significantly improving efficiency of the 
> data processing jobs at Pinterest. 
>  
> I would like to contribute that feature to Apache Hadoop. More details on 
> this technique are available in this blog I wrote recently:
> [https://medium.com/pinterest-engineering/improving-efficiency-and-reducing-runtime-using-s3-read-optimization-b31da4b60fa0]
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran closed pull request #5605: HADOOP-18028. High performance S3A input stream (#4752)

2023-05-04 Thread via GitHub


steveloughran closed pull request #5605: HADOOP-18028. High performance S3A 
input stream (#4752)
URL: https://github.com/apache/hadoop/pull/5605


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18724) Open file fails with NumberFormatException for S3AFileSystem

2023-05-04 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17719481#comment-17719481
 ] 

ASF GitHub Bot commented on HADOOP-18724:
-

steveloughran commented on PR #5611:
URL: https://github.com/apache/hadoop/pull/5611#issuecomment-1535233676

   testing
   * azure cardiff:  -Dparallel-tests=abfs -DtestsThreadCount=6
   * s3 london:  -Dparallel-tests -DtestsThreadCount=8
   




> Open file fails with NumberFormatException for S3AFileSystem
> 
>
> Key: HADOOP-18724
> URL: https://issues.apache.org/jira/browse/HADOOP-18724
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.5
>Reporter: Ayush Saxena
>Assignee: Steve Loughran
>Priority: Critical
>  Labels: pull-request-available
>
> Saw the trace for Hive-Iceberg while using the old client; it doesn't happen once 
> I upgraded.
> {noformat}
> Caused by: java.lang.NumberFormatException: For input string: "5783.0"
>   at 
> java.base/java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
>   at java.base/java.lang.Long.parseLong(Long.java:692)
>   at java.base/java.lang.Long.parseLong(Long.java:817)
>   at org.apache.hadoop.conf.Configuration.getLong(Configuration.java:1601)
>   at 
> org.apache.hadoop.fs.s3a.impl.OpenFileSupport.prepareToOpenFile(OpenFileSupport.java:262)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.openFileWithOptions(S3AFileSystem.java:5219)
>   at 
> org.apache.hadoop.fs.FileSystem$FSDataInputStreamBuilder.build(FileSystem.java:4753)
>   at 
> org.apache.iceberg.hadoop.HadoopInputFile.newStream(HadoopInputFile.java:196)
>   at 
> org.apache.iceberg.avro.AvroIterable.newFileReader(AvroIterable.java:101)
>   at 
> org.apache.iceberg.avro.AvroIterable.getMetadata(AvroIterable.java:66){noformat}
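
A minimal reproduction sketch of the failure mode, assuming the file length hint 
reaches prepareToOpenFile as a string-valued configuration option (the option name 
below is an assumption); Configuration.getLong delegates to Long.parseLong, which 
rejects a value containing a decimal point:

```java
// Illustrative reproduction only; the option name is an assumption, not
// necessarily the key the older client wrote.
import org.apache.hadoop.conf.Configuration;

public class OpenFileLengthParse {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    conf.set("fs.option.openfile.length", "5783.0"); // float-style value from an older writer
    try {
      // Configuration.getLong parses with Long.parseLong, so "5783.0" is rejected.
      conf.getLong("fs.option.openfile.length", -1L);
    } catch (NumberFormatException e) {
      System.out.println(e); // NumberFormatException: For input string: "5783.0"
    }
  }
}
```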



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #5611: HADOOP-18724. Open file fails with NumberFormatException for S3AFileSystem

2023-05-04 Thread via GitHub


steveloughran commented on PR #5611:
URL: https://github.com/apache/hadoop/pull/5611#issuecomment-1535233676

   testing
   * azure cardiff:  -Dparallel-tests=abfs -DtestsThreadCount=6
   * s3 london:  -Dparallel-tests -DtestsThreadCount=8
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5403: YARN-11340. [Federation] Improve SQLFederationStateStore DataSource Config.

2023-05-04 Thread via GitHub


hadoop-yetus commented on PR #5403:
URL: https://github.com/apache/hadoop/pull/5403#issuecomment-1535201147

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 49s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  16m  9s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 13s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   7m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   6m 42s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   1m 48s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 21s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 10s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   5m 28s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m 25s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  24m 44s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   6m 53s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   6m 53s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   6m 43s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |   6m 43s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 38s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 11s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m  7s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   2m  1s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   5m 35s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  24m 41s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m  2s |  |  hadoop-yarn-api in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   5m 15s |  |  hadoop-yarn-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   3m 12s |  |  hadoop-yarn-server-common in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 47s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 158m 52s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5403/12/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5403 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
   | uname | Linux db24c976a9a6 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 03d85b71487b947f4457e9d312605b8d37455738 |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
 /usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5403/12/testReport/ |
   | Max. process+thread count | 567 (vs. ulimit of 5500) |
   | modules 

[GitHub] [hadoop] ayushtkn commented on pull request #5200: HDFS-16865. RBF: The source path is always / after RBF proxied the complete, addBlock and getAdditionalDatanode RPC.

2023-05-04 Thread via GitHub


ayushtkn commented on PR #5200:
URL: https://github.com/apache/hadoop/pull/5200#issuecomment-1535185336

   @goiri does this make sense? Will be holding this for you.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18723) Add detail logs if distcp checksum mismatch

2023-05-04 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17719460#comment-17719460
 ] 

ASF GitHub Bot commented on HADOOP-18723:
-

ayushtkn commented on code in PR #5603:
URL: https://github.com/apache/hadoop/pull/5603#discussion_r1185336347


##
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/TestCopyCommitter.java:
##
@@ -569,6 +572,7 @@ private void testCommitWithChecksumMismatch(boolean skipCrc)
 fs, new Path(sourceBase + srcFilename), null,
 fs, new Path(targetBase + srcFilename),
 sourceCurrStatus.getLen()));
+assertThat(log.getOutput(), containsString("Checksum not equal"));

Review Comment:
   wrong import & assertThat.
   This import
   ```
   import static org.assertj.core.api.Assertions.assertThat;
   ```
   
   and it works like this
   ```
   assertThat(log.getOutput()).contains("Checksum not equal");
   ```





> Add detail logs if distcp checksum mismatch
> ---
>
> Key: HADOOP-18723
> URL: https://issues.apache.org/jira/browse/HADOOP-18723
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Janus Chow
>Assignee: Janus Chow
>Priority: Major
>  Labels: pull-request-available
>
> We encountered some checksum mismatch errors during DistCp jobs. It took 
> us some time to figure out that the checksum types were different.
> Adding error logs will help us figure out such problems faster.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ayushtkn commented on a diff in pull request #5603: HADOOP-18723. Add detail logs if distcp checksum mismatch

2023-05-04 Thread via GitHub


ayushtkn commented on code in PR #5603:
URL: https://github.com/apache/hadoop/pull/5603#discussion_r1185336347


##
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/TestCopyCommitter.java:
##
@@ -569,6 +572,7 @@ private void testCommitWithChecksumMismatch(boolean skipCrc)
 fs, new Path(sourceBase + srcFilename), null,
 fs, new Path(targetBase + srcFilename),
 sourceCurrStatus.getLen()));
+assertThat(log.getOutput(), containsString("Checksum not equal"));

Review Comment:
   wrong import & assertThat.
   This import
   ```
   import static org.assertj.core.api.Assertions.assertThat;
   ```
   
   and it works like this
   ```
   assertThat(log.getOutput()).contains("Checksum not equal");
   ```



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5609: YARN-11470. FederationStateStoreFacade Cache Support Guava Cache.

2023-05-04 Thread via GitHub


hadoop-yetus commented on PR #5609:
URL: https://github.com/apache/hadoop/pull/5609#issuecomment-1535182179

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 49s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 53s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 25s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   7m 36s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   6m 42s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   1m 48s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 56s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 54s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +0 :ok: |  spotbugs  |   0m 29s |  |  
branch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site no spotbugs output file 
(spotbugsXml.xml)  |
   | +1 :green_heart: |  shadedclient  |  23m 23s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 49s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   6m 51s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   6m 51s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   6m 40s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |   6m 40s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 38s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 40s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +0 :ok: |  spotbugs  |   0m 25s |  |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-site has no data from spotbugs  |
   | +1 :green_heart: |  shadedclient  |  23m 17s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m  2s |  |  hadoop-yarn-api in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   5m 16s |  |  hadoop-yarn-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   3m 15s |  |  hadoop-yarn-server-common in 
the patch passed.  |
   | +1 :green_heart: |  unit  |   0m 24s |  |  hadoop-yarn-site in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 48s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 163m 19s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5609/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5609 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint 
markdownlint |
   | uname | Linux c0c746febe26 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / e9352f7493ce7119ed687f6509bfacae55977285 |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
 /usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   |  Test Results | 

[jira] [Commented] (HADOOP-18723) Add detail logs if distcp checksum mismatch

2023-05-04 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17719448#comment-17719448
 ] 

ASF GitHub Bot commented on HADOOP-18723:
-

ayushtkn commented on code in PR #5603:
URL: https://github.com/apache/hadoop/pull/5603#discussion_r1185326966


##
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java:
##
@@ -596,6 +596,8 @@ public static CopyMapper.ChecksumComparison 
checksumsAreEqual(
 } else if (sourceChecksum.equals(targetChecksum)) {
   return CopyMapper.ChecksumComparison.TRUE;
 }
+LOG.info("Checksum not equal. Source checksum: {}, target checksum: {}",
+sourceChecksum, targetChecksum);

Review Comment:
   You aren't including the paths for which the checksum mismatch happened. If there 
are thousands of files being copied and a bunch of them log this,
   how would you figure out whose checksum didn't match?





> Add detail logs if distcp checksum mismatch
> ---
>
> Key: HADOOP-18723
> URL: https://issues.apache.org/jira/browse/HADOOP-18723
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Janus Chow
>Assignee: Janus Chow
>Priority: Major
>  Labels: pull-request-available
>
> We encountered some checksum mismatch errors during DistCp jobs. It took 
> us some time to figure out that the checksum types were different.
> Adding error logs will help us figure out such problems faster.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ayushtkn commented on a diff in pull request #5603: HADOOP-18723. Add detail logs if distcp checksum mismatch

2023-05-04 Thread via GitHub


ayushtkn commented on code in PR #5603:
URL: https://github.com/apache/hadoop/pull/5603#discussion_r1185326966


##
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java:
##
@@ -596,6 +596,8 @@ public static CopyMapper.ChecksumComparison 
checksumsAreEqual(
 } else if (sourceChecksum.equals(targetChecksum)) {
   return CopyMapper.ChecksumComparison.TRUE;
 }
+LOG.info("Checksum not equal. Source checksum: {}, target checksum: {}",
+sourceChecksum, targetChecksum);

Review Comment:
   You aren't including the paths for which the checksum mismatch happened. If there 
are thousands of files being copied and a bunch of them log this,
   how would you figure out whose checksum didn't match?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18727) Fix WriteOperations.listMultipartUploads function description

2023-05-04 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17719446#comment-17719446
 ] 

ASF GitHub Bot commented on HADOOP-18727:
-

dongjoon-hyun commented on PR #5613:
URL: https://github.com/apache/hadoop/pull/5613#issuecomment-1535169783

   Thank you, @steveloughran .




> Fix WriteOperations.listMultipartUploads function description
> -
>
> Key: HADOOP-18727
> URL: https://issues.apache.org/jira/browse/HADOOP-18727
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs/s3
>Affects Versions: 3.3.2, 3.3.3, 3.3.4
>Reporter: Dongjoon Hyun
>Priority: Trivial
>  Labels: pull-request-available
>
> {code:java}
>    /**
> -   * Abort multipart uploads under a path: limited to the first
> -   * few hundred.
> -   * @param prefix prefix for uploads to abort
> -   * @return a count of aborts
> -   * @throws IOException trouble; FileNotFoundExceptions are swallowed.
> +   * List in-progress multipart uploads under a path
> +   * @param prefix prefix for uploads to list
> +   * @return a list of in-progress multipart uploads
> +   * @throws IOException on problems
>     */
>    List listMultipartUploads(String prefix) {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] dongjoon-hyun commented on pull request #5613: HADOOP-18727. Fix `WriteOperations.listMultipartUploads` function description

2023-05-04 Thread via GitHub


dongjoon-hyun commented on PR #5613:
URL: https://github.com/apache/hadoop/pull/5613#issuecomment-1535169783

   Thank you, @steveloughran .


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18729) Fix mvnsite on Windows 10

2023-05-04 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17719440#comment-17719440
 ] 

ASF GitHub Bot commented on HADOOP-18729:
-

hadoop-yetus commented on PR #5618:
URL: https://github.com/apache/hadoop/pull/5618#issuecomment-1535156164

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 14s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  46m 31s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 18s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |  15m 42s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  mvnsite  |   1m 31s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  shadedclient  | 106m  0s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 51s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 44s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |  16m 44s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  15m 46s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |  15m 46s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   1m 28s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  shadedclient  |  27m 49s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 23s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 54s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 191m  4s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5618/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5618 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint |
   | uname | Linux 1f8ec356f7d7 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 3b679fac7e905873f8c5783634bde1182d909377 |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
 /usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5618/1/testReport/ |
   | Max. process+thread count | 2427 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5618/1/console |
   | versions | git=2.25.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Fix mvnsite on Windows 10
> -
>
> Key: HADOOP-18729
> URL: 

[GitHub] [hadoop] hadoop-yetus commented on pull request #5618: HADOOP-18729. Fix mvnsite on Windows 10

2023-05-04 Thread via GitHub


hadoop-yetus commented on PR #5618:
URL: https://github.com/apache/hadoop/pull/5618#issuecomment-1535156164

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 14s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  46m 31s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 18s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |  15m 42s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  mvnsite  |   1m 31s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  shadedclient  | 106m  0s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 51s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 44s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |  16m 44s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  15m 46s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |  15m 46s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   1m 28s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  shadedclient  |  27m 49s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 23s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 54s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 191m  4s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5618/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5618 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint |
   | uname | Linux 1f8ec356f7d7 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 3b679fac7e905873f8c5783634bde1182d909377 |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
 /usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5618/1/testReport/ |
   | Max. process+thread count | 2427 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5618/1/console |
   | versions | git=2.25.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5616: YARN-11477. [Federation] MemoryFederationStateStore Support Store ApplicationSubmitData.

2023-05-04 Thread via GitHub


hadoop-yetus commented on PR #5616:
URL: https://github.com/apache/hadoop/pull/5616#issuecomment-1535086637

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 54s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  22m 11s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  19m 36s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   2m 16s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   2m  7s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   1m 15s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 12s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 18s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  6s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   2m 24s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 52s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   0m 52s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m  9s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   2m  9s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m  1s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |   2m  1s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  3s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5616/2/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt)
 |  hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 1 
new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   0m 58s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   2m 18s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m  8s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m 16s |  |  hadoop-yarn-server-common in 
the patch passed.  |
   | +1 :green_heart: |  unit  |   0m 30s |  |  hadoop-yarn-server-router in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 113m 55s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5616/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5616 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 1aa11433a711 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / aa5d2cb24d275f167337a6082c53f56f99c9d327 |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
 /usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5616/2/testReport/ |
   | Max. process+thread count | 620 (vs. ulimit of 5500) |
   | modules | C: 

[jira] [Updated] (HADOOP-18731) Backport HADOOP-18671 to branch-3.2

2023-05-04 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-18731:
-
Target Version/s: 3.2.5  (was: 3.3.6)

> Backport HADOOP-18671 to branch-3.2
> ---
>
> Key: HADOOP-18731
> URL: https://issues.apache.org/jira/browse/HADOOP-18731
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wei-Chiu Chuang
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5616: YARN-11477. [Federation] MemoryFederationStateStore Support Store ApplicationSubmitData.

2023-05-04 Thread via GitHub


hadoop-yetus commented on PR #5616:
URL: https://github.com/apache/hadoop/pull/5616#issuecomment-1535079525

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 39s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  16m 25s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  19m 30s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   2m 20s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   2m  5s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   1m 15s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 15s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 18s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   2m 24s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m  4s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   0m 52s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 11s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   2m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m  0s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |   2m  0s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  2s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5616/3/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server.txt)
 |  hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server: The patch generated 1 
new + 0 unchanged - 0 fixed = 1 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   0m 57s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   2m 20s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m  6s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m  9s |  |  hadoop-yarn-server-common in 
the patch passed.  |
   | +1 :green_heart: |  unit  |   0m 30s |  |  hadoop-yarn-server-router in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 107m 31s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5616/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5616 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 6347918c30c5 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / aa5d2cb24d275f167337a6082c53f56f99c9d327 |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
 /usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5616/3/testReport/ |
   | Max. process+thread count | 566 (vs. ulimit of 5500) |
   | modules | C: 

[jira] [Updated] (HADOOP-18730) Backport HADOOP-18671 to branch-3.3

2023-05-04 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-18730:
-
Target Version/s: 3.3.6

> Backport HADOOP-18671 to branch-3.3
> ---
>
> Key: HADOOP-18730
> URL: https://issues.apache.org/jira/browse/HADOOP-18730
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18731) Backport HADOOP-18671 to branch-3.2

2023-05-04 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-18731:
-
Target Version/s: 3.3.6

> Backport HADOOP-18671 to branch-3.2
> ---
>
> Key: HADOOP-18731
> URL: https://issues.apache.org/jira/browse/HADOOP-18731
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wei-Chiu Chuang
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18730) Backport HADOOP-18671 to branch-3.3

2023-05-04 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HADOOP-18730:


 Summary: Backport HADOOP-18671 to branch-3.3
 Key: HADOOP-18730
 URL: https://issues.apache.org/jira/browse/HADOOP-18730
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Wei-Chiu Chuang






--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18731) Backport HADOOP-18671 to branch-3.2

2023-05-04 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HADOOP-18731:


 Summary: Backport HADOOP-18671 to branch-3.2
 Key: HADOOP-18731
 URL: https://issues.apache.org/jira/browse/HADOOP-18731
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Wei-Chiu Chuang






--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-18730) Backport HADOOP-18671 to branch-3.3

2023-05-04 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HADOOP-18730:


Assignee: Wei-Chiu Chuang

> Backport HADOOP-18671 to branch-3.3
> ---
>
> Key: HADOOP-18730
> URL: https://issues.apache.org/jira/browse/HADOOP-18730
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri merged pull request #5615: HDFS-16998. RBF: Add ops metrics for getSlowDatanodeReport in RouterClientActivity

2023-05-04 Thread via GitHub


goiri merged PR #5615:
URL: https://github.com/apache/hadoop/pull/5615


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 commented on pull request #5403: YARN-11340. [Federation] Improve SQLFederationStateStore DataSource Config.

2023-05-04 Thread via GitHub


slfan1989 commented on PR #5403:
URL: https://github.com/apache/hadoop/pull/5403#issuecomment-1534985923

   @goiri Can you help to review this pr? Thank you very much!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 commented on pull request #5616: YARN-11477. [Federation] MemoryFederationStateStore Support Store ApplicationSubmitData.

2023-05-04 Thread via GitHub


slfan1989 commented on PR #5616:
URL: https://github.com/apache/hadoop/pull/5616#issuecomment-1534983094

   @goiri Can you help to review this pr? Thank you very much!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5552: HDFS-16979. RBF: Add dfsrouter port in hdfsauditlog

2023-05-04 Thread via GitHub


hadoop-yetus commented on PR #5552:
URL: https://github.com/apache/hadoop/pull/5552#issuecomment-1534947516

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 47s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  16m  0s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m  2s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |  15m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   4m  1s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 43s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   3m  2s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 26s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   7m 18s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 44s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 33s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |  16m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  15m 37s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |  15m 37s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 46s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   3m 41s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 56s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   7m 41s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  24m 56s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 29s |  |  hadoop-common in the patch 
passed.  |
   | -1 :x: |  unit  | 229m 50s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5552/11/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | -1 :x: |  unit  |  21m 56s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5552/11/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  3s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 475m 41s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.server.namenode.ha.TestObserverNode |
   |   | hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5552/11/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5552 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux def343e76b56 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 3632f46166aa0fe58e836e9823b126e4c7fe5903 |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 

[jira] [Commented] (HADOOP-18706) The temporary files for disk-block buffer aren't unique enough to recover partial uploads. 

2023-05-04 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17719370#comment-17719370
 ] 

ASF GitHub Bot commented on HADOOP-18706:
-

hadoop-yetus commented on PR #5563:
URL: https://github.com/apache/hadoop/pull/5563#issuecomment-1534918500

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 50s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  39m 45s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 37s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 15s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 34s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 18s | 
[/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5563/9/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 1 new + 2 unchanged - 0 fixed 
= 3 total (was 2)  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 13s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   1m  3s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 20s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 22s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 33s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 100m 32s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5563/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5563 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 1a37e146dbe8 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 4f41cf1d0d6427d59e42b58a94b333b326938b83 |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
 /usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5563/9/testReport/ |
   | Max. process+thread count | 612 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #5563: HADOOP-18706: Improve S3ABlockOutputStream recovery

2023-05-04 Thread via GitHub


hadoop-yetus commented on PR #5563:
URL: https://github.com/apache/hadoop/pull/5563#issuecomment-1534918500

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 50s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  39m 45s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 37s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 15s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 34s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 18s | 
[/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5563/9/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 1 new + 2 unchanged - 0 fixed 
= 3 total (was 2)  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 13s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   1m  3s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 20s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 22s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 33s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 100m 32s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5563/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5563 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 1a37e146dbe8 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 4f41cf1d0d6427d59e42b58a94b333b326938b83 |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
 /usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5563/9/testReport/ |
   | Max. process+thread count | 612 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5563/9/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message 

[GitHub] [hadoop] GauthamBanasandra opened a new pull request, #5618: HADOOP-18729. Fix mvnsite on Windows 10

2023-05-04 Thread via GitHub


GauthamBanasandra opened a new pull request, #5618:
URL: https://github.com/apache/hadoop/pull/5618

   * The mvnsite step invokes the shelldocs script. This fails to run on Windows 
since this script is written in Bash, which Windows can't execute natively.
   * Thus, we must run this script through Bash on Windows.
   
   
   
   ### Description of PR
   * The mvnsite step invokes the shelldocs
     script. This fails to run on Windows
     since this script is written in Bash,
     which Windows can't execute natively.
   * Thus, we must run this script through
     Bash on Windows.
   
   ### How was this patch tested?
   Jenkins CI validation.
   
   ### For code changes:
   
   - [x] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18729) Fix mvnsite on Windows 10

2023-05-04 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-18729:

Labels: pull-request-available  (was: )

> Fix mvnsite on Windows 10
> -
>
> Key: HADOOP-18729
> URL: https://issues.apache.org/jira/browse/HADOOP-18729
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, site
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Critical
>  Labels: pull-request-available
>
> The mvnsite step fails to build on Windows 10 due to the following error -
> [ERROR] Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.3.1:exec 
> (shelldocs) on project hadoop-common: Command execution failed. Cannot run 
> program 
> "C:\hadoop\hadoop-common-project\hadoop-common\..\..\dev-support\bin\shelldocs"
>  (in directory 
> "C:\hadoop\hadoop-common-project\hadoop-common\src\site\markdown"): 
> CreateProcess error=193, %1 is not a valid Win32 application -> [Help 1]
> shelldocs is a bash script which Windows can't execute natively. Thus, we 
> need to run this through bash on Windows 10.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18729) Fix mvnsite on Windows 10

2023-05-04 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17719352#comment-17719352
 ] 

ASF GitHub Bot commented on HADOOP-18729:
-

GauthamBanasandra opened a new pull request, #5618:
URL: https://github.com/apache/hadoop/pull/5618

   * The mvnsite step invokes the shelldocs script. This fails to run on Windows 
since this script is written in Bash, which Windows can't execute natively.
   * Thus, we must run this script through Bash on Windows.
   
   
   
   ### Description of PR
   * The mvnsite step invokes the shelldocs
     script. This fails to run on Windows
     since this script is written in Bash,
     which Windows can't execute natively.
   * Thus, we must run this script through
     Bash on Windows.
   
   ### How was this patch tested?
   Jenkins CI validation.
   
   ### For code changes:
   
   - [x] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> Fix mvnsite on Windows 10
> -
>
> Key: HADOOP-18729
> URL: https://issues.apache.org/jira/browse/HADOOP-18729
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, site
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Critical
>
> The mvnsite step fails to build on Windows 10 due to the following error -
> [ERROR] Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.3.1:exec 
> (shelldocs) on project hadoop-common: Command execution failed. Cannot run 
> program 
> "C:\hadoop\hadoop-common-project\hadoop-common\..\..\dev-support\bin\shelldocs"
>  (in directory 
> "C:\hadoop\hadoop-common-project\hadoop-common\src\site\markdown"): 
> CreateProcess error=193, %1 is not a valid Win32 application -> [Help 1]
> shelldocs is a bash script which Windows can't execute natively. Thus, we 
> need to run this through bash on Windows 10.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18729) Fix mvnsite on Windows 10

2023-05-04 Thread Gautham Banasandra (Jira)
Gautham Banasandra created HADOOP-18729:
---

 Summary: Fix mvnsite on Windows 10
 Key: HADOOP-18729
 URL: https://issues.apache.org/jira/browse/HADOOP-18729
 Project: Hadoop Common
  Issue Type: Bug
  Components: build, site
Affects Versions: 3.4.0
 Environment: Windows 10
Reporter: Gautham Banasandra
Assignee: Gautham Banasandra


The mvnsite step fails to build on Windows 10 due to the following error -

[ERROR] Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.3.1:exec 
(shelldocs) on project hadoop-common: Command execution failed. Cannot run 
program 
"C:\hadoop\hadoop-common-project\hadoop-common\..\..\dev-support\bin\shelldocs" 
(in directory 
"C:\hadoop\hadoop-common-project\hadoop-common\src\site\markdown"): 
CreateProcess error=193, %1 is not a valid Win32 application -> [Help 1]

shelldocs is a bash script which Windows can't execute natively. Thus, we need 
to run this through bash on Windows 10.
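
The real change belongs in the exec-maven-plugin configuration named in the error above 
rather than in Java code, but the idea can be sketched as follows; the script path and 
the OS check are assumptions for illustration only: on Windows the Bash script has to be 
launched through bash instead of being executed directly.

{code:java}
// Minimal sketch of the idea only; the actual fix is in the mvnsite /
// exec-maven-plugin setup, not application code.
import java.io.IOException;

public class RunShelldocs {
  public static void main(String[] args) throws IOException, InterruptedException {
    String script = "dev-support/bin/shelldocs";   // path assumed for illustration
    boolean onWindows =
        System.getProperty("os.name").toLowerCase().contains("windows");
    // Windows cannot execute a Bash script natively (CreateProcess error=193),
    // so the invocation is prefixed with bash there.
    ProcessBuilder pb = onWindows
        ? new ProcessBuilder("bash", script)
        : new ProcessBuilder(script);
    pb.inheritIO();
    System.exit(pb.start().waitFor());
  }
}
{code}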



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18706) The temporary files for disk-block buffer aren't unique enough to recover partial uploads. 

2023-05-04 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17719318#comment-17719318
 ] 

ASF GitHub Bot commented on HADOOP-18706:
-

cbevard1 commented on PR #5563:
URL: https://github.com/apache/hadoop/pull/5563#issuecomment-1534755287

   Done. Integration tests were run in us-east-2 with the following options:
   
   `-Dparallel-tests -DtestsThreadCount=16 -Dscale`




> The temporary files for disk-block buffer aren't unique enough to recover 
> partial uploads. 
> ---
>
> Key: HADOOP-18706
> URL: https://issues.apache.org/jira/browse/HADOOP-18706
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Chris Bevard
>Priority: Minor
>  Labels: pull-request-available
>
> If an application crashes during an S3ABlockOutputStream upload, it's 
> possible to complete the upload if fast.upload.buffer is set to disk by 
> uploading the s3ablock file with putObject as the final part of the multipart 
> upload. If the application has multiple uploads running in parallel though 
> and they're on the same part number when the application fails, then there is 
> no way to determine which file belongs to which object, and recovery of 
> either upload is impossible.
> If the temporary file name for disk buffering included the s3 key, then every 
> partial upload would be recoverable.
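
A minimal Java sketch of that naming idea (not from the patch; the helper names and file 
layout are illustrative assumptions): a disk-buffer file name that embeds a sanitized form 
of the bucket, key and part number can be matched back to its object after a crash.

{code:java}
// Illustrative sketch only: the real block factory lives in the hadoop-aws module;
// this simply shows how a temp file name can carry the destination key safely.
import java.io.File;
import java.io.IOException;

public class RecoverableBlockFiles {

  /** Replace characters that are not safe in local file names (assumption). */
  static String sanitize(String s3Key) {
    return s3Key.replaceAll("[^a-zA-Z0-9._-]", "_");
  }

  /** Create a buffer file whose name identifies bucket, key and part number. */
  static File createBufferFile(File dir, String bucket, String s3Key, int partNumber)
      throws IOException {
    String prefix = "s3ablock-" + sanitize(bucket) + "-" + sanitize(s3Key)
        + "-part" + partNumber + "-";
    // The random suffix added by createTempFile keeps concurrent uploads distinct.
    return File.createTempFile(prefix, ".tmp", dir);
  }

  public static void main(String[] args) throws IOException {
    File f = createBufferFile(new File(System.getProperty("java.io.tmpdir")),
        "my-bucket", "logs/2023/05/04/app.log", 3);
    System.out.println(f.getAbsolutePath());
  }
}
{code}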



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] cbevard1 commented on pull request #5563: HADOOP-18706: Improve S3ABlockOutputStream recovery

2023-05-04 Thread via GitHub


cbevard1 commented on PR #5563:
URL: https://github.com/apache/hadoop/pull/5563#issuecomment-1534755287

   Done. Integration tests were run in us-east-2 with the following options:
   
   `-Dparallel-tests -DtestsThreadCount=16 -Dscale`


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ayushtkn commented on a diff in pull request #5552: HDFS-16979. RBF: Add dfsrouter port in hdfsauditlog

2023-05-04 Thread via GitHub


ayushtkn commented on code in PR #5552:
URL: https://github.com/apache/hadoop/pull/5552#discussion_r1184989929


##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestNamenodeAuditlogWithDFSRouterPort.java:
##
@@ -0,0 +1,91 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.router;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.server.federation.MiniRouterDFSCluster;
+import org.apache.hadoop.hdfs.server.federation.RouterConfigBuilder;
+import org.apache.hadoop.hdfs.server.namenode.FSNamesystem;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.regex.Pattern;
+
+import static 
org.apache.hadoop.fs.CommonConfigurationKeysPublic.HADOOP_CALLER_CONTEXT_ENABLED_KEY;
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_AUDIT_LOG_WITH_REMOTE_PORT_KEY;
+import static org.junit.Assert.assertTrue;
+
+/**
+ * Test that the namenode auditlog logs the dfsrouterPort info when the request 
is from dfsrouter.
+ */
+public class TestNamenodeAuditlogWithDFSRouterPort {
+  private static final Logger LOG = LoggerFactory.getLogger(
+  TestNamenodeAuditlogWithDFSRouterPort.class);
+  private static final Pattern AUDIT_WITH_PORT_PATTERN = Pattern.compile(

Review Comment:
   TestRouterRpc is in hadoop-rbf only; why do you need to add the hdfs-rbf jar to 
hadoop-hdfs?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18688) s3a audit info to include #of items in a DeleteObjects request

2023-05-04 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17719306#comment-17719306
 ] 

Steve Loughran commented on HADOOP-18688:
-

we need to make sure it gets into the referrer header and so into the s3 server 
logs

> s3a audit info to include #of items in a DeleteObjects request
> --
>
> Key: HADOOP-18688
> URL: https://issues.apache.org/jira/browse/HADOOP-18688
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.5
>Reporter: Steve Loughran
>Priority: Major
>
> it would be good to find out how many files were deleted in a DeleteObjects 
> call



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] LiuGuH commented on a diff in pull request #5552: HDFS-16979. RBF: Add dfsrouter port in hdfsauditlog

2023-05-04 Thread via GitHub


LiuGuH commented on code in PR #5552:
URL: https://github.com/apache/hadoop/pull/5552#discussion_r1184952734


##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestNamenodeAuditlogWithDFSRouterPort.java:
##
@@ -0,0 +1,91 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.router;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.server.federation.MiniRouterDFSCluster;
+import org.apache.hadoop.hdfs.server.federation.RouterConfigBuilder;
+import org.apache.hadoop.hdfs.server.namenode.FSNamesystem;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.regex.Pattern;
+
+import static 
org.apache.hadoop.fs.CommonConfigurationKeysPublic.HADOOP_CALLER_CONTEXT_ENABLED_KEY;
+import static 
org.apache.hadoop.hdfs.DFSConfigKeys.DFS_NAMENODE_AUDIT_LOG_WITH_REMOTE_PORT_KEY;
+import static org.junit.Assert.assertTrue;
+
+/**
+ * Test that the namenode auditlog logs the dfsrouterPort info when the request 
is from dfsrouter.
+ */
+public class TestNamenodeAuditlogWithDFSRouterPort {
+  private static final Logger LOG = LoggerFactory.getLogger(
+  TestNamenodeAuditlogWithDFSRouterPort.class);
+  private static final Pattern AUDIT_WITH_PORT_PATTERN = Pattern.compile(

Review Comment:
   In my environment, there is a Maven conflict if I add the hadoop-hdfs-rbf test 
jar as a dependency in hadoop-hdfs. The error detail is { java: Modules 
hadoop-yarn-server-tests and hadoop-yarn-server-resourcemanager must have the 
same 'additional command line parameters' specified because of cyclic 
dependencies between them }. I am confused.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17017) S3A client retries on SSL Auth exceptions triggered by "." bucket names

2023-05-04 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17719296#comment-17719296
 ] 

Steve Loughran commented on HADOOP-17017:
-

looks like an aws sdk message. in fact, looks like a 308 came back from amazon.

# this is completely different, so please create a new jira with full stack 
trace.
# as you are the only person to encounter this, you get the homework of 
debugging it. 
# logging the aws sdk and HTTP requests will get you started there 
https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/troubleshooting_s3a.html#Enabling_low-level_logging
# and cloudstore library's storediag command will print out and try to 
interpret the s3a config https://github.com/steveloughran/cloudstore
# finally, consider searching for the error string in your favourite search tool

once you have found the cause, please submit a PR for the troubleshooting_s3a 
file so the next person to hit it can make progress faster.




> S3A client retries on SSL Auth exceptions triggered by "." bucket names
> ---
>
> Key: HADOOP-17017
> URL: https://issues.apache.org/jira/browse/HADOOP-17017
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.1
>Reporter: Steve Loughran
>Priority: Minor
>
> If you have a "." in bucket names (it's allowed!) then virtual host HTTPS 
> connections fail with a  java.net.ssl exception. Except we retry and the 
> inner cause is wrapped by generic "client exceptions"
> I'm not going to try and be clever about fixing this, but we should
> * make sure that the inner exception is raised up
> * avoid retries
> * document it in the troubleshooting page. 
> * if there is a well known public "." bucket (cloudera has some:)) we can test
> I get a vague suspicion the AWS SDK is retrying too. Not much we can do there.
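
As a side note, not part of this issue's scope: the TLS failure comes from virtual-host 
addressing, where "my.bucket" expands to my.bucket.s3.amazonaws.com and is no longer 
covered by the *.s3.amazonaws.com wildcard certificate. A hedged sketch of the commonly 
used workaround, assuming path-style access is acceptable for the deployment:

{code:java}
// Hedged illustration only; whether path-style access is appropriate depends on
// the deployment, and the bucket name here is a placeholder.
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DottedBucketAccess {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Path-style access keeps the bucket name out of the TLS hostname, avoiding
    // the wildcard-certificate mismatch for names containing ".".
    conf.setBoolean("fs.s3a.path.style.access", true);
    FileSystem fs = FileSystem.get(new URI("s3a://my.bucket/"), conf);
    System.out.println(fs.exists(new Path("/")));
  }
}
{code}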



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18727) Fix WriteOperations.listMultipartUploads function description

2023-05-04 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17719290#comment-17719290
 ] 

ASF GitHub Bot commented on HADOOP-18727:
-

steveloughran merged PR #5613:
URL: https://github.com/apache/hadoop/pull/5613




> Fix WriteOperations.listMultipartUploads function description
> -
>
> Key: HADOOP-18727
> URL: https://issues.apache.org/jira/browse/HADOOP-18727
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs/s3
>Affects Versions: 3.3.2, 3.3.3, 3.3.4
>Reporter: Dongjoon Hyun
>Priority: Trivial
>  Labels: pull-request-available
>
> {code:java}
>    /**
> -   * Abort multipart uploads under a path: limited to the first
> -   * few hundred.
> -   * @param prefix prefix for uploads to abort
> -   * @return a count of aborts
> -   * @throws IOException trouble; FileNotFoundExceptions are swallowed.
> +   * List in-progress multipart uploads under a path
> +   * @param prefix prefix for uploads to list
> +   * @return a list of in-progress multipart uploads
> +   * @throws IOException on problems
>     */
>    List listMultipartUploads(String prefix) {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran merged pull request #5613: HADOOP-18727. Fix `WriteOperations.listMultipartUploads` function description

2023-05-04 Thread via GitHub


steveloughran merged PR #5613:
URL: https://github.com/apache/hadoop/pull/5613


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-18671) Add recoverLease(), setSafeMode(), isFileClosed() APIs to FileSystem

2023-05-04 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-18671:
---

Assignee: Tak-Lon (Stephen) Wu

> Add recoverLease(), setSafeMode(), isFileClosed() APIs to FileSystem
> 
>
> Key: HADOOP-18671
> URL: https://issues.apache.org/jira/browse/HADOOP-18671
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Wei-Chiu Chuang
>Assignee: Tak-Lon (Stephen) Wu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> We are in the midst of enabling HBase and Solr to run on Ozone.
> An obstacle is that HBase relies heavily on HDFS APIs and semantics for its 
> Write Ahead Log (WAL) file (similarly, for Solr's transaction log). We 
> propose to push up these HDFS APIs, i.e. recoverLease(), setSafeMode(), 
> isFileClosed() to FileSystem abstraction so that HBase and other applications 
> do not need to take on Ozone dependency at compile time. This work will 
> (hopefully) enable HBase to run on other storage system implementations in 
> the future.
> There are other HDFS features that HBase uses, including hedged read and 
> favored nodes. Those are FS-specific optimizations and are not critical to 
> enable HBase on Ozone.
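
For context, a minimal sketch of the HDFS-specific pattern the proposal wants to 
generalise; the helper and its use of a WAL path are assumptions, while recoverLease() 
and isFileClosed() are existing DistributedFileSystem calls.

{code:java}
// Today this only works when the underlying FileSystem is HDFS; the proposal is
// FileSystem-level equivalents so HBase/Solr need no HDFS or Ozone compile-time
// dependency.
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public final class WalLeaseRecovery {

  /** Try to recover the lease on a WAL file and report whether it is closed. */
  static boolean recoverAndCheckClosed(FileSystem fs, Path wal) throws IOException {
    if (fs instanceof DistributedFileSystem) {
      DistributedFileSystem dfs = (DistributedFileSystem) fs;
      boolean recovered = dfs.recoverLease(wal);   // HDFS-only API today
      return recovered && dfs.isFileClosed(wal);   // HDFS-only API today
    }
    // Non-HDFS stores (e.g. Ozone) cannot be reached through this code path,
    // which is the obstacle described above.
    return false;
  }

  private WalLeaseRecovery() {
  }
}
{code}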



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5617: YARN-11487. WebAppProxyServlet log output request ip

2023-05-04 Thread via GitHub


hadoop-yetus commented on PR #5617:
URL: https://github.com/apache/hadoop/pull/5617#issuecomment-1534576078

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 38s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  36m 25s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   0m 56s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 57s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 15s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   0m 43s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 53s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 12s |  |  hadoop-yarn-server-web-proxy in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  88m 25s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5617/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5617 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 59d3faa8c4d6 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6ad0e2883489f1d8063e5e2f8f574ebc8454943a |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
 /usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5617/2/testReport/ |
   | Max. process+thread count | 695 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy 
U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy 
|
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5617/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an 

[GitHub] [hadoop] hadoop-yetus commented on pull request #5552: HDFS-16979. RBF: Add dfsrouter port in hdfsauditlog

2023-05-04 Thread via GitHub


hadoop-yetus commented on PR #5552:
URL: https://github.com/apache/hadoop/pull/5552#issuecomment-1534552305

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  16m  3s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 49s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 17s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |  15m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   4m  4s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 44s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   3m  4s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   7m 20s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m 26s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 33s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |  16m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  15m 40s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |  15m 40s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 54s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   3m 41s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 56s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   7m 43s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  24m 25s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 20s |  |  hadoop-common in the patch 
passed.  |
   | -1 :x: |  unit  | 227m 52s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5552/10/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | -1 :x: |  unit  |  22m 24s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5552/10/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  2s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 473m 21s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.TestRollingUpgrade |
   |   | 
hadoop.hdfs.server.federation.router.TestRouterRPCMultipleDestinationMountTableResolver
 |
   |   | hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5552/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5552 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 48816cbfe49a 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 09f0212350d04aa0742ea2a29eb7473c2dd55d26 |
   | Default Java | Private 

[GitHub] [hadoop] ayushtkn commented on a diff in pull request #5552: HDFS-16979. RBF: Add dfsrouter port in hdfsauditlog

2023-05-04 Thread via GitHub


ayushtkn commented on code in PR #5552:
URL: https://github.com/apache/hadoop/pull/5552#discussion_r1184829824


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/CallerContext.java:
##
@@ -50,7 +50,8 @@ public final class CallerContext {
   public static final String CLIENT_ID_STR = "clientId";
   public static final String CLIENT_CALL_ID_STR = "clientCallId";
   public static final String REAL_USER_STR = "realUser";
-
+  public static final String IS_DFSROUTER = "isDfsRouter";
+  public static final String DFSROUTER_PORT_STR = "dfsRouterPort";

Review Comment:
   There is no point having dfsRouterPort or isDfsRouter. The code is very generic, and adding Router-specific code should be the last thing to do in the Namenode; the Namenode doesn't care whether a Router, or any other client or service, is the one proxying.
   
   Change it to a generic proxy client port; nothing here should be Router-specific in the Namenode.
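   
   A rough sketch of that suggestion (the constant name below is hypothetical, not the actual patch), mirroring the existing clientPort append in FSNamesystem:
   
   ```java
   // Hypothetical generic constant; the patch may choose a different name.
   public static final String PROXY_CLIENT_PORT_STR = "proxyClientPort";
   
   // Appended in the NameNode the same way CLIENT_PORT_STR already is,
   // regardless of what kind of service is doing the proxying.
   CallerContext.setCurrent(
       new CallerContext.Builder(origContext, contextFieldSeparator)
           .append(CallerContext.PROXY_CLIENT_PORT_STR,
               String.valueOf(Server.getRemotePort()))
           .setSignature(origSignature)
           .build());
   ```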



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] brumi1024 merged pull request #5614: YARN-11463. Node Labels root directory creation doesn't have a retry logic - addendum

2023-05-04 Thread via GitHub


brumi1024 merged PR #5614:
URL: https://github.com/apache/hadoop/pull/5614


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5614: YARN-11463. Node Labels root directory creation doesn't have a retry logic - addendum

2023-05-04 Thread via GitHub


hadoop-yetus commented on PR #5614:
URL: https://github.com/apache/hadoop/pull/5614#issuecomment-1534437253

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 35s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  35m 16s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 45s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 46s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 50s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m  9s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 34s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 42s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   5m 24s |  |  hadoop-yarn-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  96m 36s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5614/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5614 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 8d07886868e5 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 7bc1a81307731d9496d5e00f453780739d17ef94 |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
 /usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5614/2/testReport/ |
   | Max. process+thread count | 763 (vs. ulimit of 5500) |
   | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5614/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: 

[GitHub] [hadoop] hadoop-yetus commented on pull request #5617: YARN-11487. WebAppProxyServlet log output request ip

2023-05-04 Thread via GitHub


hadoop-yetus commented on PR #5617:
URL: https://github.com/apache/hadoop/pull/5617#issuecomment-1534412516

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 36s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |  32m  0s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5617/1/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   0m 55s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 56s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 18s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 14s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   0m 44s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 54s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 12s |  |  hadoop-yarn-server-web-proxy in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  85m  1s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5617/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5617 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux d71268c20f4e 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / f30ac254bbfe6b92d4b77024757cb5067b6e9b43 |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
 /usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5617/1/testReport/ |
   | Max. process+thread count | 556 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy 
U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy 
|
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5617/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered 

[jira] [Updated] (HADOOP-18687) Remove unnecessary dependency on json-smart

2023-05-04 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-18687:

Hadoop Flags: Incompatible change
Release Note: json-smart is no longer a dependency of the hadoop-auth module 
(it is not required), so it is no longer exported transitively as a dependency 
or included in Hadoop releases. If application code requires it on the 
classpath, a version must be added to the classpath explicitly; you get to 
choose which one.

> Remove unnecessary dependency on json-smart
> ---
>
> Key: HADOOP-18687
> URL: https://issues.apache.org/jira/browse/HADOOP-18687
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: auth
>Affects Versions: 3.4.0, 3.3.5
>Reporter: Michiel de Jong
>Assignee: Michiel de Jong
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> hadoop-auth has a dependency on net.minidev:json-smart 2.4.7, but this 
> dependency is never used.
> This dependency was originally included because the transitive dependency 
> that nimbus-jose-jwt had did not work properly (see 
> https://issues.apache.org/jira/browse/HADOOP-14903). Since version 9.* 
> nimbus-jose-jwt is using its own shaded version of json-smart, so the version 
> declared in hadoop-auth is never actually used.
> json-smart 2.4.7 shows up in CVE scans for CVE-2023-1370. It is still used as 
> a transitive dependency in hadoop-hdfs



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] cxzl25 opened a new pull request, #5617: YARN-11487. WebAppProxyServlet log output request ip

2023-05-04 Thread via GitHub


cxzl25 opened a new pull request, #5617:
URL: https://github.com/apache/hadoop/pull/5617

   ### Description of PR
   Output the requester's IP address in the `WebAppProxyServlet` log (see the sketch below).
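   
   A minimal sketch of the idea only, not the actual change; `LOG` is assumed to be the servlet's existing slf4j logger:
   
   ```java
   import javax.servlet.http.HttpServletRequest;
   
   // Sketch: include the remote address when the proxy servlet logs a request.
   private void logRemoteCaller(HttpServletRequest req, String target) {
     LOG.info("Proxying request for {} from remote address {}",
         target, req.getRemoteAddr());
   }
   ```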
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5616: YARN-11477. [Federation] MemoryFederationStateStore Support Store ApplicationSubmitData.

2023-05-04 Thread via GitHub


hadoop-yetus commented on PR #5616:
URL: https://github.com/apache/hadoop/pull/5616#issuecomment-1534246578

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 57s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m 25s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   2m 46s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   2m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  checkstyle  |   1m 16s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 13s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 15s |  |  trunk passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  1s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   2m 25s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 32s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m  9s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   2m  9s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 59s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  javac  |   1m 59s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  3s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 58s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  the patch passed with JDK 
Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09  |
   | +1 :green_heart: |  spotbugs  |   2m 17s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m  0s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m 16s |  |  hadoop-yarn-server-common in 
the patch passed.  |
   | +1 :green_heart: |  unit  |   0m 29s |  |  hadoop-yarn-server-router in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 112m 20s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5616/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5616 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 52bacff8d86b 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 45ca8cb7b38ed6ae6aefe0709fda12f52c596bee |
   | Default Java | Private Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.18+10-post-Ubuntu-0ubuntu120.04.1
 /usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u362-ga-0ubuntu1~20.04.1-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5616/1/testReport/ |
   | Max. process+thread count | 555 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server |
   | 

[GitHub] [hadoop] LiuGuH commented on a diff in pull request #5552: HDFS-16979. RBF: Add dfsrouter port in hdfsauditlog

2023-05-04 Thread via GitHub


LiuGuH commented on code in PR #5552:
URL: https://github.com/apache/hadoop/pull/5552#discussion_r1184664573


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java:
##
@@ -453,15 +453,15 @@ private void logAuditEvent(boolean succeeded,
 
   private void appendClientPortToCallerContextIfAbsent() {
 final CallerContext ctx = CallerContext.getCurrent();
-if (isClientPortInfoAbsent(ctx)) {
-  String origContext = ctx == null ? null : ctx.getContext();
-  byte[] origSignature = ctx == null ? null : ctx.getSignature();
-  CallerContext.setCurrent(
-  new CallerContext.Builder(origContext, contextFieldSeparator)
-  .append(CallerContext.CLIENT_PORT_STR, 
String.valueOf(Server.getRemotePort()))
-  .setSignature(origSignature)
-  .build());
-}
+String origContext = ctx == null ? null : ctx.getContext();
+byte[] origSignature = ctx == null ? null : ctx.getSignature();
+String clientPort = isClientPortInfoAbsent(ctx) ? 
CallerContext.CLIENT_PORT_STR :

Review Comment:
   I have added it via isDfsRouter now.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17017) S3A client retries on SSL Auth exceptions triggered by "." bucket names

2023-05-04 Thread Smith Cruise (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17719178#comment-17719178
 ] 

Smith Cruise commented on HADOOP-17017:
---

When I set

<property>
  <name>fs.s3a.path.style.access</name>
  <value>true</value>
  <description>Enable S3 path style access ie disabling the default virtual
    hosting behaviour. Useful for S3A-compliant storage providers as it removes
    the need to set up DNS for virtual hosting.</description>
</property>

it returns a new error: Unable to parse ExceptionName: PermanentRedirect Message: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint.: file = s3://test.name.with.dot/another.test/foo.parquet

Is there any progress on solving this problem?
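
A hedged configuration sketch (the endpoint value below is purely illustrative): that PermanentRedirect message usually means the request is reaching the wrong regional endpoint, so pinning fs.s3a.endpoint to the bucket's actual region, alongside path-style access, is the usual workaround while the dot-bucket retry behaviour itself remains open.

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

// Sketch only; substitute the bucket's real regional endpoint.
Configuration conf = new Configuration();
conf.setBoolean("fs.s3a.path.style.access", true);
conf.set("fs.s3a.endpoint", "s3.eu-west-1.amazonaws.com");
FileSystem fs = FileSystem.get(URI.create("s3a://test.name.with.dot/"), conf);
{code}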

> S3A client retries on SSL Auth exceptions triggered by "." bucket names
> ---
>
> Key: HADOOP-17017
> URL: https://issues.apache.org/jira/browse/HADOOP-17017
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.1
>Reporter: Steve Loughran
>Priority: Minor
>
> If you have a "." in bucket names (it's allowed!) then virtual host HTTPS 
> connections fail with a  java.net.ssl exception. Except we retry and the 
> inner cause is wrapped by generic "client exceptions"
> I'm not going to try and be clever about fixing this, but we should
> * make sure that the inner exception is raised up
> * avoid retries
> * document it in the troubleshooting page. 
> * if there is a well known public "." bucket (cloudera has some:)) we can test
> I get a vague suspicion the AWS SDK is retrying too. Not much we can do there.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org