[jira] [Work logged] (HADOOP-16965) Introduce StreamContext for Abfs Input and Output streams.

2022-04-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16965?focusedWorklogId=756833&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-756833
 ]

ASF GitHub Bot logged work on HADOOP-16965:
---

Author: ASF GitHub Bot
Created on: 14/Apr/22 04:58
Start Date: 14/Apr/22 04:58
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4171:
URL: https://github.com/apache/hadoop/pull/4171#issuecomment-1098709796

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |  11m 18s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ branch-2.10 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  17m 26s |  |  branch-2.10 passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  branch-2.10 passed with JDK 
Azul Systems, Inc.-1.7.0_262-b10  |
   | +1 :green_heart: |  compile  |   0m 25s |  |  branch-2.10 passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  |  branch-2.10 passed  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  |  branch-2.10 passed  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  branch-2.10 passed with JDK 
Azul Systems, Inc.-1.7.0_262-b10  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  branch-2.10 passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 15s |  |  branch-2.10 passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK 
Azul Systems, Inc.-1.7.0_262-b10  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07  |
   | +1 :green_heart: |  javac  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 13s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Azul Systems, Inc.-1.7.0_262-b10  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07  |
   | +1 :green_heart: |  spotbugs  |   0m 56s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   1m 13s | 
[/patch-unit-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4171/2/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt)
 |  hadoop-azure in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 25s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  40m 49s |  |  |
   
   
   | Reason | Tests |
   |---------:|:-------------|
   | Failed junit tests | hadoop.fs.azure.TestBlobMetadata |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemConcurrency |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemMocked |
   |   | hadoop.fs.azure.TestWasbFsck |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemContractMocked |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemFileNameCheck |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked |
   |   | hadoop.fs.azure.TestOutOfBandAzureBlobOperations |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4171/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4171 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux b688739c1292 4.15.0-163-generic #171-Ubuntu SMP Fri Nov 5 
11:55:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-2.10 / 2ddf4583de6e3d75be3b46274703c2d1d6ed59df |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/zulu-7-amd64:Azul Systems, 
Inc.-1.7.0_262-b10 

[GitHub] [hadoop] hadoop-yetus commented on pull request #4171: HADOOP-16965. Refactor abfs stream configuration. (#1956)

2022-04-13 Thread GitBox


hadoop-yetus commented on PR #4171:
URL: https://github.com/apache/hadoop/pull/4171#issuecomment-1098709796

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |  11m 18s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ branch-2.10 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  17m 26s |  |  branch-2.10 passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  branch-2.10 passed with JDK 
Azul Systems, Inc.-1.7.0_262-b10  |
   | +1 :green_heart: |  compile  |   0m 25s |  |  branch-2.10 passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  |  branch-2.10 passed  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  |  branch-2.10 passed  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  branch-2.10 passed with JDK 
Azul Systems, Inc.-1.7.0_262-b10  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  branch-2.10 passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 15s |  |  branch-2.10 passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK 
Azul Systems, Inc.-1.7.0_262-b10  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07  |
   | +1 :green_heart: |  javac  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 13s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Azul Systems, Inc.-1.7.0_262-b10  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07  |
   | +1 :green_heart: |  spotbugs  |   0m 56s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   1m 13s | 
[/patch-unit-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4171/2/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt)
 |  hadoop-azure in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 25s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  40m 49s |  |  |
   
   
   | Reason | Tests |
   |---------:|:-------------|
   | Failed junit tests | hadoop.fs.azure.TestBlobMetadata |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemConcurrency |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemMocked |
   |   | hadoop.fs.azure.TestWasbFsck |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemContractMocked |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemFileNameCheck |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked |
   |   | hadoop.fs.azure.TestOutOfBandAzureBlobOperations |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4171/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4171 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux b688739c1292 4.15.0-163-generic #171-Ubuntu SMP Fri Nov 5 
11:55:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-2.10 / 2ddf4583de6e3d75be3b46274703c2d1d6ed59df |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/zulu-7-amd64:Azul Systems, 
Inc.-1.7.0_262-b10 /usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4171/2/testReport/ |
   | Max. process+thread count | 263 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4171/2/console |
   | versions 

[jira] [Commented] (HADOOP-15513) Add additional test cases to cover some corner cases for FileUtil#symlink

2022-04-13 Thread Masatake Iwasaki (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17522052#comment-17522052
 ] 

Masatake Iwasaki commented on HADOOP-15513:
---

Cherry-picked this to branch-2.10 to ease backporting of other patches.

> Add additional test cases to cover some corner cases for FileUtil#symlink
> -
>
> Key: HADOOP-15513
> URL: https://issues.apache.org/jira/browse/HADOOP-15513
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Fix For: 3.2.0, 2.10.2
>
> Attachments: HADOOP-15513.v1.patch, HADOOP-15513.v2.patch
>
>
> Add additional test cases to cover some corner cases for FileUtil#symlink.
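The corner cases referenced above can be illustrated with a self-contained sketch. The `symlink()` helper below is an illustrative stand-in that mimics the 0-on-success / non-zero-on-failure return convention of `FileUtil#symlink`; it uses `java.nio` directly and is not Hadoop's actual implementation.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of symlink corner cases of the kind HADOOP-15513 adds tests for.
public class SymlinkCornerCases {

    // Illustrative stand-in for FileUtil#symlink's return convention.
    static int symlink(Path target, Path link) {
        try {
            Files.createSymbolicLink(link, target);
            return 0;      // success
        } catch (IOException | UnsupportedOperationException e) {
            return 1;      // failure (e.g. the link already exists)
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("symlink");
        Path target = Files.createFile(dir.resolve("target.txt"));

        // Normal case: a fresh link is created and reports success.
        Path link = dir.resolve("link");
        assert symlink(target, link) == 0;
        assert Files.isSymbolicLink(link);

        // Corner case: creating a link over an existing one fails.
        assert symlink(target, link) != 0;

        // Corner case: a dangling link to a missing target is still
        // created, but following it finds nothing.
        Path dangling = dir.resolve("dangling");
        assert symlink(dir.resolve("missing"), dangling) == 0;
        assert Files.isSymbolicLink(dangling);
        assert !Files.exists(dangling);   // exists() follows the link

        System.out.println("symlink corner cases behaved as expected");
    }
}
```

Run with `java -ea SymlinkCornerCases.java` so the assertions are enabled.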



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15513) Add additional test cases to cover some corner cases for FileUtil#symlink

2022-04-13 Thread Masatake Iwasaki (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-15513:
--
Fix Version/s: 2.10.2

> Add additional test cases to cover some corner cases for FileUtil#symlink
> -
>
> Key: HADOOP-15513
> URL: https://issues.apache.org/jira/browse/HADOOP-15513
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Major
> Fix For: 3.2.0, 2.10.2
>
> Attachments: HADOOP-15513.v1.patch, HADOOP-15513.v2.patch
>
>
> Add additional test cases to cover some corner cases for FileUtil#symlink.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-16965) Introduce StreamContext for Abfs Input and Output streams.

2022-04-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16965?focusedWorklogId=756831&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-756831
 ]

ASF GitHub Bot logged work on HADOOP-16965:
---

Author: ASF GitHub Bot
Created on: 14/Apr/22 04:12
Start Date: 14/Apr/22 04:12
Worklog Time Spent: 10m 
  Work Description: arjun4084346 commented on PR #4171:
URL: https://github.com/apache/hadoop/pull/4171#issuecomment-1098691154

   Hi @mukund-thakur, this is my first pull request in Apache Hadoop; thank 
you for the review. I am trying to backport the fix for 
https://issues.apache.org/jira/browse/HADOOP-17215 
(https://github.com/apache/hadoop/pull/2246). To cleanly cherry-pick that 
commit from branch-3.3 to branch-2.10, I need to cherry-pick several other 
commits first. The commit being picked in this PR is one of those commits.
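The backport pattern described above can be sketched with a toy repository: a fix depends on an earlier prerequisite commit, so the prerequisite is cherry-picked onto the release branch before the fix itself. All branch and commit names below are illustrative, not the actual HADOOP-17215 history.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b trunk .
git config user.email dev@example.com
git config user.name dev

echo v1 > file.txt
git add file.txt
git commit -qm "base"
git branch branch-2.10                 # release branch forks here

echo refactor >> file.txt
git commit -aqm "prerequisite refactor"
prereq=$(git rev-parse HEAD)

echo fix >> file.txt
git commit -aqm "the fix to backport"
fix=$(git rev-parse HEAD)

# Picking the fix alone would conflict (its context lines mention the
# refactor), so pick the prerequisite first, then the fix; -x records
# the original hashes in the backported commit messages.
git checkout -q branch-2.10
git cherry-pick -x "$prereq" "$fix"
grep fix file.txt        # prints "fix"
```

The `-x` flag makes the provenance of each backported commit visible in `git log`, which helps reviewers of a backport PR like this one.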




Issue Time Tracking
---

Worklog Id: (was: 756831)
Time Spent: 50m  (was: 40m)

> Introduce StreamContext for Abfs Input and Output streams.
> --
>
> Key: HADOOP-16965
> URL: https://issues.apache.org/jira/browse/HADOOP-16965
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The number of configurations keeps growing in AbfsOutputStream and 
> AbfsInputStream as we keep adding new features. It is time to refactor the 
> configurations into a separate class like StreamContext and pass that 
> around. This will improve the readability of the code and reduce 
> cherry-pick/backport pain. 
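The refactoring the description calls for is the classic parameter-object pattern. A minimal illustration follows; the class and field names are simplified stand-ins, not the real Abfs stream classes.

```java
// Sketch of the "stream context" idea: related settings move into one
// context object that is passed as a unit, so a growing feature set no
// longer grows every constructor signature across branches.
public class StreamContextSketch {

    static final class StreamContext {
        private final int writeBufferSize;
        private final boolean flushEnabled;

        StreamContext(int writeBufferSize, boolean flushEnabled) {
            this.writeBufferSize = writeBufferSize;
            this.flushEnabled = flushEnabled;
        }
        int getWriteBufferSize() { return writeBufferSize; }
        boolean isFlushEnabled() { return flushEnabled; }
    }

    static final class OutputStreamLike {
        private final StreamContext context;

        // Adding a new setting now changes only StreamContext; this
        // constructor stays stable, keeping cherry-picks small.
        OutputStreamLike(StreamContext context) { this.context = context; }

        int bufferSize() { return context.getWriteBufferSize(); }
        boolean flushEnabled() { return context.isFlushEnabled(); }
    }

    public static void main(String[] args) {
        StreamContext ctx = new StreamContext(8 * 1024 * 1024, true);
        OutputStreamLike out = new OutputStreamLike(ctx);
        assert out.bufferSize() == 8 * 1024 * 1024;
        assert out.flushEnabled();
        System.out.println("buffer=" + out.bufferSize());
    }
}
```

The same motivation (one stable constructor, many settings) is why the backport chain in PR #4171 needs this commit before later ones can apply cleanly.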



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] arjun4084346 commented on pull request #4171: HADOOP-16965. Refactor abfs stream configuration. (#1956)

2022-04-13 Thread GitBox


arjun4084346 commented on PR #4171:
URL: https://github.com/apache/hadoop/pull/4171#issuecomment-1098691154

   Hi @mukund-thakur, this is my first pull request in Apache Hadoop; thank 
you for the review. I am trying to backport the fix for 
https://issues.apache.org/jira/browse/HADOOP-17215 
(https://github.com/apache/hadoop/pull/2246). To cleanly cherry-pick that 
commit from branch-3.3 to branch-2.10, I need to cherry-pick several other 
commits first. The commit being picked in this PR is one of those commits.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #4170: HDFS-16540 Data locality is lost when DataNode pod restarts in kubern…

2022-04-13 Thread GitBox


hadoop-yetus commented on PR #4170:
URL: https://github.com/apache/hadoop/pull/4170#issuecomment-1098680455

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |  17m  3s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  41m 35s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 31s |  |  trunk passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m  5s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 31s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  |  trunk passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 40s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m 14s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   1m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  the patch passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 32s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  26m  6s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 341m 40s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 41s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 474m  5s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4170/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4170 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 98cf76256475 4.15.0-162-generic #170-Ubuntu SMP Mon Oct 18 
11:38:05 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 1316ff0eada1e29dec8ca56ab266c9bcbe60051c |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4170/1/testReport/ |
   | Max. process+thread count | 2175 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4170/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hadoop] liubingxing commented on pull request #4167: HDFS-16538. EC decoding failed due to not enough valid inputs

2022-04-13 Thread GitBox


liubingxing commented on PR #4167:
URL: https://github.com/apache/hadoop/pull/4167#issuecomment-1098678251

   I will add a UT later


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] cndaimin commented on pull request #4077: HDFS-16509. Fix decommission UnsupportedOperationException

2022-04-13 Thread GitBox


cndaimin commented on PR #4077:
URL: https://github.com/apache/hadoop/pull/4077#issuecomment-1098677337

   @Hexiaoqiao I think it would be better to backport. Thanks @Hexiaoqiao 
@jojochuang @tomscut 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] leosunli commented on pull request #4158: HDFS-16535. SlotReleaser should reuse the domain socket based on socket paths

2022-04-13 Thread GitBox


leosunli commented on PR #4158:
URL: https://github.com/apache/hadoop/pull/4158#issuecomment-1098672455

   @jojochuang thanks for the reminder; I will review this PR.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18104) Add configs to configure minSeekForVectorReads and maxReadSizeForVectorReads

2022-04-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18104?focusedWorklogId=756821&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-756821
 ]

ASF GitHub Bot logged work on HADOOP-18104:
---

Author: ASF GitHub Bot
Created on: 14/Apr/22 03:25
Start Date: 14/Apr/22 03:25
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #3964:
URL: https://github.com/apache/hadoop/pull/3964#issuecomment-1098671565

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |  12m 13s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ feature-vectored-io Compile Tests _ |
   | +0 :ok: |  mvndep  |  16m 17s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m 53s |  |  feature-vectored-io 
passed  |
   | +1 :green_heart: |  compile  |  22m 29s |  |  feature-vectored-io passed 
with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  19m 43s |  |  feature-vectored-io passed 
with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   3m 52s |  |  feature-vectored-io 
passed  |
   | +1 :green_heart: |  mvnsite  |   2m 41s |  |  feature-vectored-io passed  |
   | +1 :green_heart: |  javadoc  |   1m 58s |  |  feature-vectored-io passed 
with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 32s |  |  feature-vectored-io passed 
with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   4m  4s |  |  feature-vectored-io passed  
|
   | +1 :green_heart: |  shadedclient  |  20m 54s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 40s |  |  the patch passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |  21m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 39s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  19m 39s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   3m 37s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3964/4/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 2 new + 9 unchanged - 0 fixed = 11 total (was 9)  
|
   | +1 :green_heart: |  mvnsite  |   2m 37s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 50s |  |  the patch passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   4m  7s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 19s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  25m  7s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 35s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   1m  1s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 242m 10s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3964/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3964 |
   | Optional Tests | dupname asflicense mvnsite codespell markdownlint compile 
javac javadoc mvninstall unit shadedclient spotbugs checkstyle |
   | uname | Linux f148d98dabd6 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | feature-vectored-io / 
dfe002fe8f3f31532f99cc2aa827d3c0c821f830 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #3964: HADOOP-18104: S3A: Add configs to configure minSeekForVectorReads and maxReadSizeForVectorReads

2022-04-13 Thread GitBox


hadoop-yetus commented on PR #3964:
URL: https://github.com/apache/hadoop/pull/3964#issuecomment-1098671565

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |  12m 13s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ feature-vectored-io Compile Tests _ |
   | +0 :ok: |  mvndep  |  16m 17s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m 53s |  |  feature-vectored-io 
passed  |
   | +1 :green_heart: |  compile  |  22m 29s |  |  feature-vectored-io passed 
with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  19m 43s |  |  feature-vectored-io passed 
with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   3m 52s |  |  feature-vectored-io 
passed  |
   | +1 :green_heart: |  mvnsite  |   2m 41s |  |  feature-vectored-io passed  |
   | +1 :green_heart: |  javadoc  |   1m 58s |  |  feature-vectored-io passed 
with JDK Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 32s |  |  feature-vectored-io passed 
with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   4m  4s |  |  feature-vectored-io passed  
|
   | +1 :green_heart: |  shadedclient  |  20m 54s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 40s |  |  the patch passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |  21m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 39s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  19m 39s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   3m 37s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3964/4/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 2 new + 9 unchanged - 0 fixed = 11 total (was 9)  
|
   | +1 :green_heart: |  mvnsite  |   2m 37s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 50s |  |  the patch passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   4m  7s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 19s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  25m  7s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 35s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   1m  1s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 242m 10s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3964/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3964 |
   | Optional Tests | dupname asflicense mvnsite codespell markdownlint compile 
javac javadoc mvninstall unit shadedclient spotbugs checkstyle |
   | uname | Linux f148d98dabd6 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | feature-vectored-io / 
dfe002fe8f3f31532f99cc2aa827d3c0c821f830 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3964/4/testReport/ |
   | Max. process+thread count | 1261 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 

[GitHub] [hadoop] hadoop-yetus commented on pull request #4127: HDFS-13522. RBF: Support observer node from Router-Based Federation

2022-04-13 Thread GitBox


hadoop-yetus commented on PR #4127:
URL: https://github.com/apache/hadoop/pull/4127#issuecomment-1098665361

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 50s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 12 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  16m 14s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  28m 26s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  24m 50s |  |  trunk passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  21m 14s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   4m  4s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   5m 12s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   3m 56s |  |  trunk passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   5m  9s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |  10m 20s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m 23s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |  24m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  26m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |  26m 20s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   4m  2s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4127/6/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 6 new + 340 unchanged - 1 fixed = 346 total (was 
341)  |
   | +1 :green_heart: |  mvnsite  |   5m 22s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  3s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   3m 52s |  |  the patch passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   5m 12s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |  10m 55s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  24m 39s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 46s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 29s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 333m 37s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  32m 58s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  1s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 643m 58s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4127/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4127 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux 13de7b65785b 4.15.0-163-generic #171-Ubuntu SMP Fri Nov 5 
11:55:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / fdea891212f0c2d57b950c87695b62964123d219 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4127/6/testReport/ |
   | Max. process+thread count | 3055 (vs. ulimit of 5500) |
   | modules | C: 

[GitHub] [hadoop] Hexiaoqiao commented on pull request #4077: HDFS-16509. Fix decommission UnsupportedOperationException

2022-04-13 Thread GitBox


Hexiaoqiao commented on PR #4077:
URL: https://github.com/apache/hadoop/pull/4077#issuecomment-1098665168

   Committed to trunk. Thanks @cndaimin for your contributions. Thanks 
@jojochuang @tomscut for your reviews.
   BTW, @cndaimin would you mind checking whether we need to backport to 
branch-3.3/branch-3.2? Thanks.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] Hexiaoqiao merged pull request #4077: HDFS-16509. Fix decommission UnsupportedOperationException

2022-04-13 Thread GitBox


Hexiaoqiao merged PR #4077:
URL: https://github.com/apache/hadoop/pull/4077





[jira] [Commented] (HADOOP-17718) Explicitly set locale in the Dockerfile

2022-04-13 Thread Masatake Iwasaki (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17522030#comment-17522030
 ] 

Masatake Iwasaki commented on HADOOP-17718:
---

cherry-picked this for releasing 2.10.2.

> Explicitly set locale in the Dockerfile
> ---
>
> Key: HADOOP-17718
> URL: https://issues.apache.org/jira/browse/HADOOP-17718
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.1
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0, 2.10.2, 3.2.3
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> When producing the RC bits for 3.3.1, the releasedocmaker step failed.
> {noformat}
> [INFO] --- exec-maven-plugin:1.3.1:exec (releasedocs) @ hadoop-common ---
> Traceback (most recent call last):
>   File 
> "/build/source/patchprocess/apache-yetus-0.13.0/bin/../lib/releasedocmaker/releasedocmaker.py",
>  line 25, in 
> releasedocmaker.main()
>   File 
> "/build/source/patchprocess/apache-yetus-0.13.0/lib/releasedocmaker/releasedocmaker/__init__.py",
>  line 979, in main
> JIRA_BASE_URL)
>   File 
> "/build/source/patchprocess/apache-yetus-0.13.0/lib/releasedocmaker/releasedocmaker/utils.py",
>  line 199, in write_list
> self.write_key_raw(jira.get_project(), line)
>   File 
> "/build/source/patchprocess/apache-yetus-0.13.0/lib/releasedocmaker/releasedocmaker/utils.py",
>  line 170, in write_key_raw
> self.base.write(input_string)
> UnicodeEncodeError: 'ascii' codec can't encode character '\xdc' in position 
> 71: ordinal not in range(128)
> {noformat}
> It turns out that if the script reads JIRAs containing accented characters, it 
> can't write the report.
> Inside the Docker container, the default locale is "ANSI_X3.4-1968". It must be 
> set to UTF-8 to support special characters.
> Curious why it wasn't a problem before.
> More details: 
> https://stackoverflow.com/questions/43356982/docker-python-set-utf-8-locale
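The failure mode quoted above is easy to reproduce outside the build: an ASCII encoder simply cannot represent U+00DC ('Ü', the character in the traceback), while UTF-8 can. A minimal sketch in Java (illustrative only — this is not the releasedocmaker code; the Dockerfile fix itself is usually a line such as `ENV LANG=C.UTF-8`, which is an assumption here, not quoted from the patch):

```java
import java.nio.CharBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.CharsetEncoder;
import java.nio.charset.StandardCharsets;

public class LocaleDemo {
    public static void main(String[] args) {
        // The ASCII charset rejects U+00DC, just as Python's ascii codec
        // did in the releasedocmaker traceback.
        System.out.println("ASCII can encode U+00DC: "
                + StandardCharsets.US_ASCII.newEncoder().canEncode('\u00DC'));

        // Attempting the encode throws a CharacterCodingException
        // (specifically an UnmappableCharacterException).
        CharsetEncoder ascii = StandardCharsets.US_ASCII.newEncoder();
        try {
            ascii.encode(CharBuffer.wrap("\u00DC"));
        } catch (CharacterCodingException e) {
            System.out.println("ASCII encode failed: "
                    + e.getClass().getSimpleName());
        }

        // UTF-8 represents the same character in two bytes, so a UTF-8
        // locale avoids the failure entirely.
        byte[] utf8 = "\u00DC".getBytes(StandardCharsets.UTF_8);
        System.out.println("UTF-8 byte length: " + utf8.length);
    }
}
```

This is why the report writer crashed only when a JIRA summary happened to contain a non-ASCII character: the output stream inherited the container's ASCII-only locale.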



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17718) Explicitly set locale in the Dockerfile

2022-04-13 Thread Masatake Iwasaki (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-17718:
--
Fix Version/s: 2.10.2

> Explicitly set locale in the Dockerfile
> ---
>
> Key: HADOOP-17718
> URL: https://issues.apache.org/jira/browse/HADOOP-17718
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.1
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0, 2.10.2, 3.2.3
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> When producing the RC bits for 3.3.1, the releasedocmaker step failed.
> {noformat}
> [INFO] --- exec-maven-plugin:1.3.1:exec (releasedocs) @ hadoop-common ---
> Traceback (most recent call last):
>   File 
> "/build/source/patchprocess/apache-yetus-0.13.0/bin/../lib/releasedocmaker/releasedocmaker.py",
>  line 25, in 
> releasedocmaker.main()
>   File 
> "/build/source/patchprocess/apache-yetus-0.13.0/lib/releasedocmaker/releasedocmaker/__init__.py",
>  line 979, in main
> JIRA_BASE_URL)
>   File 
> "/build/source/patchprocess/apache-yetus-0.13.0/lib/releasedocmaker/releasedocmaker/utils.py",
>  line 199, in write_list
> self.write_key_raw(jira.get_project(), line)
>   File 
> "/build/source/patchprocess/apache-yetus-0.13.0/lib/releasedocmaker/releasedocmaker/utils.py",
>  line 170, in write_key_raw
> self.base.write(input_string)
> UnicodeEncodeError: 'ascii' codec can't encode character '\xdc' in position 
> 71: ordinal not in range(128)
> {noformat}
> It turns out that if the script reads JIRAs containing accented characters, it 
> can't write the report.
> Inside the Docker container, the default locale is "ANSI_X3.4-1968". It must be 
> set to UTF-8 to support special characters.
> Curious why it wasn't a problem before.
> More details: 
> https://stackoverflow.com/questions/43356982/docker-python-set-utf-8-locale






[GitHub] [hadoop] tasanuma merged pull request #4138: HDFS-16479. EC: NameNode should not send a reconstruction work when the source datanodes are insufficient

2022-04-13 Thread GitBox


tasanuma merged PR #4138:
URL: https://github.com/apache/hadoop/pull/4138





[GitHub] [hadoop] tasanuma commented on pull request #4138: HDFS-16479. EC: NameNode should not send a reconstruction work when the source datanodes are insufficient

2022-04-13 Thread GitBox


tasanuma commented on PR #4138:
URL: https://github.com/apache/hadoop/pull/4138#issuecomment-1098645791

   @ayushtkn Thanks for your review! I'll merge it.





[GitHub] [hadoop] jojochuang commented on pull request #4158: HDFS-16535. SlotReleaser should reuse the domain socket based on socket paths

2022-04-13 Thread GitBox


jojochuang commented on PR #4158:
URL: https://github.com/apache/hadoop/pull/4158#issuecomment-1098644389

   @leosunli is this something you'd be interested in reviewing?





[jira] [Work logged] (HADOOP-16965) Introduce StreamContext for Abfs Input and Output streams.

2022-04-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16965?focusedWorklogId=756805=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-756805
 ]

ASF GitHub Bot logged work on HADOOP-16965:
---

Author: ASF GitHub Bot
Created on: 14/Apr/22 02:11
Start Date: 14/Apr/22 02:11
Worklog Time Spent: 10m 
  Work Description: mukund-thakur commented on PR #4171:
URL: https://github.com/apache/hadoop/pull/4171#issuecomment-1098640276

   Hey! Could you please add more details on why this is being backported 
to branch-2.10? Thanks




Issue Time Tracking
---

Worklog Id: (was: 756805)
Time Spent: 40m  (was: 0.5h)

> Introduce StreamContext for Abfs Input and Output streams.
> --
>
> Key: HADOOP-16965
> URL: https://issues.apache.org/jira/browse/HADOOP-16965
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The number of configuration options keeps growing in AbfsOutputStream and 
> AbfsInputStream as we keep adding new features. It is time to refactor these 
> configurations into a separate class such as StreamContext and pass it around. 
> This will improve the readability of the code and reduce cherry-pick/backport 
> pain. 
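The refactoring described above can be sketched as a small immutable context object with a builder. The names below (AbfsOutputStreamContext, its fields, and the builder methods) are illustrative assumptions modeled on the issue description, not the actual Hadoop API:

```java
// Hypothetical sketch: stream settings move out of a long constructor
// argument list into one immutable context object built once per stream.
public class AbfsOutputStreamContext {
    private final int writeBufferSize;
    private final boolean enableFlush;
    private final boolean disableOutputStreamFlush;

    private AbfsOutputStreamContext(int writeBufferSize,
                                    boolean enableFlush,
                                    boolean disableOutputStreamFlush) {
        this.writeBufferSize = writeBufferSize;
        this.enableFlush = enableFlush;
        this.disableOutputStreamFlush = disableOutputStreamFlush;
    }

    public static Builder builder() { return new Builder(); }

    public int getWriteBufferSize() { return writeBufferSize; }
    public boolean isEnableFlush() { return enableFlush; }
    public boolean isDisableOutputStreamFlush() { return disableOutputStreamFlush; }

    public static final class Builder {
        private int writeBufferSize;
        private boolean enableFlush;
        private boolean disableOutputStreamFlush;

        public Builder withWriteBufferSize(int size) {
            this.writeBufferSize = size; return this;
        }
        public Builder enableFlush(boolean enable) {
            this.enableFlush = enable; return this;
        }
        public Builder disableOutputStreamFlush(boolean disable) {
            this.disableOutputStreamFlush = disable; return this;
        }
        public AbfsOutputStreamContext build() {
            // A new stream option touches only this class and the stream
            // that reads it, not every constructor call site.
            return new AbfsOutputStreamContext(
                    writeBufferSize, enableFlush, disableOutputStreamFlush);
        }
    }
}
```

The payoff is exactly what the description claims: adding a configuration no longer changes every constructor signature, which is what makes backports cherry-pick cleanly.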






[GitHub] [hadoop] mukund-thakur commented on pull request #4171: HADOOP-16965. Refactor abfs stream configuration. (#1956)

2022-04-13 Thread GitBox


mukund-thakur commented on PR #4171:
URL: https://github.com/apache/hadoop/pull/4171#issuecomment-1098640276

   Hey! Could you please add more details on why this is being backported 
to branch-2.10? Thanks





[GitHub] [hadoop] tomscut commented on pull request #4087: HDFS-16513. [SBN read] Observer Namenode should not trigger the edits rolling of active Namenode

2022-04-13 Thread GitBox


tomscut commented on PR #4087:
URL: https://github.com/apache/hadoop/pull/4087#issuecomment-1098634304

   In summary, at this stage, should we first disable triggerActiveLogRoll only 
on the OBN, or disable it on all SNNs directly? 
   
   @xkrogen @sunchao I look forward to your discussion. Thanks a lot.





[GitHub] [hadoop] tomscut commented on pull request #4087: HDFS-16513. [SBN read] Observer Namenode should not trigger the edits rolling of active Namenode

2022-04-13 Thread GitBox


tomscut commented on PR #4087:
URL: https://github.com/apache/hadoop/pull/4087#issuecomment-1098630345

   Thank you @xkrogen for your detailed explanation. I left out some 
information. You are right.
   
   I thought the ANN automatic roll-edits feature came first, and that we would 
then discuss whether to let the SNN trigger the ANN to roll edits. I got the 
order of the two wrong.
   
   And I thought that "if the active NN is not rolling its logs periodically" 
meant that the configured roll interval is very large, or that the 
EditLogTailerThread exits because of some unknown exception, so the ANN cannot 
roll its logs normally. Letting the SNN trigger the ANN to roll edits would just 
add another layer of assurance. I made a mistake here.





[jira] [Work logged] (HADOOP-16965) Introduce StreamContext for Abfs Input and Output streams.

2022-04-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16965?focusedWorklogId=756791=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-756791
 ]

ASF GitHub Bot logged work on HADOOP-16965:
---

Author: ASF GitHub Bot
Created on: 14/Apr/22 01:08
Start Date: 14/Apr/22 01:08
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4171:
URL: https://github.com/apache/hadoop/pull/4171#issuecomment-1098613705

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  20m 40s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ branch-2.10 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  17m 57s |  |  branch-2.10 passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  branch-2.10 passed with JDK 
Azul Systems, Inc.-1.7.0_262-b10  |
   | +1 :green_heart: |  compile  |   0m 25s |  |  branch-2.10 passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  |  branch-2.10 passed  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  |  branch-2.10 passed  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  branch-2.10 passed with JDK 
Azul Systems, Inc.-1.7.0_262-b10  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  branch-2.10 passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 19s |  |  branch-2.10 passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK 
Azul Systems, Inc.-1.7.0_262-b10  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07  |
   | +1 :green_heart: |  javac  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 13s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Azul Systems, Inc.-1.7.0_262-b10  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07  |
   | +1 :green_heart: |  spotbugs  |   0m 56s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   1m 15s | 
[/patch-unit-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4171/1/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt)
 |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 24s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  50m 56s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.fs.azure.TestNativeAzureFileSystemFileNameCheck |
   |   | hadoop.fs.azure.TestOutOfBandAzureBlobOperations |
   |   | hadoop.fs.azure.TestWasbFsck |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemContractMocked |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemMocked |
   |   | hadoop.fs.azure.TestBlobMetadata |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemConcurrency |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4171/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4171 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 1b16b7376d5e 4.15.0-163-generic #171-Ubuntu SMP Fri Nov 5 
11:55:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-2.10 / dba3321099d720f7b79b45032822d5cdf46d576d |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/zulu-7-amd64:Azul Systems, 
Inc.-1.7.0_262-b10 

[GitHub] [hadoop] hadoop-yetus commented on pull request #4171: HADOOP-16965. Refactor abfs stream configuration. (#1956)

2022-04-13 Thread GitBox


hadoop-yetus commented on PR #4171:
URL: https://github.com/apache/hadoop/pull/4171#issuecomment-1098613705

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  20m 40s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ branch-2.10 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  17m 57s |  |  branch-2.10 passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  branch-2.10 passed with JDK 
Azul Systems, Inc.-1.7.0_262-b10  |
   | +1 :green_heart: |  compile  |   0m 25s |  |  branch-2.10 passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  |  branch-2.10 passed  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  |  branch-2.10 passed  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  branch-2.10 passed with JDK 
Azul Systems, Inc.-1.7.0_262-b10  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  branch-2.10 passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 19s |  |  branch-2.10 passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK 
Azul Systems, Inc.-1.7.0_262-b10  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07  |
   | +1 :green_heart: |  javac  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 13s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Azul Systems, Inc.-1.7.0_262-b10  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07  |
   | +1 :green_heart: |  spotbugs  |   0m 56s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   1m 15s | 
[/patch-unit-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4171/1/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt)
 |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 24s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  50m 56s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.fs.azure.TestNativeAzureFileSystemFileNameCheck |
   |   | hadoop.fs.azure.TestOutOfBandAzureBlobOperations |
   |   | hadoop.fs.azure.TestWasbFsck |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemContractMocked |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemMocked |
   |   | hadoop.fs.azure.TestBlobMetadata |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemOperationsMocked |
   |   | hadoop.fs.azure.TestNativeAzureFileSystemConcurrency |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4171/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4171 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 1b16b7376d5e 4.15.0-163-generic #171-Ubuntu SMP Fri Nov 5 
11:55:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-2.10 / dba3321099d720f7b79b45032822d5cdf46d576d |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/zulu-7-amd64:Azul Systems, 
Inc.-1.7.0_262-b10 /usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~18.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4171/1/testReport/ |
   | Max. process+thread count | 230 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4171/1/console |
   | versions 

[jira] [Work logged] (HADOOP-16965) Introduce StreamContext for Abfs Input and Output streams.

2022-04-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16965?focusedWorklogId=756788=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-756788
 ]

ASF GitHub Bot logged work on HADOOP-16965:
---

Author: ASF GitHub Bot
Created on: 14/Apr/22 00:36
Start Date: 14/Apr/22 00:36
Worklog Time Spent: 10m 
  Work Description: raymondlam12 commented on code in PR #4171:
URL: https://github.com/apache/hadoop/pull/4171#discussion_r849995252


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java:
##
@@ -435,9 +450,7 @@ public OutputStream openFileForWrite(final Path path, final 
boolean overwrite) t
 client,
 AbfsHttpConstants.FORWARD_SLASH + getRelativePath(path),
 offset,
-abfsConfiguration.getWriteBufferSize(),
-abfsConfiguration.isFlushEnabled(),
-abfsConfiguration.isOutputStreamFlushDisabled());
+populateAbfsOutputStreamContext());

Review Comment:
   Fix the indentation 

Issue Time Tracking
---

Worklog Id: (was: 756788)
Time Spent: 20m  (was: 10m)

> Introduce StreamContext for Abfs Input and Output streams.
> --
>
> Key: HADOOP-16965
> URL: https://issues.apache.org/jira/browse/HADOOP-16965
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The number of configuration options keeps growing in AbfsOutputStream and 
> AbfsInputStream as we keep adding new features. It is time to refactor these 
> configurations into a separate class such as StreamContext and pass it around. 
> This will improve the readability of the code and reduce cherry-pick/backport 
> pain. 






[GitHub] [hadoop] raymondlam12 commented on a diff in pull request #4171: HADOOP-16965. Refactor abfs stream configuration. (#1956)

2022-04-13 Thread GitBox


raymondlam12 commented on code in PR #4171:
URL: https://github.com/apache/hadoop/pull/4171#discussion_r849995252


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java:
##
@@ -435,9 +450,7 @@ public OutputStream openFileForWrite(final Path path, final 
boolean overwrite) t
 client,
 AbfsHttpConstants.FORWARD_SLASH + getRelativePath(path),
 offset,
-abfsConfiguration.getWriteBufferSize(),
-abfsConfiguration.isFlushEnabled(),
-abfsConfiguration.isOutputStreamFlushDisabled());
+populateAbfsOutputStreamContext());

Review Comment:
   Fix the indentation -- it seems like it's being changed from the previous 
format here (you're using 8 spaces for parameters on new lines vs 4 spaces 
previously) 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org





[jira] [Work logged] (HADOOP-18088) Replace log4j 1.x with reload4j

2022-04-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18088?focusedWorklogId=756787&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-756787
 ]

ASF GitHub Bot logged work on HADOOP-18088:
---

Author: ASF GitHub Bot
Created on: 14/Apr/22 00:33
Start Date: 14/Apr/22 00:33
Worklog Time Spent: 10m 
  Work Description: iwasakims merged PR #4151:
URL: https://github.com/apache/hadoop/pull/4151




Issue Time Tracking
---

Worklog Id: (was: 756787)
Time Spent: 7.5h  (was: 7h 20m)

> Replace log4j 1.x with reload4j
> ---
>
> Key: HADOOP-18088
> URL: https://issues.apache.org/jira/browse/HADOOP-18088
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.2.4, 3.3.4
>
>  Time Spent: 7.5h
>  Remaining Estimate: 0h
>
> As proposed in the dev mailing list 
> (https://lists.apache.org/thread/fdzkv80mzkf3w74z9120l0k0rc3v7kqk) let's 
> replace log4j 1 with reload4j in the maintenance releases (i.e. 3.3.x, 3.2.x 
> and 2.10.x)






[jira] [Updated] (HADOOP-18088) Replace log4j 1.x with reload4j

2022-04-13 Thread Masatake Iwasaki (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-18088:
--
Fix Version/s: 2.10.2

> Replace log4j 1.x with reload4j
> ---
>
> Key: HADOOP-18088
> URL: https://issues.apache.org/jira/browse/HADOOP-18088
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.10.2, 3.2.4, 3.3.4
>
>  Time Spent: 7.5h
>  Remaining Estimate: 0h
>
> As proposed in the dev mailing list 
> (https://lists.apache.org/thread/fdzkv80mzkf3w74z9120l0k0rc3v7kqk) let's 
> replace log4j 1 with reload4j in the maintenance releases (i.e. 3.3.x, 3.2.x 
> and 2.10.x)






[GitHub] [hadoop] iwasakims merged pull request #4151: HADOOP-18088. Replace log4j 1.x with reload4j.

2022-04-13 Thread GitBox


iwasakims merged PR #4151:
URL: https://github.com/apache/hadoop/pull/4151





[jira] [Work logged] (HADOOP-16104) Wasb tests to downgrade to skip when test a/c is namespace enabled

2022-04-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16104?focusedWorklogId=756786&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-756786
 ]

ASF GitHub Bot logged work on HADOOP-16104:
---

Author: ASF GitHub Bot
Created on: 14/Apr/22 00:29
Start Date: 14/Apr/22 00:29
Worklog Time Spent: 10m 
  Work Description: raymondlam12 commented on PR #4137:
URL: https://github.com/apache/hadoop/pull/4137#issuecomment-1098597501

   +1 on this cherry pick 




Issue Time Tracking
---

Worklog Id: (was: 756786)
Time Spent: 0.5h  (was: 20m)

> Wasb tests to downgrade to skip when test a/c is namespace enabled
> --
>
> Key: HADOOP-16104
> URL: https://issues.apache.org/jira/browse/HADOOP-16104
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Masatake Iwasaki
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.0, 3.2.1
>
> Attachments: HADOOP-16104.001.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When you run the abfs tests with a namespace-enabled account, all the wasb 
> tests fail with "don't yet work with namespace-enabled accounts". This should 
> be downgraded to a test skip, somehow






[GitHub] [hadoop] raymondlam12 commented on pull request #4137: HADOOP-16104. Wasb tests to downgrade to skip when test a/c is namesp…

2022-04-13 Thread GitBox


raymondlam12 commented on PR #4137:
URL: https://github.com/apache/hadoop/pull/4137#issuecomment-1098597501

   +1 on this cherry pick 





[jira] [Updated] (HADOOP-16965) Introduce StreamContext for Abfs Input and Output streams.

2022-04-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-16965:

Labels: pull-request-available  (was: )

> Introduce StreamContext for Abfs Input and Output streams.
> --
>
> Key: HADOOP-16965
> URL: https://issues.apache.org/jira/browse/HADOOP-16965
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The number of configurations keeps growing in AbfsOutputStream and 
> AbfsInputStream as we keep adding new features. It is time to refactor the 
> configurations into a separate class such as StreamContext and pass it around. 
> This will improve the readability of the code and reduce cherry-pick/backport 
> pain. 






[jira] [Work logged] (HADOOP-16965) Introduce StreamContext for Abfs Input and Output streams.

2022-04-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16965?focusedWorklogId=756784&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-756784
 ]

ASF GitHub Bot logged work on HADOOP-16965:
---

Author: ASF GitHub Bot
Created on: 14/Apr/22 00:16
Start Date: 14/Apr/22 00:16
Worklog Time Spent: 10m 
  Work Description: arjun4084346 opened a new pull request, #4171:
URL: https://github.com/apache/hadoop/pull/4171

   Contributed by Mukund Thakur.
   
   (cherry picked from commit 8031c66295b530dcaae9e00d4f656330bc3b3952)
   
   
   
   ### Description of PR
   It is an almost clean cherry pick of commit 
8031c66295b530dcaae9e00d4f656330bc3b3952 
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




Issue Time Tracking
---

Worklog Id: (was: 756784)
Remaining Estimate: 0h
Time Spent: 10m

> Introduce StreamContext for Abfs Input and Output streams.
> --
>
> Key: HADOOP-16965
> URL: https://issues.apache.org/jira/browse/HADOOP-16965
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
> Fix For: 3.4.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The number of configurations keeps growing in AbfsOutputStream and 
> AbfsInputStream as we keep adding new features. It is time to refactor the 
> configurations into a separate class such as StreamContext and pass it around. 
> This will improve the readability of the code and reduce cherry-pick/backport 
> pain. 






[GitHub] [hadoop] arjun4084346 opened a new pull request, #4171: HADOOP-16965. Refactor abfs stream configuration. (#1956)

2022-04-13 Thread GitBox


arjun4084346 opened a new pull request, #4171:
URL: https://github.com/apache/hadoop/pull/4171

   Contributed by Mukund Thakur.
   
   (cherry picked from commit 8031c66295b530dcaae9e00d4f656330bc3b3952)
   
   
   
   ### Description of PR
   It is an almost clean cherry pick of commit 
8031c66295b530dcaae9e00d4f656330bc3b3952 
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   





[GitHub] [hadoop] simbadzina commented on a diff in pull request #4127: HDFS-13522. RBF: Support observer node from Router-Based Federation

2022-04-13 Thread GitBox


simbadzina commented on code in PR #4127:
URL: https://github.com/apache/hadoop/pull/4127#discussion_r849678932


##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/NameNodeProxiesClient.java:
##
@@ -349,6 +349,18 @@ public static ClientProtocol 
createProxyWithAlignmentContext(
   boolean withRetries, AtomicBoolean fallbackToSimpleAuth,
   AlignmentContext alignmentContext)
   throws IOException {
+if (!conf.getBoolean(HdfsClientConfigKeys.DFS_OBSERVER_READ_ENABLE,

Review Comment:
   Hi @goiri I'm going to be away for a few weeks so I can't do the split soon. 
If it's a must have, I can do it when I'm back. Or if anybody has bandwidth 
they can help out.






[jira] [Work logged] (HADOOP-18104) Add configs to configure minSeekForVectorReads and maxReadSizeForVectorReads

2022-04-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18104?focusedWorklogId=756715&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-756715
 ]

ASF GitHub Bot logged work on HADOOP-18104:
---

Author: ASF GitHub Bot
Created on: 13/Apr/22 21:21
Start Date: 13/Apr/22 21:21
Worklog Time Spent: 10m 
  Work Description: mukund-thakur commented on code in PR #3964:
URL: https://github.com/apache/hadoop/pull/3964#discussion_r849909748


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractVectoredRead.java:
##
@@ -19,15 +19,23 @@
 package org.apache.hadoop.fs.contract.s3a;
 
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
 import org.apache.hadoop.fs.FileRange;
 import org.apache.hadoop.fs.FileRangeImpl;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.contract.AbstractContractVectoredReadTest;
 import org.apache.hadoop.fs.contract.AbstractFSContract;
+import org.apache.hadoop.fs.s3a.Constants;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+import org.apache.hadoop.fs.s3a.S3ATestUtils;
 
 import java.util.ArrayList;
 import java.util.List;
 
+import org.junit.Test;

Review Comment:
   Not sure how it got messed up. Sorry





Issue Time Tracking
---

Worklog Id: (was: 756715)
Time Spent: 2h  (was: 1h 50m)

> Add configs to configure minSeekForVectorReads and maxReadSizeForVectorReads
> 
>
> Key: HADOOP-18104
> URL: https://issues.apache.org/jira/browse/HADOOP-18104
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common, fs
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>







[GitHub] [hadoop] mukund-thakur commented on a diff in pull request #3964: HADOOP-18104: S3A: Add configs to configure minSeekForVectorReads and maxReadSizeForVectorReads

2022-04-13 Thread GitBox


mukund-thakur commented on code in PR #3964:
URL: https://github.com/apache/hadoop/pull/3964#discussion_r849909748


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractVectoredRead.java:
##
@@ -19,15 +19,23 @@
 package org.apache.hadoop.fs.contract.s3a;
 
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
 import org.apache.hadoop.fs.FileRange;
 import org.apache.hadoop.fs.FileRangeImpl;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.contract.AbstractContractVectoredReadTest;
 import org.apache.hadoop.fs.contract.AbstractFSContract;
+import org.apache.hadoop.fs.s3a.Constants;
+import org.apache.hadoop.fs.s3a.S3AFileSystem;
+import org.apache.hadoop.fs.s3a.S3ATestUtils;
 
 import java.util.ArrayList;
 import java.util.List;
 
+import org.junit.Test;

Review Comment:
   Not sure how it got messed up. Sorry






[jira] [Work logged] (HADOOP-18104) Add configs to configure minSeekForVectorReads and maxReadSizeForVectorReads

2022-04-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18104?focusedWorklogId=756700&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-756700
 ]

ASF GitHub Bot logged work on HADOOP-18104:
---

Author: ASF GitHub Bot
Created on: 13/Apr/22 21:05
Start Date: 13/Apr/22 21:05
Worklog Time Spent: 10m 
  Work Description: mukund-thakur commented on code in PR #3964:
URL: https://github.com/apache/hadoop/pull/3964#discussion_r849898485


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AReadOpContext.java:
##
@@ -69,17 +75,19 @@
* @param changeDetectionPolicy change detection policy.
* @param readahead readahead for GET operations/skip, etc.
* @param auditSpan active audit
+   * @param vectoredIOContext
*/
   public S3AReadOpContext(
-  final Path path,
-  Invoker invoker,
-  @Nullable FileSystem.Statistics stats,
-  S3AStatisticsContext instrumentation,
-  FileStatus dstFileStatus,
-  S3AInputPolicy inputPolicy,
-  ChangeDetectionPolicy changeDetectionPolicy,
-  final long readahead,
-  final AuditSpan auditSpan) {
+  final Path path,

Review Comment:
   updated not sure how it got changed.





Issue Time Tracking
---

Worklog Id: (was: 756700)
Time Spent: 1h 50m  (was: 1h 40m)

> Add configs to configure minSeekForVectorReads and maxReadSizeForVectorReads
> 
>
> Key: HADOOP-18104
> URL: https://issues.apache.org/jira/browse/HADOOP-18104
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common, fs
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>







[GitHub] [hadoop] mukund-thakur commented on a diff in pull request #3964: HADOOP-18104: S3A: Add configs to configure minSeekForVectorReads and maxReadSizeForVectorReads

2022-04-13 Thread GitBox


mukund-thakur commented on code in PR #3964:
URL: https://github.com/apache/hadoop/pull/3964#discussion_r849898485


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AReadOpContext.java:
##
@@ -69,17 +75,19 @@
* @param changeDetectionPolicy change detection policy.
* @param readahead readahead for GET operations/skip, etc.
* @param auditSpan active audit
+   * @param vectoredIOContext
*/
   public S3AReadOpContext(
-  final Path path,
-  Invoker invoker,
-  @Nullable FileSystem.Statistics stats,
-  S3AStatisticsContext instrumentation,
-  FileStatus dstFileStatus,
-  S3AInputPolicy inputPolicy,
-  ChangeDetectionPolicy changeDetectionPolicy,
-  final long readahead,
-  final AuditSpan auditSpan) {
+  final Path path,

Review Comment:
   updated not sure how it got changed.






[jira] [Work logged] (HADOOP-18104) Add configs to configure minSeekForVectorReads and maxReadSizeForVectorReads

2022-04-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18104?focusedWorklogId=756695&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-756695
 ]

ASF GitHub Bot logged work on HADOOP-18104:
---

Author: ASF GitHub Bot
Created on: 13/Apr/22 20:59
Start Date: 13/Apr/22 20:59
Worklog Time Spent: 10m 
  Work Description: mukund-thakur commented on code in PR #3964:
URL: https://github.com/apache/hadoop/pull/3964#discussion_r849894430


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java:
##
@@ -1066,4 +1066,29 @@ private Constants() {
* Require that all S3 access is made through Access Points.
*/
   public static final String AWS_S3_ACCESSPOINT_REQUIRED = 
"fs.s3a.accesspoint.required";
+
+  /**
+   * What is the smallest reasonable seek that we should group ranges
+   * together during vectored read operation.
+   * Value : {@value}.
+   */
+  public static final String AWS_S3_MIN_SEEK_VECTOR_READS = 
"fs.s3a.min.seek.vectored.read";
+
+  /**
+   * What is the largest size that we should group ranges
+   * together during vectored read?

Review Comment:
   we can say merged.size?
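
For context on what the two settings govern: during a vectored read, nearby ranges are coalesced when the gap between them is under the minimum seek and the combined span stays within the maximum read size. A simplified, dependency-free sketch of that merging rule (not the actual S3A implementation; the thresholds below are placeholders):

```java
import java.util.ArrayList;
import java.util.List;

public class RangeMerger {
    /** A byte range [offset, offset + length). */
    public static final class Range {
        final long offset;
        final long length;
        public Range(long offset, long length) {
            this.offset = offset;
            this.length = length;
        }
        long end() { return offset + length; }
    }

    /**
     * Merge sorted, non-overlapping ranges: two neighbours are combined
     * when the gap between them is smaller than minSeek and the combined
     * span does not exceed maxReadSize.
     */
    public static List<Range> merge(List<Range> sorted, long minSeek, long maxReadSize) {
        List<Range> out = new ArrayList<>();
        Range current = null;
        for (Range r : sorted) {
            if (current == null) {
                current = r;
                continue;
            }
            long gap = r.offset - current.end();
            long mergedLen = r.end() - current.offset;
            if (gap < minSeek && mergedLen <= maxReadSize) {
                current = new Range(current.offset, mergedLen); // coalesce
            } else {
                out.add(current); // gap too large or merge too big: flush
                current = r;
            }
        }
        if (current != null) {
            out.add(current);
        }
        return out;
    }
}
```

Tuning the two keys therefore trades reading some discarded bytes over the wire against issuing fewer GET requests.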





Issue Time Tracking
---

Worklog Id: (was: 756695)
Time Spent: 1h 40m  (was: 1.5h)

> Add configs to configure minSeekForVectorReads and maxReadSizeForVectorReads
> 
>
> Key: HADOOP-18104
> URL: https://issues.apache.org/jira/browse/HADOOP-18104
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common, fs
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>







[GitHub] [hadoop] mukund-thakur commented on a diff in pull request #3964: HADOOP-18104: S3A: Add configs to configure minSeekForVectorReads and maxReadSizeForVectorReads

2022-04-13 Thread GitBox


mukund-thakur commented on code in PR #3964:
URL: https://github.com/apache/hadoop/pull/3964#discussion_r849894430


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java:
##
@@ -1066,4 +1066,29 @@ private Constants() {
* Require that all S3 access is made through Access Points.
*/
   public static final String AWS_S3_ACCESSPOINT_REQUIRED = 
"fs.s3a.accesspoint.required";
+
+  /**
+   * What is the smallest reasonable seek that we should group ranges
+   * together during vectored read operation.
+   * Value : {@value}.
+   */
+  public static final String AWS_S3_MIN_SEEK_VECTOR_READS = 
"fs.s3a.min.seek.vectored.read";
+
+  /**
+   * What is the largest size that we should group ranges
+   * together during vectored read?

Review Comment:
   we can say merged.size?






[jira] [Work logged] (HADOOP-18104) Add configs to configure minSeekForVectorReads and maxReadSizeForVectorReads

2022-04-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18104?focusedWorklogId=756687&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-756687
 ]

ASF GitHub Bot logged work on HADOOP-18104:
---

Author: ASF GitHub Bot
Created on: 13/Apr/22 20:51
Start Date: 13/Apr/22 20:51
Worklog Time Spent: 10m 
  Work Description: mukund-thakur commented on code in PR #3964:
URL: https://github.com/apache/hadoop/pull/3964#discussion_r849889242


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java:
##
@@ -1066,4 +1066,29 @@ private Constants() {
* Require that all S3 access is made through Access Points.
*/
   public static final String AWS_S3_ACCESSPOINT_REQUIRED = 
"fs.s3a.accesspoint.required";
+
+  /**
+   * What is the smallest reasonable seek that we should group ranges
+   * together during vectored read operation.
+   * Value : {@value}.
+   */
+  public static final String AWS_S3_MIN_SEEK_VECTOR_READS = 
"fs.s3a.min.seek.vectored.read";

Review Comment:
   Yeah you are right





Issue Time Tracking
---

Worklog Id: (was: 756687)
Time Spent: 1.5h  (was: 1h 20m)

> Add configs to configure minSeekForVectorReads and maxReadSizeForVectorReads
> 
>
> Key: HADOOP-18104
> URL: https://issues.apache.org/jira/browse/HADOOP-18104
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common, fs
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>







[GitHub] [hadoop] mukund-thakur commented on a diff in pull request #3964: HADOOP-18104: S3A: Add configs to configure minSeekForVectorReads and maxReadSizeForVectorReads

2022-04-13 Thread GitBox


mukund-thakur commented on code in PR #3964:
URL: https://github.com/apache/hadoop/pull/3964#discussion_r849889242


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java:
##
@@ -1066,4 +1066,29 @@ private Constants() {
* Require that all S3 access is made through Access Points.
*/
   public static final String AWS_S3_ACCESSPOINT_REQUIRED = 
"fs.s3a.accesspoint.required";
+
+  /**
+   * What is the smallest reasonable seek that we should group ranges
+   * together during vectored read operation.
+   * Value : {@value}.
+   */
+  public static final String AWS_S3_MIN_SEEK_VECTOR_READS = 
"fs.s3a.min.seek.vectored.read";

Review Comment:
   Yeah you are right






[GitHub] [hadoop] huaxiangsun opened a new pull request, #4170: HDFS-16540 Data locality is lost when DataNode pod restarts in kubern…

2022-04-13 Thread GitBox


huaxiangsun opened a new pull request, #4170:
URL: https://github.com/apache/hadoop/pull/4170

   …etes
   
   
   
   ### Description of PR
When a DataNode with the same UUID is registered under a different IP, 
host2DatanodeMap needs to be updated accordingly.
   
   ### How was this patch tested?
   Tested 3.3.2 with the patch on an EKS cluster: restarted the pod hosting the 
DataNode and HBase region server, then ran a major compaction of the HBase 
region and verified that locality was kept.
   
   There is also a new unit test case added.
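
The bookkeeping problem can be modelled with a pair of maps; a toy sketch of the fix (names are illustrative — the real structure is the host2DatanodeMap maintained on the namenode side, and this is not the actual HDFS code):

```java
import java.util.HashMap;
import java.util.Map;

public class DatanodeRegistry {
    // uuid -> current host, and host -> uuid (the host2DatanodeMap analogue)
    private final Map<String, String> uuidToHost = new HashMap<>();
    private final Map<String, String> hostToUuid = new HashMap<>();

    /**
     * Register a datanode. When the same UUID re-registers from a new host
     * (e.g. a restarted Kubernetes pod that came back with a new IP), the
     * stale host mapping must be dropped; otherwise lookups by host keep
     * resolving the old IP and data locality is lost.
     */
    public void register(String uuid, String host) {
        String oldHost = uuidToHost.put(uuid, host);
        if (oldHost != null && !oldHost.equals(host)) {
            hostToUuid.remove(oldHost); // the fix: purge the stale entry
        }
        hostToUuid.put(host, uuid);
    }

    public String lookupByHost(String host) {
        return hostToUuid.get(host);
    }
}
```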
   
   ### For code changes:
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   





[jira] [Updated] (HADOOP-18193) Support nested mount points in INodeTree

2022-04-13 Thread Lei Yang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Yang updated HADOOP-18193:
--
Description: 
Defining the following client mount table config is not supported in INodeTree and 
will throw FileAlreadyExistsException:

fs.viewfs.mounttable.link./foo/bar=hdfs://nn1/foo/bar

fs.viewfs.mounttable.link./foo=hdfs://nn02/foo

 

INodeTree has 2 methods that need change to support nested mount points.

createLink(..)

resolve(..)

 

ViewFileSystem and ViewFs use INodeTree.resolve(..) to resolve a path to a 
specific mount point. No changes are expected in either class. However, we need 
to support existing use cases and make sure there is no regression.

 

AC:
 # INodeTree.createLink should support creating nested mount points. (INodeTree 
is constructed during fs init.)
 # INodeTree.resolve should support resolving paths based on nested mount points. 
(INodeTree.resolve is used in viewfs apis.)
 # No regression in existing ViewFileSystem and ViewFs apis.
 # Ensure some important apis are not broken with nested mount points. (Rename, 
getContentSummary, listStatus...)

  was:
Defining the following client mount table config is not supported in INodeTree and 
will throw FileAlreadyExistsException:

fs.viewfs.mounttable.link./foo/bar=hdfs://nn1/foo/bar

fs.viewfs.mounttable.link./foo=hdfs://nn02/foo

 

INodeTree has 2 methods that need change to support nested mount points.

createLink(..)

resolve(..)

 

ViewFileSystem and ViewFs use INodeTree.resolve(..) to resolve a path to a 
specific mount point. No changes are expected in either class. 


> Support nested mount points in INodeTree
> 
>
> Key: HADOOP-18193
> URL: https://issues.apache.org/jira/browse/HADOOP-18193
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: viewfs
>Affects Versions: 2.10.0
>Reporter: Lei Yang
>Priority: Major
>
> Defining the following client mount table config is not supported in INodeTree 
> and will throw FileAlreadyExistsException:
> fs.viewfs.mounttable.link./foo/bar=hdfs://nn1/foo/bar
> fs.viewfs.mounttable.link./foo=hdfs://nn02/foo
>  
> INodeTree has 2 methods that need change to support nested mount points.
> createLink(..)
> resolve(..)
>  
> ViewFileSystem and ViewFs use INodeTree.resolve(..) to resolve a path to a 
> specific mount point. No changes are expected in either class. However, we 
> need to support existing use cases and make sure there is no regression.
>  
> AC:
>  # INodeTree.createLink should support creating nested mount 
> points. (INodeTree is constructed during fs init.)
>  # INodeTree.resolve should support resolving paths based on nested mount 
> points. (INodeTree.resolve is used in viewfs apis.)
>  # No regression in existing ViewFileSystem and ViewFs apis.
>  # Ensure some important apis are not broken with nested mount points. 
> (Rename, getContentSummary, listStatus...)
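
A minimal illustration of the resolution behaviour the acceptance criteria ask for: with nested mounts allowed, resolve(..) must return the deepest mount prefix matching the path. This is a standalone sketch under that assumption, not the actual INodeTree code:

```java
import java.util.Map;
import java.util.TreeMap;

public class NestedMountResolver {
    // mount path -> target fs URI, e.g. "/foo" -> "hdfs://nn02/foo"
    private final TreeMap<String, String> mounts = new TreeMap<>();

    public void addMount(String mountPath, String target) {
        // With nested mounts allowed, "/foo" and "/foo/bar" may coexist
        // instead of the second insert failing.
        mounts.put(mountPath, target);
    }

    /** Resolve a path to the deepest matching mount point's target, or null. */
    public String resolve(String path) {
        String best = null;
        for (Map.Entry<String, String> e : mounts.entrySet()) {
            String mount = e.getKey();
            if (path.equals(mount) || path.startsWith(mount + "/")) {
                // entries iterate in sorted order, so a later match is
                // a longer (deeper) prefix of the same path
                best = e.getValue();
            }
        }
        return best;
    }
}
```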






[jira] [Updated] (HADOOP-18193) Support nested mount points in INodeTree

2022-04-13 Thread Lei Yang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Yang updated HADOOP-18193:
--
Description: 
Defining the following client mount table config is not supported in INodeTree and 
will throw a FileAlreadyExistsException:

fs.viewfs.mounttable.link./foo/bar=hdfs://nn1/foo/bar

fs.viewfs.mounttable.link./foo=hdfs://nn02/foo

 

INodeTree has two methods that need changes to support nested mount points:

createLink(..): builds the INodeTree during fs init.

resolve(..): resolves a path within the INodeTree for the viewfs APIs.

 

ViewFileSystem and ViewFs refer to INodeTree.resolve(..) to resolve a path to a 
specific mount point. No changes are expected in either class. However, we need 
to support existing use cases and make sure there is no regression.

 

AC:
 # INodeTree.createLink should support creating nested mount points. (INodeTree 
is constructed during fs init.)
 # INodeTree.resolve should support resolving paths based on nested mount points. 
(INodeTree.resolve is used in the viewfs APIs.)
 # No regression in existing ViewFileSystem and ViewFs APIs.
 # Ensure important APIs are not broken by nested mount points (rename, 
getContentSummary, listStatus, ...).
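The nested-mount resolution that createLink(..)/resolve(..) would have to support amounts to longest-prefix matching over the configured links. A minimal standalone sketch (class and method names are illustrative, not the actual INodeTree API):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of longest-prefix resolution over nested mount points.
// MountResolver/addLink/resolve are illustrative names, not the INodeTree API.
public class MountResolver {
    private final Map<String, String> mounts = new HashMap<>();

    public void addLink(String src, String target) {
        mounts.put(src, target);
    }

    // Resolve a path to the target of the deepest (longest) matching mount point.
    public String resolve(String path) {
        for (String src = path; !src.isEmpty(); ) {
            if (mounts.containsKey(src)) {
                // Rewrite the matched prefix, keep the remainder of the path.
                return mounts.get(src) + path.substring(src.length());
            }
            int slash = src.lastIndexOf('/');
            src = (slash <= 0) ? "" : src.substring(0, slash);
        }
        return null; // no mount point matches
    }

    public static void main(String[] args) {
        MountResolver r = new MountResolver();
        r.addLink("/foo", "hdfs://nn02/foo");
        r.addLink("/foo/bar", "hdfs://nn1/foo/bar");
        System.out.println(r.resolve("/foo/bar/baz")); // hdfs://nn1/foo/bar/baz
        System.out.println(r.resolve("/foo/qux"));     // hdfs://nn02/foo/qux
    }
}
```

With this scheme, /foo/bar wins over /foo for any path under /foo/bar, which is exactly the nested behavior the config above asks for.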

  was:
Defining the following client mount table config is not supported in INodeTree and 
will throw a FileAlreadyExistsException:

fs.viewfs.mounttable.link./foo/bar=hdfs://nn1/foo/bar

fs.viewfs.mounttable.link./foo=hdfs://nn02/foo

 

INodeTree has two methods that need changes to support nested mount points:

createLink(..)

resolve(..)

 

ViewFileSystem and ViewFs refer to INodeTree.resolve(..) to resolve a path to a 
specific mount point. No changes are expected in either class. However, we need 
to support existing use cases and make sure there is no regression.

 

AC:
 # INodeTree.createLink should support creating nested mount points. (INodeTree 
is constructed during fs init.)
 # INodeTree.resolve should support resolving paths based on nested mount points. 
(INodeTree.resolve is used in the viewfs APIs.)
 # No regression in existing ViewFileSystem and ViewFs APIs.
 # Ensure important APIs are not broken by nested mount points (rename, 
getContentSummary, listStatus, ...).


> Support nested mount points in INodeTree
> 
>
> Key: HADOOP-18193
> URL: https://issues.apache.org/jira/browse/HADOOP-18193
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: viewfs
>Affects Versions: 2.10.0
>Reporter: Lei Yang
>Priority: Major
>
> Defining the following client mount table config is not supported in INodeTree 
> and will throw a FileAlreadyExistsException:
> fs.viewfs.mounttable.link./foo/bar=hdfs://nn1/foo/bar
> fs.viewfs.mounttable.link./foo=hdfs://nn02/foo
>  
> INodeTree has two methods that need changes to support nested mount points:
> createLink(..): builds the INodeTree during fs init.
> resolve(..): resolves a path within the INodeTree for the viewfs APIs.
>  
> ViewFileSystem and ViewFs refer to INodeTree.resolve(..) to resolve a path to a 
> specific mount point. No changes are expected in either class. However, we 
> need to support existing use cases and make sure there is no regression.
>  
> AC:
>  # INodeTree.createLink should support creating nested mount 
> points. (INodeTree is constructed during fs init.)
>  # INodeTree.resolve should support resolving paths based on nested mount 
> points. (INodeTree.resolve is used in the viewfs APIs.)
>  # No regression in existing ViewFileSystem and ViewFs APIs.
>  # Ensure important APIs are not broken by nested mount points 
> (rename, getContentSummary, listStatus, ...).






[jira] [Updated] (HADOOP-18193) Support nested mount points in INodeTree

2022-04-13 Thread Lei Yang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Yang updated HADOOP-18193:
--
Description: 
Defining the following client mount table config is not supported in INodeTree and 
will throw a FileAlreadyExistsException:

fs.viewfs.mounttable.link./foo/bar=hdfs://nn1/foo/bar

fs.viewfs.mounttable.link./foo=hdfs://nn02/foo

 

INodeTree has two methods that need changes to support nested mount points:

createLink(..)

resolve(..)

 

ViewFileSystem and ViewFs refer to INodeTree.resolve(..) to resolve a path to a 
specific mount point. No changes are expected in either class. 

  was:
Defining the following client mount table config is not supported in INodeTree and 
will throw a FileAlreadyExistsException:

fs.viewfs.mounttable.link./foo/bar=hdfs://nn1/foo/bar

fs.viewfs.mounttable.link./foo=hdfs://nn02/foo

 


> Support nested mount points in INodeTree
> 
>
> Key: HADOOP-18193
> URL: https://issues.apache.org/jira/browse/HADOOP-18193
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: viewfs
>Affects Versions: 2.10.0
>Reporter: Lei Yang
>Priority: Major
>
> Defining the following client mount table config is not supported in INodeTree 
> and will throw a FileAlreadyExistsException:
> fs.viewfs.mounttable.link./foo/bar=hdfs://nn1/foo/bar
> fs.viewfs.mounttable.link./foo=hdfs://nn02/foo
>  
> INodeTree has two methods that need changes to support nested mount points:
> createLink(..)
> resolve(..)
>  
> ViewFileSystem and ViewFs refer to INodeTree.resolve(..) to resolve a path to a 
> specific mount point. No changes are expected in either class. 






[GitHub] [hadoop] prasad-acit commented on pull request #4162: HDFS-16526. Add metrics for slow DataNode

2022-04-13 Thread GitBox


prasad-acit commented on PR #4162:
URL: https://github.com/apache/hadoop/pull/4162#issuecomment-1098314022

   UT failures are not related to the code changes.
   @hemanthboyina / @Hexiaoqiao can you please review the PR?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop-site] aajisaka commented on a diff in pull request #38: Add details of CVE-2022-26612

2022-04-13 Thread GitBox


aajisaka commented on code in PR #38:
URL: https://github.com/apache/hadoop-site/pull/38#discussion_r849725599


##
src/cve_list.md:
##
@@ -37,6 +37,21 @@ One paragraph summary goes here. Don't need nuts-and-bolts 
detail, just enough f
 - **Issue Announced**:
 -->
 
+## 
[CVE-2022-26612](http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-26612) 
Arbitrary file write during untar on Windows
+
+In Apache Hadoop, the `unTar` function uses the `unTarUsingJava` function on 
Windows and the built-in tar utility on Unix and other OSes. As a result, a 
TAR entry may create a symlink under the expected extraction directory which 
points to an external directory. A subsequent TAR entry may then extract an 
arbitrary file into the external directory using the symlink name. On Unix this 
would be caught by the `targetDirPath` check because of the `getCanonicalPath` 
call; on Windows, however, `getCanonicalPath` does not resolve symbolic links, 
so the check is bypassed. `unpackEntries` during TAR extraction follows 
symbolic links, which allows writing outside the expected base directory on 
Windows.
+
+Users of the affected versions should apply either of the following 
mitigations:
+* Do not run any of the YARN daemons as a user possessing the permissions to 
create symlinks on Windows.
+* Do not use symlinks in the tar file.
+
+- **Versions affected**: Versions below 3.2.3, 3.3.1, 3.3.2
+- **Fixed versions**: 3.2.3, 3.4 onwards

Review Comment:
   Though 3.3.3 is not currently released, I think we can add the 3.3.3 version 
because the information is already public in 
https://issues.apache.org/jira/browse/HADOOP-18198
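The `getCanonicalPath`-based containment check described in the CVE text can be sketched as follows. This is an illustrative standalone check, not Hadoop's actual `unpackEntries` code; the point of the CVE is that on Windows `getCanonicalPath` does not resolve symlinks, so a check of this shape no longer catches escapes there.

```java
import java.io.File;
import java.io.IOException;

// Sketch of a containment check: an archive entry's destination must
// canonicalize to a path under the extraction base directory. Where
// getCanonicalPath resolves symlinks (Unix), a symlink escaping the base
// directory is rejected; the CVE notes this does not hold on Windows.
public class SafeExtractCheck {
    static boolean isUnderBase(File baseDir, String entryName) {
        try {
            String basePath = baseDir.getCanonicalPath() + File.separator;
            return new File(baseDir, entryName).getCanonicalPath().startsWith(basePath);
        } catch (IOException e) {
            return false; // fail closed on canonicalization errors
        }
    }

    public static void main(String[] args) {
        File base = new File("/tmp/extract");
        System.out.println(isUnderBase(base, "ok/file.txt"));   // true
        System.out.println(isUnderBase(base, "../etc/passwd")); // false
    }
}
```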






[GitHub] [hadoop-site] aajisaka commented on a diff in pull request #38: Add details of CVE-2022-26612

2022-04-13 Thread GitBox


aajisaka commented on code in PR #38:
URL: https://github.com/apache/hadoop-site/pull/38#discussion_r849725599


##
src/cve_list.md:
##
@@ -37,6 +37,21 @@ One paragraph summary goes here. Don't need nuts-and-bolts 
detail, just enough f
 - **Issue Announced**:
 -->
 
+## 
[CVE-2022-26612](http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-26612) 
Arbitrary file write during untar on Windows
+
+In Apache Hadoop, the `unTar` function uses the `unTarUsingJava` function on 
Windows and the built-in tar utility on Unix and other OSes. As a result, a 
TAR entry may create a symlink under the expected extraction directory which 
points to an external directory. A subsequent TAR entry may then extract an 
arbitrary file into the external directory using the symlink name. On Unix this 
would be caught by the `targetDirPath` check because of the `getCanonicalPath` 
call; on Windows, however, `getCanonicalPath` does not resolve symbolic links, 
so the check is bypassed. `unpackEntries` during TAR extraction follows 
symbolic links, which allows writing outside the expected base directory on 
Windows.
+
+Users of the affected versions should apply either of the following 
mitigations:
+* Do not run any of the YARN daemons as a user possessing the permissions to 
create symlinks on Windows.
+* Do not use symlinks in the tar file.
+
+- **Versions affected**: Versions below 3.2.3, 3.3.1, 3.3.2
+- **Fixed versions**: 3.2.3, 3.4 onwards

Review Comment:
   Though 3.3.3 is not currently released, I think we can add the 3.3.3 version 
because the information is already public in HADOOP-18198






[GitHub] [hadoop-site] aajisaka commented on a diff in pull request #38: Add details of CVE-2022-26612

2022-04-13 Thread GitBox


aajisaka commented on code in PR #38:
URL: https://github.com/apache/hadoop-site/pull/38#discussion_r849725599


##
src/cve_list.md:
##
@@ -37,6 +37,21 @@ One paragraph summary goes here. Don't need nuts-and-bolts 
detail, just enough f
 - **Issue Announced**:
 -->
 
+## 
[CVE-2022-26612](http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-26612) 
Arbitrary file write during untar on Windows
+
+In Apache Hadoop, the `unTar` function uses the `unTarUsingJava` function on 
Windows and the built-in tar utility on Unix and other OSes. As a result, a 
TAR entry may create a symlink under the expected extraction directory which 
points to an external directory. A subsequent TAR entry may then extract an 
arbitrary file into the external directory using the symlink name. On Unix this 
would be caught by the `targetDirPath` check because of the `getCanonicalPath` 
call; on Windows, however, `getCanonicalPath` does not resolve symbolic links, 
so the check is bypassed. `unpackEntries` during TAR extraction follows 
symbolic links, which allows writing outside the expected base directory on 
Windows.
+
+Users of the affected versions should apply either of the following 
mitigations:
+* Do not run any of the YARN daemons as a user possessing the permissions to 
create symlinks on Windows.
+* Do not use symlinks in the tar file.
+
+- **Versions affected**: Versions below 3.2.3, 3.3.1, 3.3.2
+- **Fixed versions**: 3.2.3, 3.4 onwards

Review Comment:
   Though 3.3.3 is not currently released, I think we can add the 3.3.3 version.






[GitHub] [hadoop-site] aajisaka commented on a diff in pull request #38: Add details of CVE-2022-26612

2022-04-13 Thread GitBox


aajisaka commented on code in PR #38:
URL: https://github.com/apache/hadoop-site/pull/38#discussion_r849724777


##
src/cve_list.md:
##
@@ -37,6 +37,21 @@ One paragraph summary goes here. Don't need nuts-and-bolts 
detail, just enough f
 - **Issue Announced**:
 -->
 
+## 
[CVE-2022-26612](http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-26612) 
Arbitrary file write during untar on Windows
+
+In Apache Hadoop, the `unTar` function uses the `unTarUsingJava` function on 
Windows and the built-in tar utility on Unix and other OSes. As a result, a 
TAR entry may create a symlink under the expected extraction directory which 
points to an external directory. A subsequent TAR entry may then extract an 
arbitrary file into the external directory using the symlink name. On Unix this 
would be caught by the `targetDirPath` check because of the `getCanonicalPath` 
call; on Windows, however, `getCanonicalPath` does not resolve symbolic links, 
so the check is bypassed. `unpackEntries` during TAR extraction follows 
symbolic links, which allows writing outside the expected base directory on 
Windows.
+
+Users of the affected versions should apply either of the following 
mitigations:
+* Do not run any of the YARN daemons as a user possessing the permissions to 
create symlinks on Windows.
+* Do not use symlinks in the tar file.
+
+- **Versions affected**: Versions below 3.2.3, 3.3.1, 3.3.2

Review Comment:
   3.3.1 looks redundant. Can be removed.






[GitHub] [hadoop-site] GauthamBanasandra commented on pull request #38: Add details of CVE-2022-26612

2022-04-13 Thread GitBox


GauthamBanasandra commented on PR #38:
URL: https://github.com/apache/hadoop-site/pull/38#issuecomment-1098294006

   @aajisaka could you please review this PR?





[GitHub] [hadoop-site] GauthamBanasandra opened a new pull request, #38: Add details of CVE-2022-26612

2022-04-13 Thread GitBox


GauthamBanasandra opened a new pull request, #38:
URL: https://github.com/apache/hadoop-site/pull/38

   * Added the details of
 CVE-2022-26612 to
 cve_list page.





[jira] [Work logged] (HADOOP-15983) Remove the usage of jersey-json to remove jackson 1.x dependency.

2022-04-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15983?focusedWorklogId=756550&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-756550
 ]

ASF GitHub Bot logged work on HADOOP-15983:
---

Author: ASF GitHub Bot
Created on: 13/Apr/22 17:05
Start Date: 13/Apr/22 17:05
Worklog Time Spent: 10m 
  Work Description: pjfanning commented on PR #3988:
URL: https://github.com/apache/hadoop/pull/3988#issuecomment-1098285849

   @steveloughran one option would be for me to change the package name in my 
variant of jersey-json so that other projects that use hadoop and jersey 1 
wouldn't be affected. From looking around a bit, many other projects that use 
hadoop don't use jersey themselves or use jersey 2 in some cases.
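Package renaming of that kind is typically done with a relocation step at build time; a minimal maven-shade-plugin sketch (the chosen `shadedPattern` is illustrative, not the actual package used by the jersey-json variant):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <relocations>
      <!-- Illustrative: move jersey-json's classes to a private package
           so they cannot clash with a consumer's own Jersey 1 classes. -->
      <relocation>
        <pattern>com.sun.jersey.api.json</pattern>
        <shadedPattern>shaded.com.sun.jersey.api.json</shadedPattern>
      </relocation>
    </relocations>
  </configuration>
</plugin>
```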




Issue Time Tracking
---

Worklog Id: (was: 756550)
Time Spent: 3h 20m  (was: 3h 10m)

> Remove the usage of jersey-json to remove jackson 1.x dependency.
> -
>
> Key: HADOOP-15983
> URL: https://issues.apache.org/jira/browse/HADOOP-15983
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>







[GitHub] [hadoop] pjfanning commented on pull request #3988: [HADOOP-15983] use jersey-json that is built to use jackson2

2022-04-13 Thread GitBox


pjfanning commented on PR #3988:
URL: https://github.com/apache/hadoop/pull/3988#issuecomment-1098285849

   @steveloughran one option would be for me to change the package name in my 
variant of jersey-json so that other projects that use hadoop and jersey 1 
wouldn't be affected. From looking around a bit, many other projects that use 
hadoop don't use jersey themselves or use jersey 2 in some cases.





[GitHub] [hadoop] simbadzina commented on pull request #4127: HDFS-13522. RBF: Support observer node from Router-Based Federation

2022-04-13 Thread GitBox


simbadzina commented on PR #4127:
URL: https://github.com/apache/hadoop/pull/4127#issuecomment-1098268812

   Hi @fengnanli, could you please take a look and add folks from your team?





[jira] [Work logged] (HADOOP-18201) Remove base and bucket overrides for endpoint in ITestS3ARequesterPays.java

2022-04-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18201?focusedWorklogId=756520&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-756520
 ]

ASF GitHub Bot logged work on HADOOP-18201:
---

Author: ASF GitHub Bot
Created on: 13/Apr/22 16:43
Start Date: 13/Apr/22 16:43
Worklog Time Spent: 10m 
  Work Description: dannycjones commented on PR #4169:
URL: https://github.com/apache/hadoop/pull/4169#issuecomment-1098267255

   Tested against `s3.eu-west-1.amazonaws.com`, only 
`ITestMarkerTool.testRunLimitedLandsatAudit` fails as expected (see 
HADOOP-18168).




Issue Time Tracking
---

Worklog Id: (was: 756520)
Time Spent: 0.5h  (was: 20m)

> Remove base and bucket overrides for endpoint in ITestS3ARequesterPays.java
> ---
>
> Key: HADOOP-18201
> URL: https://issues.apache.org/jira/browse/HADOOP-18201
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.3
>Reporter: Mehakmeet Singh
>Assignee: Daniel Carl Jones
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> If a user has set the endpoint for their bucket in a test environment, we 
> should ignore that in a test that won't use the bucket that the endpoint is 
> set for. In this case, we are using "s3a://usgs-landsat/" which is in the 
> region us-west-2, and would fail if the user has explicitly set the endpoint 
> to something else.
> Example (I have set the endpoint to ap-south-1):
> {code:java}
> [ERROR] 
> testRequesterPaysDisabledFails(org.apache.hadoop.fs.s3a.ITestS3ARequesterPays)
>   Time elapsed: 9.323 s  <<< ERROR!
> org.apache.hadoop.fs.s3a.AWSRedirectException: getFileStatus on 
> s3a://usgs-landsat/collection02/catalog.json: 
> com.amazonaws.services.s3.model.AmazonS3Exception: The bucket is in this 
> region: us-west-2. Please use this region to retry the request (Service: 
> Amazon S3; Status Code: 301; Error Code: 301 Moved Permanently; Request ID: 
> Z09V8PMEEN5PHDRZ; S3 Extended Request ID: 
> B7KDQntCuVmLJAyXvuY4UNXjdUrgn3xd26n8u7ThueNNxvKas6g3RsXo7oxBcvHrpcous2L+Lbk=; 
> Proxy: null), S3 Extended Request ID: 
> B7KDQntCuVmLJAyXvuY4UNXjdUrgn3xd26n8u7ThueNNxvKas6g3RsXo7oxBcvHrpcous2L+Lbk=:301
>  Moved Permanently: The bucket is in this region: us-west-2. Please use this 
> region to retry the request (Service: Amazon S3; Status Code: 301; Error 
> Code: 301 Moved Permanently; Request ID: Z09V8PMEEN5PHDRZ; S3 Extended 
> Request ID: 
> B7KDQntCuVmLJAyXvuY4UNXjdUrgn3xd26n8u7ThueNNxvKas6g3RsXo7oxBcvHrpcous2L+Lbk=; 
> Proxy: null)
>  at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:233)
>  at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:171)
>  at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:3440)
>  at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:3346)
>  at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.extractOrFetchSimpleFileStatus(S3AFileSystem.java:4890)
>  at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$executeOpen$6(S3AFileSystem.java:1437)
>  at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:543)
>  at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:524)
>  at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:445)
>  at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.executeOpen(S3AFileSystem.java:1435)
>  at org.apache.hadoop.fs.s3a.S3AFileSystem.open(S3AFileSystem.java:1409)
>   
> <..>{code}
> CC: [~ste...@apache.org]  [~mthakur]  [~dannycjones]






[GitHub] [hadoop] dannycjones commented on pull request #4169: HADOOP-18201. Remove endpoint config overrides for ITestS3ARequesterPays

2022-04-13 Thread GitBox


dannycjones commented on PR #4169:
URL: https://github.com/apache/hadoop/pull/4169#issuecomment-1098267255

   Tested against `s3.eu-west-1.amazonaws.com`, only 
`ITestMarkerTool.testRunLimitedLandsatAudit` fails as expected (see 
HADOOP-18168).





[GitHub] [hadoop] simbadzina commented on a diff in pull request #4127: HDFS-13522. RBF: Support observer node from Router-Based Federation

2022-04-13 Thread GitBox


simbadzina commented on code in PR #4127:
URL: https://github.com/apache/hadoop/pull/4127#discussion_r849678932


##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/NameNodeProxiesClient.java:
##
@@ -349,6 +349,18 @@ public static ClientProtocol 
createProxyWithAlignmentContext(
   boolean withRetries, AtomicBoolean fallbackToSimpleAuth,
   AlignmentContext alignmentContext)
   throws IOException {
+if (!conf.getBoolean(HdfsClientConfigKeys.DFS_OBSERVER_READ_ENABLE,

Review Comment:
   Hi @goiri I'm going to be away for a few weeks to do the split. If it's a 
must have, I can do it when I'm back. Or if anybody has bandwidth they can help 
out.






[jira] [Work logged] (HADOOP-18201) Remove base and bucket overrides for endpoint in ITestS3ARequesterPays.java

2022-04-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18201?focusedWorklogId=756510&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-756510
 ]

ASF GitHub Bot logged work on HADOOP-18201:
---

Author: ASF GitHub Bot
Created on: 13/Apr/22 16:25
Start Date: 13/Apr/22 16:25
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #4169:
URL: https://github.com/apache/hadoop/pull/4169#issuecomment-1098250181

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  41m 43s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 22s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 50s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  the patch passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 36s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 22s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 102m 38s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4169/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4169 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux a56d57cfe46f 4.15.0-163-generic #171-Ubuntu SMP Fri Nov 5 
11:55:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ebc0d27ca0104190e9c73805f5fc55b888a73cec |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4169/1/testReport/ |
   | Max. process+thread count | 596 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4169/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was 

[GitHub] [hadoop] hadoop-yetus commented on pull request #4169: HADOOP-18201. Remove endpoint config overrides for ITestS3ARequesterPays

2022-04-13 Thread GitBox


hadoop-yetus commented on PR #4169:
URL: https://github.com/apache/hadoop/pull/4169#issuecomment-1098250181

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  41m 43s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 22s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 50s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  the patch passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 36s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 22s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 102m 38s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4169/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4169 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux a56d57cfe46f 4.15.0-163-generic #171-Ubuntu SMP Fri Nov 5 
11:55:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ebc0d27ca0104190e9c73805f5fc55b888a73cec |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4169/1/testReport/ |
   | Max. process+thread count | 596 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4169/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[GitHub] [hadoop] xkrogen commented on pull request #4087: HDFS-16513. [SBN read] Observer Namenode should not trigger the edits rolling of active Namenode

2022-04-13 Thread GitBox


xkrogen commented on PR #4087:
URL: https://github.com/apache/hadoop/pull/4087#issuecomment-1098241563

   > The pendingDatanodeMessage issue mentioned here strikes me as a bit risky. 
 ...
   
   I'm not following. The issue described in HDFS-2737 says that "if the 
active NN is not rolling its logs periodically ... many datanode messages 
\[will\] be queued up in the PendingDatanodeMessage structure". Certainly it is 
bad if we don't have a way to ensure that the logs are rolled regularly. But 
HDFS-14378 just proposes making the ANN roll its own edit logs, instead of 
relying on the SbNN to roll them. I don't see the risk -- we are still ensuring 
that the logs are rolled periodically, just triggered by the ANN itself instead 
of the SbNN.
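The distinction being argued can be reduced to a toy sketch: under the HDFS-14378 proposal, the active NameNode itself rolls its edit log once enough transactions accumulate, rather than waiting for the standby to trigger the roll. All class and method names below are illustrative, not the real NameNode API; the real NameNode performs this check from a background monitor thread rather than inline.

```java
import java.util.concurrent.atomic.AtomicLong;

/**
 * Illustrative sketch (hypothetical names, not the NameNode API): the
 * "active NN" rolls its own edit log whenever the number of transactions
 * since the last roll crosses a threshold, so rolling no longer depends
 * on the standby calling in.
 */
public class SelfRollSketch {
    static final long ROLL_THRESHOLD = 3;
    final AtomicLong txnsSinceRoll = new AtomicLong();
    int rolls = 0;

    void logEdit() {
        // After each edit, check whether a roll is due. The real NN does
        // this periodically from a monitor thread, not on every edit.
        if (txnsSinceRoll.incrementAndGet() >= ROLL_THRESHOLD) {
            rollEditLog();
        }
    }

    void rollEditLog() {
        rolls++;
        txnsSinceRoll.set(0);
    }

    public static void main(String[] args) {
        SelfRollSketch nn = new SelfRollSketch();
        for (int i = 0; i < 10; i++) {
            nn.logEdit();
        }
        // Threshold 3 over 10 edits -> rolls after edits 3, 6 and 9.
        System.out.println(nn.rolls);
    }
}
```

Either trigger (ANN-driven or SbNN-driven) bounds how many messages can pile up in PendingDatanodeMessages; the sketch only moves who initiates the roll.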


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-16202) Enhance openFile() for better read performance against object stores

2022-04-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16202?focusedWorklogId=756479&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-756479
 ]

ASF GitHub Bot logged work on HADOOP-16202:
---

Author: ASF GitHub Bot
Created on: 13/Apr/22 15:51
Start Date: 13/Apr/22 15:51
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on PR #2584:
URL: https://github.com/apache/hadoop/pull/2584#issuecomment-1098216003

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 57s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 19 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 52s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  27m 46s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  24m 52s |  |  trunk passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  21m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   4m  5s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   7m 44s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   6m  8s |  |  trunk passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   6m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |  12m 12s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m  0s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  24m 22s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m 53s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | -1 :x: |  javac  |  24m 14s | 
[/results-compile-javac-root-jdkUbuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2584/22/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04.txt)
 |  root-jdkUbuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 generated 1 new + 1810 unchanged - 0 
fixed = 1811 total (was 1810)  |
   | +1 :green_heart: |  compile  |  21m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | -1 :x: |  javac  |  21m 33s | 
[/results-compile-javac-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2584/22/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.txt)
 |  root-jdkPrivateBuild-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 generated 1 new + 1684 
unchanged - 0 fixed = 1685 total (was 1684)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 59s |  |  root: The patch generated 
0 new + 847 unchanged - 2 fixed = 847 total (was 849)  |
   | +1 :green_heart: |  mvnsite  |   7m 39s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   6m  1s |  |  the patch passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  |  hadoop-common in the patch 
passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  hadoop-yarn-common in the 
patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  hadoop-mapreduce-client-core 
in the patch passed with JDK Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  hadoop-mapreduce-client-app 
in the patch passed with JDK Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07.  |

[GitHub] [hadoop] hadoop-yetus commented on pull request #2584: HADOOP-16202. Enhance openFile() for better read performance against object stores

2022-04-13 Thread GitBox


hadoop-yetus commented on PR #2584:
URL: https://github.com/apache/hadoop/pull/2584#issuecomment-1098216003

   :broken_heart: **-1 overall**
   
   (Same Yetus report as in the HADOOP-16202 worklog entry above.)
[GitHub] [hadoop] hadoop-yetus commented on pull request #4077: HDFS-16509. Fix decommission UnsupportedOperationException

2022-04-13 Thread GitBox


hadoop-yetus commented on PR #4077:
URL: https://github.com/apache/hadoop/pull/4077#issuecomment-1098174309

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 44s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m 41s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 28s |  |  trunk passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m  5s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 30s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  |  trunk passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 30s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m  9s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  |  the patch passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 17s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 30s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 228m 12s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 50s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 334m 37s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4077/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4077 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 921b06966301 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b516ffebac3baa6a41c75bb215a154aeb22dbdc3 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4077/6/testReport/ |
   | Max. process+thread count | 3312 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4077/6/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #4166: HDFS-14478: Add libhdfs APIs for openFile

2022-04-13 Thread GitBox


steveloughran commented on PR #4166:
URL: https://github.com/apache/hadoop/pull/4166#issuecomment-1098157004

   Failures on branch-3.3 are the same:
[exec] The following tests FAILED:
[exec]  14 - memcheck_rpc_engine (Failed)
[exec]  34 - memcheck_hdfs_config_connect_bugs (Failed)
[exec]  38 - 
memcheck_libhdfs_mini_stress_valgrind_hdfspp_test_static (Failed)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-18201) Remove base and bucket overrides for endpoint in ITestS3ARequesterPays.java

2022-04-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18201?focusedWorklogId=756440&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-756440
 ]

ASF GitHub Bot logged work on HADOOP-18201:
---

Author: ASF GitHub Bot
Created on: 13/Apr/22 14:41
Start Date: 13/Apr/22 14:41
Worklog Time Spent: 10m 
  Work Description: dannycjones opened a new pull request, #4169:
URL: https://github.com/apache/hadoop/pull/4169

   ### Description of PR
   
   Requester pays was added in #3962. The new tests remove overrides for 
requester pays enablement but do not account for developers changing the S3 
endpoint.
   
   To address this, we remove the override for endpoint.
   
   Addresses HADOOP-18201.
   
   ### How was this patch tested?
   
   Patch will be tested against both `eu-west-1` and `af-south-1`.
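The fix pattern described here — clearing both the base key and its per-bucket override before a test so a developer's configured S3 endpoint cannot redirect requests for the fixed us-west-2 test bucket — can be sketched as a self-contained toy. The key names follow S3A conventions, but the helper and class names below are hypothetical stand-ins, not the actual hadoop-aws test code:

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Toy sketch (hypothetical names) of removing a config key at both
 * levels where S3A allows it to be set: the base key and the
 * per-bucket override fs.s3a.bucket.<bucket>.<suffix>.
 */
public class EndpointOverrideSketch {
    static final String ENDPOINT = "fs.s3a.endpoint";

    /** Remove the base key and its bucket-specific override. */
    static void removeBaseAndBucketOverride(Map<String, String> conf,
                                            String bucket, String key) {
        conf.remove(key);                                            // base setting
        String suffix = key.substring("fs.s3a.".length());
        conf.remove("fs.s3a.bucket." + bucket + "." + suffix);       // per-bucket override
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        // A developer's environment pointing at a different region:
        conf.put(ENDPOINT, "s3.ap-south-1.amazonaws.com");
        conf.put("fs.s3a.bucket.usgs-landsat.endpoint", "s3.ap-south-1.amazonaws.com");

        removeBaseAndBucketOverride(conf, "usgs-landsat", ENDPOINT);

        System.out.println(conf.containsKey(ENDPOINT));
        System.out.println(conf.containsKey("fs.s3a.bucket.usgs-landsat.endpoint"));
    }
}
```

With both entries cleared, the client falls back to its default endpoint resolution, which is what the requester-pays test against the us-west-2 bucket needs.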
   
   




Issue Time Tracking
---

Worklog Id: (was: 756440)
Remaining Estimate: 0h
Time Spent: 10m

> Remove base and bucket overrides for endpoint in ITestS3ARequesterPays.java
> ---
>
> Key: HADOOP-18201
> URL: https://issues.apache.org/jira/browse/HADOOP-18201
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.3
>Reporter: Mehakmeet Singh
>Assignee: Daniel Carl Jones
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> If a user has set the endpoint for their bucket in a test environment, we 
> should ignore that in a test that won't use the bucket that the endpoint is 
> set for. In this case, we are using "s3a://usgs-landsat/" which is in the 
> region us-west-2, and would fail if the user has explicitly set the endpoint 
> to something else.
> Example (I have set the endpoint to ap-south-1):
> {code:java}
> [ERROR] 
> testRequesterPaysDisabledFails(org.apache.hadoop.fs.s3a.ITestS3ARequesterPays)
>   Time elapsed: 9.323 s  <<< ERROR!
> org.apache.hadoop.fs.s3a.AWSRedirectException: getFileStatus on 
> s3a://usgs-landsat/collection02/catalog.json: 
> com.amazonaws.services.s3.model.AmazonS3Exception: The bucket is in this 
> region: us-west-2. Please use this region to retry the request (Service: 
> Amazon S3; Status Code: 301; Error Code: 301 Moved Permanently; Request ID: 
> Z09V8PMEEN5PHDRZ; S3 Extended Request ID: 
> B7KDQntCuVmLJAyXvuY4UNXjdUrgn3xd26n8u7ThueNNxvKas6g3RsXo7oxBcvHrpcous2L+Lbk=; 
> Proxy: null), S3 Extended Request ID: 
> B7KDQntCuVmLJAyXvuY4UNXjdUrgn3xd26n8u7ThueNNxvKas6g3RsXo7oxBcvHrpcous2L+Lbk=:301
>  Moved Permanently: The bucket is in this region: us-west-2. Please use this 
> region to retry the request (Service: Amazon S3; Status Code: 301; Error 
> Code: 301 Moved Permanently; Request ID: Z09V8PMEEN5PHDRZ; S3 Extended 
> Request ID: 
> B7KDQntCuVmLJAyXvuY4UNXjdUrgn3xd26n8u7ThueNNxvKas6g3RsXo7oxBcvHrpcous2L+Lbk=; 
> Proxy: null)
>  at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:233)
>  at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:171)
>  at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:3440)
>  at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:3346)
>  at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.extractOrFetchSimpleFileStatus(S3AFileSystem.java:4890)
>  at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$executeOpen$6(S3AFileSystem.java:1437)
>  at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:543)
>  at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:524)
>  at 
> org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:445)
>  at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.executeOpen(S3AFileSystem.java:1435)
>  at org.apache.hadoop.fs.s3a.S3AFileSystem.open(S3AFileSystem.java:1409)
>   
> <..>{code}
> CC: [~ste...@apache.org]  [~mthakur]  [~dannycjones]



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18201) Remove base and bucket overrides for endpoint in ITestS3ARequesterPays.java

2022-04-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-18201:

Labels: pull-request-available  (was: )

> Remove base and bucket overrides for endpoint in ITestS3ARequesterPays.java
> ---
>
> Key: HADOOP-18201
> URL: https://issues.apache.org/jira/browse/HADOOP-18201
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.3
>Reporter: Mehakmeet Singh
>Assignee: Daniel Carl Jones
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> (Issue description quoted above.)



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] dannycjones opened a new pull request, #4169: HADOOP-18201. Remove endpoint config overrides for ITestS3ARequesterPays

2022-04-13 Thread GitBox


dannycjones opened a new pull request, #4169:
URL: https://github.com/apache/hadoop/pull/4169

   (Same PR description as in the HADOOP-18201 worklog entry above.)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18201) Remove base and bucket overrides for endpoint in ITestS3ARequesterPays.java

2022-04-13 Thread Daniel Carl Jones (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17521714#comment-17521714
 ] 

Daniel Carl Jones commented on HADOOP-18201:


I'm assuming for now that endpoint is the only configuration we're likely 
to run into here. Other configs, like CSE, are usually set per bucket.

> Remove base and bucket overrides for endpoint in ITestS3ARequesterPays.java
> ---
>
> Key: HADOOP-18201
> URL: https://issues.apache.org/jira/browse/HADOOP-18201
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.3
>Reporter: Mehakmeet Singh
>Assignee: Daniel Carl Jones
>Priority: Major
>
> (Issue description quoted above.)
> CC: [~ste...@apache.org]  [~mthakur]  [~dannycjones]
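The fix boils down to clearing both the base option and its per-bucket override before creating the test filesystem, so the test falls back to the default endpoint. A minimal, self-contained sketch of that behaviour, using a plain map in place of Hadoop's Configuration (the real test code uses S3ATestUtils.removeBaseAndBucketOverrides; the class and method names below are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class OverrideRemovalSketch {

  /**
   * Remove a base option and its per-bucket override so the test
   * falls back to the default (unset) value.
   * Per-bucket overrides follow the fs.s3a.bucket.BUCKET.SUFFIX pattern,
   * where SUFFIX is the base key with its "fs.s3a." prefix stripped.
   */
  static void removeBaseAndBucketOverride(Map<String, String> conf,
      String bucket, String key) {
    conf.remove(key);                                // base setting
    if (key.startsWith("fs.s3a.")) {
      String suffix = key.substring("fs.s3a.".length());
      conf.remove("fs.s3a.bucket." + bucket + "." + suffix);  // bucket override
    }
  }

  public static void main(String[] args) {
    Map<String, String> conf = new HashMap<>();
    conf.put("fs.s3a.endpoint", "s3.ap-south-1.amazonaws.com");
    conf.put("fs.s3a.bucket.usgs-landsat.endpoint", "s3.ap-south-1.amazonaws.com");
    removeBaseAndBucketOverride(conf, "usgs-landsat", "fs.s3a.endpoint");
    System.out.println("endpoint overrides removed: " + conf.isEmpty());
  }
}
```

With both keys cleared, the client resolves the bucket's region itself, and the 301 redirect shown above no longer occurs.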



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] brumi1024 closed pull request #4150: YARN-11107:When NodeLabel is enabled for a YARN cluster, AM blacklist…

2022-04-13 Thread GitBox


brumi1024 closed pull request #4150: YARN-11107:When NodeLabel is enabled for a 
YARN cluster, AM blacklist…
URL: https://github.com/apache/hadoop/pull/4150





[GitHub] [hadoop] brumi1024 commented on pull request #4150: YARN-11107:When NodeLabel is enabled for a YARN cluster, AM blacklist…

2022-04-13 Thread GitBox


brumi1024 commented on PR #4150:
URL: https://github.com/apache/hadoop/pull/4150#issuecomment-1098089627

   Merged to trunk.





[GitHub] [hadoop] Hexiaoqiao commented on pull request #4141: HDFS-16534. Split FsDatasetImpl from block pool locks to volume grain locks.

2022-04-13 Thread GitBox


Hexiaoqiao commented on PR #4141:
URL: https://github.com/apache/hadoop/pull/4141#issuecomment-1098081845

   LGTM, +1 from my side. I will commit this to trunk if there is no further 
feedback within five working days.
   
   > 1. Some methods are not a good fit for the volume lock: splitting them 
would mean acquiring and releasing a sequence of locks (acquire lock1, lock2, 
lock3; release lock3, lock2, lock1), so acquiring just the block pool lock is 
enough.
   > 2. Some methods, like contains(), do not need the volume lock. They 
already acquire the block pool read lock, so taking both the block pool read 
lock and the volume lock is unnecessary.
   
   To clarify this explanation: the improvement changes only some methods from 
the block-pool-level lock to the volume-level lock, not all of them, because:
   A. No improvement. For methods such as add/remove volume, the level-0 lock 
(block pool level) is enough and safe for the logic; several other methods are 
similar.
   or
   B. Not necessary. For some query requests, the block-pool-level read lock 
already provides the safety guarantee, so it is not necessary to acquire the 
volume-level read lock again.
   
   Thanks @MingXiangLi for your work.
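The split described above can be sketched with plain java.util.concurrent primitives: one coarse block-pool lock plus one lock per volume. This is illustrative only; the actual FsDatasetImpl locking code and method names differ.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

/** Sketch of two-level locking: a block-pool lock over per-volume locks. */
public class VolumeLockSketch {
  private final ReentrantReadWriteLock poolLock = new ReentrantReadWriteLock();
  private final Map<String, ReentrantReadWriteLock> volumeLocks =
      new ConcurrentHashMap<>();

  private ReentrantReadWriteLock volumeLock(String volume) {
    return volumeLocks.computeIfAbsent(volume, v -> new ReentrantReadWriteLock());
  }

  /** Structural change (e.g. add/remove volume): pool write lock only. */
  public void addVolume(String volume) {
    poolLock.writeLock().lock();
    try {
      volumeLock(volume); // register the per-volume lock
    } finally {
      poolLock.writeLock().unlock();
    }
  }

  /** Per-replica mutation: pool read lock, then the volume write lock. */
  public void mutateOnVolume(String volume, Runnable op) {
    poolLock.readLock().lock();
    try {
      ReentrantReadWriteLock vl = volumeLock(volume);
      vl.writeLock().lock();
      try {
        op.run();
      } finally {
        vl.writeLock().unlock();
      }
    } finally {
      poolLock.readLock().unlock();
    }
  }

  /** Pure query such as contains(): the pool read lock alone suffices. */
  public boolean query(java.util.function.BooleanSupplier check) {
    poolLock.readLock().lock();
    try {
      return check.getAsBoolean();
    } finally {
      poolLock.readLock().unlock();
    }
  }
}
```

Mutations on different volumes can proceed in parallel under the shared pool read lock, while structural changes still exclude everything via the pool write lock.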





[jira] [Commented] (HADOOP-18194) Public dataset class for S3A integration tests

2022-04-13 Thread Daniel Carl Jones (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17521696#comment-17521696
 ] 

Daniel Carl Jones commented on HADOOP-18194:


Another edge case that this class could handle: the endpoint is overridden and 
the requester-pays tests currently fail. Perhaps this class should drop 
base/bucket overrides?
{quote}If a user has set the endpoint for their bucket in a test environment, 
we should ignore that in a test that won't use the bucket that the endpoint is 
set for. In this case, we are using "s3a://usgs-landsat/" which is in the 
region us-west-2, and would fail if the user has explicitly set the endpoint to 
something else.
{quote}

> Public dataset class for S3A integration tests
> --
>
> Key: HADOOP-18194
> URL: https://issues.apache.org/jira/browse/HADOOP-18194
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Daniel Carl Jones
>Assignee: Daniel Carl Jones
>Priority: Minor
>
> Introduction of PublicDatasetTestUtils as proposed previously in some of the 
> ideas for refactoring S3A incrementally. Some of its responsibilities:
> - Source of truth for getting URI based on public data set.
> - Maybe keep the methods specific to their purpose where possible? We might 
> need {{s3a://landsat-pds/scene_list.gz}} specifically for some tests, but 
> other tests may just need a bucket with a bunch of keys.
> - Introduce test assumptions about the S3 endpoint or AWS partition. If we’re 
> not looking at 'aws' partition, skip test.
> How might we make this generic for non-{{aws}} partition S3 or 
> S3API-compatible object stores?
> - Ideally allow for future extension to provide some easy ways to override 
> the bucket if tester has an alternative source? I see 
> "fs.s3a.scale.test.csvfile" already has a little bit of this.
> - We could have something which takes a path to a hadoop XML config file; 
> we'd have a default resource but the maven build could be pointed at another 
> via a command line property. this file could contain all the settings for a 
> test against a partition or internal s3-compatible store
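A rough, hypothetical sketch of what such a utility could look like, with a plain map standing in for a Hadoop Configuration. Only the "fs.s3a.scale.test.csvfile" key and the landsat-pds URI come from the ticket; the class name, method names, and the partition check are assumptions for illustration.

```java
import java.net.URI;
import java.util.HashMap;
import java.util.Map;

/** Hypothetical sketch of a public-dataset lookup with tester overrides. */
public class PublicDatasetTestUtilsSketch {
  static final String DEFAULT_CSV = "s3a://landsat-pds/scene_list.gz";

  /** Return the CSV test file, honouring an override key if the tester set one. */
  public static URI requireCsvTestFile(Map<String, String> conf) {
    String uri = conf.getOrDefault("fs.s3a.scale.test.csvfile", DEFAULT_CSV);
    return URI.create(uri);
  }

  /** Skip-style assumption: only run against an aws-partition endpoint. */
  public static void assumeAwsPartition(Map<String, String> conf) {
    String endpoint = conf.getOrDefault("fs.s3a.endpoint", "");
    if (!endpoint.isEmpty() && !endpoint.contains("amazonaws.com")) {
      // in a real test this would be a JUnit AssumptionViolatedException
      throw new AssumptionViolated("not an aws-partition endpoint: " + endpoint);
    }
  }

  static class AssumptionViolated extends RuntimeException {
    AssumptionViolated(String msg) { super(msg); }
  }
}
```

Keeping the lookup in one class gives a single source of truth for dataset URIs and one place to hang partition assumptions and override handling.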






[jira] [Created] (HADOOP-18201) Remove base and bucket overrides for endpoint in ITestS3ARequesterPays.java

2022-04-13 Thread Mehakmeet Singh (Jira)
Mehakmeet Singh created HADOOP-18201:


 Summary: Remove base and bucket overrides for endpoint in 
ITestS3ARequesterPays.java
 Key: HADOOP-18201
 URL: https://issues.apache.org/jira/browse/HADOOP-18201
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 3.3.3
Reporter: Mehakmeet Singh
Assignee: Daniel Carl Jones


If a user has set the endpoint for their bucket in a test environment, we 
should ignore that in a test that won't use the bucket that the endpoint is set 
for. In this case, we are using "s3a://usgs-landsat/" which is in the region 
us-west-2, and would fail if the user has explicitly set the endpoint to 
something else.

Example (I have set the endpoint to ap-south-1):
{code:java}
[ERROR] 
testRequesterPaysDisabledFails(org.apache.hadoop.fs.s3a.ITestS3ARequesterPays)  
Time elapsed: 9.323 s  <<< ERROR!
org.apache.hadoop.fs.s3a.AWSRedirectException: getFileStatus on 
s3a://usgs-landsat/collection02/catalog.json: 
com.amazonaws.services.s3.model.AmazonS3Exception: The bucket is in this 
region: us-west-2. Please use this region to retry the request (Service: Amazon 
S3; Status Code: 301; Error Code: 301 Moved Permanently; Request ID: 
Z09V8PMEEN5PHDRZ; S3 Extended Request ID: 
B7KDQntCuVmLJAyXvuY4UNXjdUrgn3xd26n8u7ThueNNxvKas6g3RsXo7oxBcvHrpcous2L+Lbk=; 
Proxy: null), S3 Extended Request ID: 
B7KDQntCuVmLJAyXvuY4UNXjdUrgn3xd26n8u7ThueNNxvKas6g3RsXo7oxBcvHrpcous2L+Lbk=:301
 Moved Permanently: The bucket is in this region: us-west-2. Please use this 
region to retry the request (Service: Amazon S3; Status Code: 301; Error Code: 
301 Moved Permanently; Request ID: Z09V8PMEEN5PHDRZ; S3 Extended Request ID: 
B7KDQntCuVmLJAyXvuY4UNXjdUrgn3xd26n8u7ThueNNxvKas6g3RsXo7oxBcvHrpcous2L+Lbk=; 
Proxy: null)
 at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:233)
 at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:171)
 at 
org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:3440)
 at 
org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:3346)
 at 
org.apache.hadoop.fs.s3a.S3AFileSystem.extractOrFetchSimpleFileStatus(S3AFileSystem.java:4890)
 at 
org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$executeOpen$6(S3AFileSystem.java:1437)
 at 
org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:543)
 at 
org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:524)
 at 
org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:445)
 at org.apache.hadoop.fs.s3a.S3AFileSystem.executeOpen(S3AFileSystem.java:1435)
 at org.apache.hadoop.fs.s3a.S3AFileSystem.open(S3AFileSystem.java:1409)
  
<..>{code}
CC: [~ste...@apache.org]  [~mthakur]  [~dannycjones]






[GitHub] [hadoop] hadoop-yetus commented on pull request #4150: YARN-11107:When NodeLabel is enabled for a YARN cluster, AM blacklist…

2022-04-13 Thread GitBox


hadoop-yetus commented on PR #4150:
URL: https://github.com/apache/hadoop/pull/4150#issuecomment-1098054302

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  4s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  42m  0s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  6s |  |  trunk passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   0m 58s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 51s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  5s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  trunk passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   2m 15s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m 49s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  1s |  |  the patch passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   1m  1s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 52s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 52s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 39s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4150/2/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt)
 |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 2 new + 2 unchanged - 0 fixed = 4 total (was 2)  |
   | +1 :green_heart: |  mvnsite  |   0m 54s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  the patch passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   2m  0s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  24m 25s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 101m  7s |  |  
hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 33s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 208m  8s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4150/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4150 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 1f215f4c3849 4.15.0-163-generic #171-Ubuntu SMP Fri Nov 5 
11:55:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 81ae4f1cda02c45e4c92dc17432f31e5cc3e870e |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4150/2/testReport/ |
   | Max. process+thread count | 925 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4150/2/console |
   

[jira] [Work logged] (HADOOP-17833) Improve Magic Committer Performance

2022-04-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17833?focusedWorklogId=756382&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-756382
 ]

ASF GitHub Bot logged work on HADOOP-17833:
---

Author: ASF GitHub Bot
Created on: 13/Apr/22 13:23
Start Date: 13/Apr/22 13:23
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on PR #3289:
URL: https://github.com/apache/hadoop/pull/3289#issuecomment-1098045775

   ```
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/CommitContext.java:348:
  private class PoolSubmitter implements TaskPool.Submitter, Closeable {: Class 
PoolSubmitter should be declared as final. [FinalClass]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/files/PersistentCommitData.java:105:
return serializer.load(fs, path,status);:36: ',' is not followed by 
whitespace. [WhitespaceAfter]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/CreateFileBuilder.java:22:import
 java.util.Collections;:8: Unused import - java.util.Collections. 
[UnusedImports]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/MkdirOperation.java:190:
void createFakeDirectory(final Path dir) throws IOException;:30: Redundant 
'final' modifier. [RedundantModifier]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/WriteOperationHelper.java:326:
   * {@link S3AFileSystem#finishedWrite(String, long, String, String, 
org.apache.hadoop.fs.s3a.impl.PutObjectOptions)}: Line is longer than 100 
characters (found 118). [LineLength]
   
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/ITestCommitOperationCost.java:256:
  commitOperations.commitOrFail(singleCommit);: 'block' child has 
incorrect indentation level 10, expected level should be 6. [Indentation]
   
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/ITestCommitOperationCost.java:257:
  IOStatistics st = commitOperations.getIOStatistics();: 'block' child 
has incorrect indentation level 10, expected level should be 6. [Indentation]
   
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/ITestCommitOperationCost.java:258:
  return ioStatisticsToPrettyString(st);: 'block' child has incorrect 
indentation level 10, expected level should be 6. [Indentation]
   
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/performance/ITestS3ADeleteCost.java:284:
);: 'method call rparen' has incorrect indentation level 8, expected 
level should be 4. [Indentation]
   
   
   
   Code | Warning
   

Issue Time Tracking
---

Worklog Id: (was: 756382)
Time Spent: 5h 50m  (was: 5h 40m)

> Improve Magic Committer Performance
> ---
>
> Key: HADOOP-17833
> URL: https://issues.apache.org/jira/browse/HADOOP-17833
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.3.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> Magic committer tasks can be slow because every file created with 
> overwrite=false triggers a HEAD (verify there's no file) and a LIST (that 
> there's no dir). And because of delayed manifestations, it may not behave as 
> expected.
> ParquetOutputFormat is one example of a library which does this.
> We could fix Parquet to use overwrite=true, but (a) there may be surprises in 
> other uses, (b) it would still leave the LIST, and (c) it would do nothing for 
> calls from other formats.
> Proposed: createFile() under a magic path skips all probes for a file/dir at 
> the end of the path.
> Only a single task attempt will be writing to that directory, and it should 
> know what it is doing. If there are conflicting file names and parts across 
> tasks, that won't even get picked up at this point. And none of the committers 
> ever check for this: you'll get the last file manifested (s3a) or renamed 
> (file).
> If we skip the checks, we will save two HTTP requests per file.
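The proposal above can be sketched as a pure decision function: under a magic path (S3A marks these with a "__magic" directory element), create-with-overwrite=false skips both the HEAD probe (file check) and the LIST probe (directory check). The names and the marker handling here are simplified assumptions, not the real S3A code.

```java
/** Sketch: decide whether create(path, overwrite=false) needs existence probes. */
public class MagicPathSketch {

  /** A path is "magic" if any element is the __magic marker directory. */
  public static boolean isMagicPath(String path) {
    for (String element : path.split("/")) {
      if (element.equals("__magic")) {
        return true;
      }
    }
    return false;
  }

  /**
   * Number of existence probes needed before creating the file:
   * a HEAD (no file at the path) plus a LIST (no directory there),
   * both skipped when overwriting or writing under a magic path.
   */
  public static int probesNeeded(String path, boolean overwrite) {
    if (overwrite || isMagicPath(path)) {
      return 0; // saves 2 HTTP requests per file
    }
    return 2;
  }
}
```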






[GitHub] [hadoop] steveloughran commented on pull request #3289: HADOOP-17833. Improve Magic Committer performance

2022-04-13 Thread GitBox


steveloughran commented on PR #3289:
URL: https://github.com/apache/hadoop/pull/3289#issuecomment-1098045775

   ```
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/CommitContext.java:348:
  private class PoolSubmitter implements TaskPool.Submitter, Closeable {: Class 
PoolSubmitter should be declared as final. [FinalClass]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/files/PersistentCommitData.java:105:
return serializer.load(fs, path,status);:36: ',' is not followed by 
whitespace. [WhitespaceAfter]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/CreateFileBuilder.java:22:import
 java.util.Collections;:8: Unused import - java.util.Collections. 
[UnusedImports]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/MkdirOperation.java:190:
void createFakeDirectory(final Path dir) throws IOException;:30: Redundant 
'final' modifier. [RedundantModifier]
   
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/WriteOperationHelper.java:326:
   * {@link S3AFileSystem#finishedWrite(String, long, String, String, 
org.apache.hadoop.fs.s3a.impl.PutObjectOptions)}: Line is longer than 100 
characters (found 118). [LineLength]
   
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/ITestCommitOperationCost.java:256:
  commitOperations.commitOrFail(singleCommit);: 'block' child has 
incorrect indentation level 10, expected level should be 6. [Indentation]
   
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/ITestCommitOperationCost.java:257:
  IOStatistics st = commitOperations.getIOStatistics();: 'block' child 
has incorrect indentation level 10, expected level should be 6. [Indentation]
   
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/ITestCommitOperationCost.java:258:
  return ioStatisticsToPrettyString(st);: 'block' child has incorrect 
indentation level 10, expected level should be 6. [Indentation]
   
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/performance/ITestS3ADeleteCost.java:284:
);: 'method call rparen' has incorrect indentation level 8, expected 
level should be 4. [Indentation]
   
   
   
   Code | Warning
   -- | --
   IS | Inconsistent synchronization of 
org.apache.hadoop.fs.s3a.commit.CommitContext.outerSubmitter; locked 60% of time
     | Bug type IS2_INCONSISTENT_SYNC (click for details)In class 
org.apache.hadoop.fs.s3a.commit.CommitContextField 
org.apache.hadoop.fs.s3a.commit.CommitContext.outerSubmitterSynchronized 60% of 
the timeUnsynchronized access at CommitContext.java:[line 291]Unsynchronized 
access at CommitContext.java:[line 170]Synchronized access at 
CommitContext.java:[line 332]Synchronized access at CommitContext.java:[line 
330]Synchronized access at CommitContext.java:[line 332]
   
   
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/AbstractS3ACommitter.java:1442:
 warning: no @throws for java.io.IOException
   
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/MagicCommitIntegration.java:94:
 warning: no @param for trackerStatistics
   
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/files/PersistentCommitData.java:121:
 warning: no @param for path
   
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/commit/magic/MagicCommitTracker.java:80:
 warning: no @param for trackerStatistics
   
   Code Warning
   IS   Inconsistent synchronization of 
org.apache.hadoop.fs.s3a.commit.CommitContext.outerSubmitter; locked 60% of time
   [Bug type IS2_INCONSISTENT_SYNC (click for 
details)](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3289/11/artifact/out/new-spotbugs-hadoop-tools_hadoop-aws.html#IS2_INCONSISTENT_SYNC)
   In class org.apache.hadoop.fs.s3a.commit.CommitContext
   Field org.apache.hadoop.fs.s3a.commit.CommitContext.outerSubmitter
   Synchronized 60% of the time
   Unsynchronized access at CommitContext.java:[line 291]
   Unsynchronized access at CommitContext.java:[line 170]
   Synchronized access at CommitContext.java:[line 332]
   Synchronized access at CommitContext.java:[line 330]
   Synchronized access at CommitContext.java:[line 332]
   
   ```
   





[jira] [Work logged] (HADOOP-16202) Enhance openFile() for better read performance against object stores

2022-04-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16202?focusedWorklogId=756381&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-756381
 ]

ASF GitHub Bot logged work on HADOOP-16202:
---

Author: ASF GitHub Bot
Created on: 13/Apr/22 13:21
Start Date: 13/Apr/22 13:21
Worklog Time Spent: 10m 
  Work Description: dannycjones commented on code in PR #2584:
URL: https://github.com/apache/hadoop/pull/2584#discussion_r849475521


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/AbstractS3AMockTest.java:
##
@@ -70,6 +70,10 @@ public Configuration createConfiguration() {
 // use minimum multipart size for faster triggering
 conf.setLong(Constants.MULTIPART_SIZE, MULTIPART_MIN_SIZE);
 conf.setInt(Constants.S3A_BUCKET_PROBE, 1);
+// this is so stream draining is always blocking, allowing
+// assertions to be safely made without worrying
+// about any race conditions
+conf.setInt(ASYNC_DRAIN_THRESHOLD, 128_000);

Review Comment:
   Only after posting this has it clicked - we just want to make sure any 
assertions on the stream are completed after drain? Makes sense.
   
   `Integer.MAX_VALUE` might make it more explicit - I was wondering the 
significance of `128_000`.





Issue Time Tracking
---

Worklog Id: (was: 756381)
Time Spent: 19.5h  (was: 19h 20m)

> Enhance openFile() for better read performance against object stores 
> -
>
> Key: HADOOP-16202
> URL: https://issues.apache.org/jira/browse/HADOOP-16202
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, fs/s3, tools/distcp
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 19.5h
>  Remaining Estimate: 0h
>
> The {{openFile()}} builder API lets us add new options when reading a file.
> Add an option {{"fs.s3a.open.option.length"}} which takes a long and allows 
> the length of the file to be declared. If set, *no check for the existence of 
> the file is issued when opening the file*.
> Also: withFileStatus() should take any FileStatus implementation, rather than 
> only S3AFileStatus, and should not check that the path matches the path being 
> opened. This is needed to support viewFS-style wrapping and mounting.
> It should also be adopted where appropriate to stop clusters with S3A reads 
> switched to random IO from killing download/localization:
> * fs shell copyToLocal
> * distcp
> * IOUtils.copy






[GitHub] [hadoop] dannycjones commented on a diff in pull request #2584: HADOOP-16202. Enhance openFile() for better read performance against object stores

2022-04-13 Thread GitBox


dannycjones commented on code in PR #2584:
URL: https://github.com/apache/hadoop/pull/2584#discussion_r849475521


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/AbstractS3AMockTest.java:
##
@@ -70,6 +70,10 @@ public Configuration createConfiguration() {
 // use minimum multipart size for faster triggering
 conf.setLong(Constants.MULTIPART_SIZE, MULTIPART_MIN_SIZE);
 conf.setInt(Constants.S3A_BUCKET_PROBE, 1);
+// this is so stream draining is always blocking, allowing
+// assertions to be safely made without worrying
+// about any race conditions
+conf.setInt(ASYNC_DRAIN_THRESHOLD, 128_000);

Review Comment:
   Only after posting this has it clicked - we just want to make sure any 
assertions on the stream are completed after drain? Makes sense.
   
   `Integer.MAX_VALUE` might make it more explicit - I was wondering the 
significance of `128_000`.
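The threshold behind this discussion works as a simple comparison: a stream close only drains the remaining bytes asynchronously when at least fs.s3a.async.drain.threshold bytes are left, so setting the threshold high in test configuration (128_000, or Integer.MAX_VALUE for full explicitness) forces every drain to run synchronously and keeps assertions race-free. A hedged sketch of that decision, with illustrative names:

```java
/** Sketch of the drain decision controlled by fs.s3a.async.drain.threshold. */
public class DrainDecisionSketch {

  /**
   * Drain asynchronously only when many bytes remain in the stream;
   * smaller remainders are drained inline (blocking), so callers see
   * a fully drained stream as soon as close() returns.
   */
  public static boolean drainAsync(long bytesRemaining, long threshold) {
    return bytesRemaining >= threshold;
  }
}
```

In a test that asserts on stream state right after close(), a large threshold guarantees drainAsync() is false for any realistic remainder, so no background drain can race the assertion.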






[jira] [Work logged] (HADOOP-15983) Remove the usage of jersey-json to remove jackson 1.x dependency.

2022-04-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15983?focusedWorklogId=756380&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-756380
 ]

ASF GitHub Bot logged work on HADOOP-15983:
---

Author: ASF GitHub Bot
Created on: 13/Apr/22 13:20
Start Date: 13/Apr/22 13:20
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on PR #3988:
URL: https://github.com/apache/hadoop/pull/3988#issuecomment-1098042616

   This should involve the subprojects that use Jersey, to make sure they are 
all happy.
   
   If it were a random GitHub artifact we'd be reluctant; the fact that you are 
an ASF member who could get code into our classpath anyway if you tried hard 
means this isn't an issue.
   
   It would probably need an incompatible-change entry in the release notes for 
any Maven project excluding/overriding the old one.




Issue Time Tracking
---

Worklog Id: (was: 756380)
Time Spent: 3h 10m  (was: 3h)

> Remove the usage of jersey-json to remove jackson 1.x dependency.
> -
>
> Key: HADOOP-15983
> URL: https://issues.apache.org/jira/browse/HADOOP-15983
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>







[GitHub] [hadoop] steveloughran commented on pull request #3988: [HADOOP-15983] use jersey-json that is built to use jackson2

2022-04-13 Thread GitBox


steveloughran commented on PR #3988:
URL: https://github.com/apache/hadoop/pull/3988#issuecomment-1098042616

   This should involve the subprojects that use Jersey, to make sure they are 
all happy.
   
   If it were a random GitHub artifact we'd be reluctant; the fact that you are 
an ASF member who could get code into our classpath anyway if you tried hard 
means this isn't an issue.
   
   It would probably need an incompatible-change entry in the release notes for 
any Maven project excluding/overriding the old one.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-16202) Enhance openFile() for better read performance against object stores

2022-04-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16202?focusedWorklogId=756378&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-756378
 ]

ASF GitHub Bot logged work on HADOOP-16202:
---

Author: ASF GitHub Bot
Created on: 13/Apr/22 13:19
Start Date: 13/Apr/22 13:19
Worklog Time Spent: 10m 
  Work Description: dannycjones commented on code in PR #2584:
URL: https://github.com/apache/hadoop/pull/2584#discussion_r845064242


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/FutureIOSupport.java:
##
@@ -136,56 +129,39 @@ private FutureIOSupport() {
* @param  type of builder
* @return the builder passed in.
*/
+  @Deprecated

Review Comment:
   Why deprecate this method when other methods promoted to `FutureIO` are 
happy without a deprecated flag?
   
   Should we encourage Hadoop developers to move to `FutureIO` once promoted 
from `FutureIOSupport`?



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/AbstractFSBuilderImpl.java:
##
@@ -187,10 +192,19 @@ public B opt(@Nonnull final String key, boolean value) {
   @Override
   public B opt(@Nonnull final String key, int value) {
 mandatoryKeys.remove(key);
+optionalKeys.add(key);
 options.setInt(key, value);
 return getThisBuilder();
   }
 
+  @Override
+  public B opt(@Nonnull final String key, final long value) {
+mandatoryKeys.remove(key);
+optionalKeys.add(key);
+options.setLong(key, value);
+return getThisBuilder();
+  }

Review Comment:
   JavaDoc?
   
   ```java
 /**
  * Set optional long parameter for the Builder.
  *
  * @see #opt(String, String)
  */
   ```
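As a self-contained sketch of why the hunk above pairs `mandatoryKeys.remove(key)` with `optionalKeys.add(key)` (class and method names here are illustrative, not the actual `AbstractFSBuilderImpl` API): an option set via `opt()` must stop being mandatory, and recording it as optional lets implementations report unknown optional keys without failing.

```java
import java.util.HashSet;
import java.util.Properties;
import java.util.Set;

public class BuilderOptSketch {
  private final Set<String> mandatoryKeys = new HashSet<>();
  private final Set<String> optionalKeys = new HashSet<>();
  private final Properties options = new Properties();

  // optional option: must no longer be mandatory, but is tracked as optional
  BuilderOptSketch opt(String key, long value) {
    mandatoryKeys.remove(key);
    optionalKeys.add(key);
    options.setProperty(key, Long.toString(value));
    return this;
  }

  // mandatory option: the inverse bookkeeping
  BuilderOptSketch must(String key, long value) {
    mandatoryKeys.add(key);
    optionalKeys.remove(key);
    options.setProperty(key, Long.toString(value));
    return this;
  }

  public static void main(String[] args) {
    BuilderOptSketch b = new BuilderOptSketch()
        .must("fs.option.openfile.length", 1L)
        .opt("fs.option.openfile.length", 2L); // downgraded to optional
    System.out.println(b.mandatoryKeys.isEmpty() && b.optionalKeys.size() == 1);
  }
}
```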



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/FutureIOSupport.java:
##
@@ -53,6 +52,7 @@ private FutureIOSupport() {
   /**
* Given a future, evaluate it. Raised exceptions are
* extracted and handled.
+   * See {@link FutureIO#awaitFuture(Future, long, TimeUnit)}.

Review Comment:
   I think we want to reference the `awaitFuture` with only a future as arg?
   
   ```suggestion
  * See {@link FutureIO#awaitFuture(Future)}.
   ```
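For context on what these `awaitFuture` overloads do, a minimal self-contained sketch of the unwrapping behaviour (a stand-in, not the actual `FutureIO` implementation): the `ExecutionException` wrapper is stripped so callers see the underlying `IOException` or `RuntimeException` directly.

```java
import java.io.IOException;
import java.io.InterruptedIOException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class AwaitFutureSketch {

  // Illustrative stand-in for FutureIO.awaitFuture(Future): block on the
  // future and unwrap ExecutionException so the caller sees the real cause.
  static <T> T awaitFuture(Future<T> future) throws IOException {
    try {
      return future.get();
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
      throw (InterruptedIOException)
          new InterruptedIOException("interrupted").initCause(e);
    } catch (ExecutionException e) {
      Throwable cause = e.getCause();
      if (cause instanceof IOException) {
        throw (IOException) cause;
      }
      if (cause instanceof RuntimeException) {
        throw (RuntimeException) cause;
      }
      throw new IOException(cause);
    }
  }

  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newSingleThreadExecutor();
    Future<String> opened = pool.submit(() -> "stream-opened");
    System.out.println(awaitFuture(opened));
    pool.shutdown();
  }
}
```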



##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/AbstractS3AMockTest.java:
##
@@ -70,6 +70,10 @@ public Configuration createConfiguration() {
 // use minimum multipart size for faster triggering
 conf.setLong(Constants.MULTIPART_SIZE, MULTIPART_MIN_SIZE);
 conf.setInt(Constants.S3A_BUCKET_PROBE, 1);
+// this is so stream draining is always blocking, allowing
+// assertions to be safely made without worrying
+// about any race conditions
+conf.setInt(ASYNC_DRAIN_THRESHOLD, 128_000);

Review Comment:
   Hoping to better understand why the change is needed - what did the race 
conditions look like?
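A minimal sketch of the idea behind the threshold (hypothetical names; the real S3A draining logic is more involved): below the threshold the remaining bytes are drained inline before `close()` returns, above it a background thread does the draining, which is what opens the race window for assertions made immediately after `close()`.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class DrainSketch {
  // Illustrative threshold, mirroring the intent of ASYNC_DRAIN_THRESHOLD.
  static final long THRESHOLD = 128_000;

  static String drain(long remaining, ExecutorService pool) {
    if (remaining <= THRESHOLD) {
      // blocking drain: by the time close() returns, statistics are final
      return "sync";
    }
    // async drain: close() returns while bytes are still being discarded,
    // so an assertion made right afterwards can race with the drain
    pool.submit(() -> { /* read and discard the remaining bytes */ });
    return "async";
  }

  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newSingleThreadExecutor();
    System.out.println(drain(1_000, pool));
    System.out.println(drain(1_000_000, pool));
    pool.shutdown();
    pool.awaitTermination(5, TimeUnit.SECONDS);
  }
}
```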



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java:
##
@@ -409,22 +439,27 @@ public static boolean copy(FileSystem srcFS, FileStatus 
srcStatus,
   if (!dstFS.mkdirs(dst)) {
 return false;
   }
-  FileStatus contents[] = srcFS.listStatus(src);
-  for (int i = 0; i < contents.length; i++) {
-copy(srcFS, contents[i], dstFS,
- new Path(dst, contents[i].getPath().getName()),
- deleteSource, overwrite, conf);
+  RemoteIterator contents = srcFS.listStatusIterator(src);
+  while (contents.hasNext()) {
+FileStatus next = contents.next();
+copy(srcFS, next, dstFS,
+new Path(dst, next.getPath().getName()),
+deleteSource, overwrite, conf);
   }
 } else {
-  InputStream in=null;
+  InputStream in = null;
   OutputStream out = null;
   try {
-in = srcFS.open(src);
+in = awaitFuture(srcFS.openFile(src)
+.opt(FS_OPTION_OPENFILE_READ_POLICY,
+FS_OPTION_OPENFILE_READ_POLICY_WHOLE_FILE)
+.opt(FS_OPTION_OPENFILE_LENGTH,
+srcStatus.getLen())   // file length hint for object stores

Review Comment:
   When should we use `FS_OPTION_OPENFILE_LENGTH` option vs. 
`.withFileStatus(status)`?



##
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstreambuilder.md:
##
@@ -110,13 +170,18 @@ custom subclasses.
 
 This is critical to ensure safe use of the feature: directory listing/
 status serialization/deserialization can result result in the 
`withFileStatus()`
-argumennt not being the custom subclass returned by the Filesystem instance's
+argument not being the custom subclass returned by the Filesystem instance's
 own `getFileStatus()`, `listFiles()`, `listLocatedStatus()` calls, etc.
 
 In such a situation the implementations must:
 
-1. Validate the path (always).
-1. Use the status/convert to the custom type, *or* simply discard it.

[GitHub] [hadoop] dannycjones commented on a diff in pull request #2584: HADOOP-16202. Enhance openFile() for better read performance against object stores

2022-04-13 Thread GitBox


dannycjones commented on code in PR #2584:
URL: https://github.com/apache/hadoop/pull/2584#discussion_r845064242


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/FutureIOSupport.java:
##
@@ -136,56 +129,39 @@ private FutureIOSupport() {
* @param  type of builder
* @return the builder passed in.
*/
+  @Deprecated

Review Comment:
   Why deprecate this method when other methods promoted to `FutureIO` are 
happy without a deprecated flag?
   
   Should we encourage Hadoop developers to move to `FutureIO` once promoted 
from `FutureIOSupport`?



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/AbstractFSBuilderImpl.java:
##
@@ -187,10 +192,19 @@ public B opt(@Nonnull final String key, boolean value) {
   @Override
   public B opt(@Nonnull final String key, int value) {
 mandatoryKeys.remove(key);
+optionalKeys.add(key);
 options.setInt(key, value);
 return getThisBuilder();
   }
 
+  @Override
+  public B opt(@Nonnull final String key, final long value) {
+mandatoryKeys.remove(key);
+optionalKeys.add(key);
+options.setLong(key, value);
+return getThisBuilder();
+  }

Review Comment:
   JavaDoc?
   
   ```java
 /**
  * Set optional long parameter for the Builder.
  *
  * @see #opt(String, String)
  */
   ```



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/FutureIOSupport.java:
##
@@ -53,6 +52,7 @@ private FutureIOSupport() {
   /**
* Given a future, evaluate it. Raised exceptions are
* extracted and handled.
+   * See {@link FutureIO#awaitFuture(Future, long, TimeUnit)}.

Review Comment:
   I think we want to reference the `awaitFuture` with only a future as arg?
   
   ```suggestion
  * See {@link FutureIO#awaitFuture(Future)}.
   ```



##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/AbstractS3AMockTest.java:
##
@@ -70,6 +70,10 @@ public Configuration createConfiguration() {
 // use minimum multipart size for faster triggering
 conf.setLong(Constants.MULTIPART_SIZE, MULTIPART_MIN_SIZE);
 conf.setInt(Constants.S3A_BUCKET_PROBE, 1);
+// this is so stream draining is always blocking, allowing
+// assertions to be safely made without worrying
+// about any race conditions
+conf.setInt(ASYNC_DRAIN_THRESHOLD, 128_000);

Review Comment:
   Hoping to better understand why the change is needed - what did the race 
conditions look like?



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java:
##
@@ -409,22 +439,27 @@ public static boolean copy(FileSystem srcFS, FileStatus 
srcStatus,
   if (!dstFS.mkdirs(dst)) {
 return false;
   }
-  FileStatus contents[] = srcFS.listStatus(src);
-  for (int i = 0; i < contents.length; i++) {
-copy(srcFS, contents[i], dstFS,
- new Path(dst, contents[i].getPath().getName()),
- deleteSource, overwrite, conf);
+  RemoteIterator contents = srcFS.listStatusIterator(src);
+  while (contents.hasNext()) {
+FileStatus next = contents.next();
+copy(srcFS, next, dstFS,
+new Path(dst, next.getPath().getName()),
+deleteSource, overwrite, conf);
   }
 } else {
-  InputStream in=null;
+  InputStream in = null;
   OutputStream out = null;
   try {
-in = srcFS.open(src);
+in = awaitFuture(srcFS.openFile(src)
+.opt(FS_OPTION_OPENFILE_READ_POLICY,
+FS_OPTION_OPENFILE_READ_POLICY_WHOLE_FILE)
+.opt(FS_OPTION_OPENFILE_LENGTH,
+srcStatus.getLen())   // file length hint for object stores

Review Comment:
   When should we use `FS_OPTION_OPENFILE_LENGTH` option vs. 
`.withFileStatus(status)`?



##
hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstreambuilder.md:
##
@@ -110,13 +170,18 @@ custom subclasses.
 
 This is critical to ensure safe use of the feature: directory listing/
 status serialization/deserialization can result result in the 
`withFileStatus()`
-argumennt not being the custom subclass returned by the Filesystem instance's
+argument not being the custom subclass returned by the Filesystem instance's
 own `getFileStatus()`, `listFiles()`, `listLocatedStatus()` calls, etc.
 
 In such a situation the implementations must:
 
-1. Validate the path (always).
-1. Use the status/convert to the custom type, *or* simply discard it.
+1. Verify that `status.getPath().getName()` matches the current 
`path.getName()`
+   value. The rest of the path MUST NOT be validated.
+1. Use any status fields as desired -for example the file length.
+
+Even if not values of the status are used, the presence of the argument

Review Comment:
   "none of the values"?
   
   ```suggestion
   Even if none of the values of the status are used, the presence 

[GitHub] [hadoop] steveloughran merged pull request #4166: HDFS-14478: Add libhdfs APIs for openFile

2022-04-13 Thread GitBox


steveloughran merged PR #4166:
URL: https://github.com/apache/hadoop/pull/4166


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #4166: HDFS-14478: Add libhdfs APIs for openFile

2022-04-13 Thread GitBox


steveloughran commented on PR #4166:
URL: https://github.com/apache/hadoop/pull/4166#issuecomment-1098036077

   The same tests fail on my arm64 Docker VM too, so they are not related.
   
   +1 for Sahil's patch.
   
   I'd like to follow up with some tests of failure conditions, especially once 
#2584 is in, but not here.





[GitHub] [hadoop] hadoop-yetus commented on pull request #4087: HDFS-16513. [SBN read] Observer Namenode should not trigger the edits rolling of active Namenode

2022-04-13 Thread GitBox


hadoop-yetus commented on PR #4087:
URL: https://github.com/apache/hadoop/pull/4087#issuecomment-1098019249

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 44s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  43m 23s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   1m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m 13s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 37s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  |  trunk passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 57s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m 30s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   1m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 21s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  0s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 30s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  the patch passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 52s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  26m  3s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 240m 42s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 49s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 360m 33s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4087/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4087 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux ec3b877e8bf4 4.15.0-169-generic #177-Ubuntu SMP Thu Feb 3 
10:50:38 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 0b1946bd7012b025bdc7ef89e679edba15f68242 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4087/2/testReport/ |
   | Max. process+thread count | 3012 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4087/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hadoop] hadoop-yetus commented on pull request #4167: HDFS-16538. EC decoding failed due to not enough valid inputs

2022-04-13 Thread GitBox


hadoop-yetus commented on PR #4167:
URL: https://github.com/apache/hadoop/pull/4167#issuecomment-1098014110

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 45s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  39m  1s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  2s |  |  trunk passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   0m 55s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 35s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  1s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  |  trunk passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   2m 47s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 45s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 50s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 53s |  |  the patch passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 45s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 48s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   2m 30s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 49s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 23s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 101m 35s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4167/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4167 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 6a5ad419e32a 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 0507620c617b7868361a484773d3f74f0a1dd8dc |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4167/2/testReport/ |
   | Max. process+thread count | 548 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4167/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hadoop] hadoop-yetus commented on pull request #4168: HDFS-16539. RBF: Support refreshing/changing router fairness policy controller without rebooting router

2022-04-13 Thread GitBox


hadoop-yetus commented on PR #4168:
URL: https://github.com/apache/hadoop/pull/4168#issuecomment-1098013964

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 39s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  41m 58s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 47s |  |  trunk passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 47s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  |  trunk passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   1m 39s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m 14s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  the patch passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 18s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4168/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 2 new + 1 
unchanged - 0 fixed = 3 total (was 1)  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  the patch passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | -1 :x: |  spotbugs  |   1m 35s | 
[/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4168/1/artifact/out/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf.html)
 |  hadoop-hdfs-project/hadoop-hdfs-rbf generated 1 new + 0 unchanged - 0 fixed 
= 1 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  24m  3s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  22m  6s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 39s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 126m 20s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | SpotBugs | module:hadoop-hdfs-project/hadoop-hdfs-rbf |
   |  |  Inconsistent synchronization of 
org.apache.hadoop.hdfs.server.federation.router.RouterRpcClient.routerRpcFairnessPolicyController;
 locked 50% of time  Unsynchronized access at RouterRpcClient.java:50% of time  
Unsynchronized access at RouterRpcClient.java:[line 1611] |
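The SpotBugs finding above is the classic inconsistent-synchronization pattern: a field written under a lock but read without one. A minimal sketch of the usual fix (illustrative names, not the actual `RouterRpcClient` code) is to make the field `volatile`, or to synchronize the reads as well:

```java
public class ControllerHolder {
  // volatile: unsynchronized readers always see the latest controller,
  // silencing the inconsistent-synchronization warning
  private volatile Object controller = new Object();

  // writers still serialize against each other
  synchronized void refresh(Object newController) {
    controller = newController;
  }

  Object current() {
    return controller; // safe unsynchronized read of a volatile field
  }

  public static void main(String[] args) {
    ControllerHolder h = new ControllerHolder();
    Object c = new Object();
    h.refresh(c);
    System.out.println(h.current() == c);
  }
}
```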
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4168/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4168 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 243f27ee10f9 4.15.0-169-generic #177-Ubuntu SMP Thu Feb 3 
10:50:38 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / bb35ec4b477abac7cefe03dcef2cde1327d5f43e |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 

[jira] [Work logged] (HADOOP-16202) Enhance openFile() for better read performance against object stores

2022-04-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16202?focusedWorklogId=756344&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-756344
 ]

ASF GitHub Bot logged work on HADOOP-16202:
---

Author: ASF GitHub Bot
Created on: 13/Apr/22 12:01
Start Date: 13/Apr/22 12:01
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on PR #2584:
URL: https://github.com/apache/hadoop/pull/2584#issuecomment-1097968517

   @mehakmeet thanks, yes, sounds like it. file a JIRA 




Issue Time Tracking
---

Worklog Id: (was: 756344)
Time Spent: 19h 10m  (was: 19h)

> Enhance openFile() for better read performance against object stores 
> -
>
> Key: HADOOP-16202
> URL: https://issues.apache.org/jira/browse/HADOOP-16202
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, fs/s3, tools/distcp
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 19h 10m
>  Remaining Estimate: 0h
>
> The {{openFile()}} builder API lets us add new options when reading a file
> Add an option {{"fs.s3a.open.option.length"}} which takes a long and allows 
> the length of the file to be declared. If set, *no check for the existence of 
> the file is issued when opening the file*
> Also: withFileStatus() to take any FileStatus implementation, rather than 
> only S3AFileStatus -and not check that the path matches the path being 
> opened. Needed to support viewFS-style wrapping and mounting.
> and Adopt where appropriate to stop clusters with S3A reads switched to 
> random IO from killing download/localization
> * fs shell copyToLocal
> * distcp
> * IOUtils.copy



--
This message was sent by Atlassian Jira
(v8.20.1#820001)







[jira] [Work logged] (HADOOP-16202) Enhance openFile() for better read performance against object stores

2022-04-13 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16202?focusedWorklogId=756325&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-756325
 ]

ASF GitHub Bot logged work on HADOOP-16202:
---

Author: ASF GitHub Bot
Created on: 13/Apr/22 11:26
Start Date: 13/Apr/22 11:26
Worklog Time Spent: 10m 
  Work Description: mehakmeet commented on PR #2584:
URL: https://github.com/apache/hadoop/pull/2584#issuecomment-1097937077

   Ran the AWS test suite with CSE enabled. Everything ran fine, though I did see 
some region errors in `ITestS3ARequesterPays`:
   
   ```
   [ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 
11.433 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.ITestS3ARequesterPays
   [ERROR] 
testRequesterPaysDisabledFails(org.apache.hadoop.fs.s3a.ITestS3ARequesterPays)  
Time elapsed: 9.323 s  <<< ERROR!
   org.apache.hadoop.fs.s3a.AWSRedirectException: getFileStatus on 
s3a://usgs-landsat/collection02/catalog.json: 
com.amazonaws.services.s3.model.AmazonS3Exception: The bucket is in this 
region: us-west-2. Please use this region to retry the request (Service: Amazon 
S3; Status Code: 301; Error Code: 301 Moved Permanently; Request ID: 
Z09V8PMEEN5PHDRZ; S3 Extended Request ID: 
B7KDQntCuVmLJAyXvuY4UNXjdUrgn3xd26n8u7ThueNNxvKas6g3RsXo7oxBcvHrpcous2L+Lbk=; 
Proxy: null), S3 Extended Request ID: 
B7KDQntCuVmLJAyXvuY4UNXjdUrgn3xd26n8u7ThueNNxvKas6g3RsXo7oxBcvHrpcous2L+Lbk=:301
 Moved Permanently: The bucket is in this region: us-west-2. Please use this 
region to retry the request (Service: Amazon S3; Status Code: 301; Error Code: 
301 Moved Permanently; Request ID: Z09V8PMEEN5PHDRZ; S3 Extended Request ID: 
B7KDQntCuVmLJAyXvuY4UNXjdUrgn3xd26n8u7ThueNNxvKas6g3RsXo7oxBcvHrpcous2L+Lbk=; 
Proxy: null)
at 
org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:233)
   ```
   Looking at the test, it seems we should also be removing the base and 
per-bucket overrides for the endpoint property.
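A self-contained sketch of what removing both levels of override means (using `java.util.Properties` as a stand-in for Hadoop's `Configuration`; the helper name mirrors, but is not, the real `S3ATestUtils.removeBaseAndBucketOverrides`): both the base key and the per-bucket `fs.s3a.bucket.<bucket>.*` variant must go, or the per-bucket value still wins over the test's intended setting.

```java
import java.util.Properties;

public class EndpointOverrideSketch {

  // Hypothetical helper: drop an option at both the base level and the
  // per-bucket level, following the real fs.s3a.bucket.<bucket>.<suffix>
  // key naming convention.
  static void removeBaseAndBucketOverrides(Properties conf, String bucket,
      String option) {
    conf.remove(option);
    String suffix = option.substring("fs.s3a.".length());
    conf.remove("fs.s3a.bucket." + bucket + "." + suffix);
  }

  public static void main(String[] args) {
    Properties conf = new Properties();
    conf.setProperty("fs.s3a.endpoint", "s3.eu-west-1.amazonaws.com");
    conf.setProperty("fs.s3a.bucket.usgs-landsat.endpoint",
        "s3.eu-west-1.amazonaws.com");
    removeBaseAndBucketOverrides(conf, "usgs-landsat", "fs.s3a.endpoint");
    System.out.println(conf.isEmpty());
  }
}
```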
   




Issue Time Tracking
---

Worklog Id: (was: 756325)
Time Spent: 19h  (was: 18h 50m)

> Enhance openFile() for better read performance against object stores 
> -
>
> Key: HADOOP-16202
> URL: https://issues.apache.org/jira/browse/HADOOP-16202
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, fs/s3, tools/distcp
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 19h
>  Remaining Estimate: 0h
>
> The {{openFile()}} builder API lets us add new options when reading a file.
> Add an option {{"fs.s3a.open.option.length"}} which takes a long and allows 
> the length of the file to be declared. If set, *no check for the existence of 
> the file is issued when opening the file*.
> Also: withFileStatus() should take any FileStatus implementation, rather than 
> only S3AFileStatus, and should not check that the path matches the path being 
> opened. This is needed to support viewFS-style wrapping and mounting.
> Adopt this where appropriate to stop clusters with S3A reads switched to 
> random IO from killing download/localization:
> * fs shell copyToLocal
> * distcp
> * IOUtils.copy
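A toy sketch of the declared-length idea described above (all class names here are hypothetical stand-ins, not the real FileSystem openFile() builder API): when the caller supplies the file length up front, opening can return immediately instead of issuing an existence probe (a HEAD request against the object store):

```java
import java.util.Optional;

/** Toy sketch of an openFile()-style builder with a declared-length option. */
public class OpenFileSketch {
  static int headRequests = 0; // counts simulated existence probes

  static class OpenFileBuilder {
    private Optional<Long> length = Optional.empty();

    /** Stands in for opt("fs.s3a.open.option.length", len). */
    OpenFileBuilder withLength(long len) {
      this.length = Optional.of(len);
      return this;
    }

    /** Returns the file length, probing the store only when none was declared. */
    long build() {
      if (length.isPresent()) {
        // Caller declared the length: trust it, skip the HEAD request.
        return length.get();
      }
      headRequests++;  // otherwise probe the store for the file status
      return 1024L;    // pretend the probe reported a 1 KiB object
    }
  }

  public static void main(String[] args) {
    long len = new OpenFileBuilder().withLength(2048L).build();
    System.out.println("length=" + len + " headRequests=" + headRequests);
  }
}
```

This is why declaring the length matters for random-IO workloads: every skipped probe is one less round trip per file opened.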



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mehakmeet commented on pull request #2584: HADOOP-16202. Enhance openFile() for better read performance against object stores

2022-04-13 Thread GitBox


mehakmeet commented on PR #2584:
URL: https://github.com/apache/hadoop/pull/2584#issuecomment-1097937077

   Ran the AWS test suite on CSE. Everything ran fine, though I did see some region errors in `ITestS3ARequesterPays`:
   
   ```
   [ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 11.433 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.ITestS3ARequesterPays
   [ERROR] testRequesterPaysDisabledFails(org.apache.hadoop.fs.s3a.ITestS3ARequesterPays)  Time elapsed: 9.323 s  <<< ERROR!
   org.apache.hadoop.fs.s3a.AWSRedirectException: getFileStatus on s3a://usgs-landsat/collection02/catalog.json: com.amazonaws.services.s3.model.AmazonS3Exception: The bucket is in this region: us-west-2. Please use this region to retry the request (Service: Amazon S3; Status Code: 301; Error Code: 301 Moved Permanently; Request ID: Z09V8PMEEN5PHDRZ; S3 Extended Request ID: B7KDQntCuVmLJAyXvuY4UNXjdUrgn3xd26n8u7ThueNNxvKas6g3RsXo7oxBcvHrpcous2L+Lbk=; Proxy: null), S3 Extended Request ID: B7KDQntCuVmLJAyXvuY4UNXjdUrgn3xd26n8u7ThueNNxvKas6g3RsXo7oxBcvHrpcous2L+Lbk=:301 Moved Permanently: The bucket is in this region: us-west-2. Please use this region to retry the request (Service: Amazon S3; Status Code: 301; Error Code: 301 Moved Permanently; Request ID: Z09V8PMEEN5PHDRZ; S3 Extended Request ID: B7KDQntCuVmLJAyXvuY4UNXjdUrgn3xd26n8u7ThueNNxvKas6g3RsXo7oxBcvHrpcous2L+Lbk=; Proxy: null)
   	at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:233)
   ```
   Looking at the test, it seems we should remove the base and bucket overrides for the endpoint property too.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org





[GitHub] [hadoop] hadoop-yetus commented on pull request #4167: HDFS-16538. EC decoding failed due to not enough valid inputs

2022-04-13 Thread GitBox


hadoop-yetus commented on PR #4167:
URL: https://github.com/apache/hadoop/pull/4167#issuecomment-1097910585

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 42s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m 25s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  2s |  |  trunk passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   0m 54s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 34s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  0s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  |  trunk passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   2m 46s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 25s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 49s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 54s |  |  the patch passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   0m 54s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 47s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   0m 47s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 50s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   2m 28s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 47s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 22s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 100m 29s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4167/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4167 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux f5521b8832b5 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 8846746dd03b9b54a7db1d7d79f2835eb1c6adb6 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.14.1+1-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4167/1/testReport/ |
   | Max. process+thread count | 543 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: 
hadoop-hdfs-project/hadoop-hdfs-client |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4167/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hadoop] cndaimin commented on a diff in pull request #4088: HDFS-16514. Reduce the failover sleep time if multiple namenode are c…

2022-04-13 Thread GitBox


cndaimin commented on code in PR #4088:
URL: https://github.com/apache/hadoop/pull/4088#discussion_r849339424


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/retry/RetryPolicies.java:
##
@@ -639,19 +647,24 @@ public FailoverOnNetworkExceptionRetry(RetryPolicy fallbackPolicy,
 
     public FailoverOnNetworkExceptionRetry(RetryPolicy fallbackPolicy,
         int maxFailovers, int maxRetries, long delayMillis, long maxDelayBase) {
+      this(fallbackPolicy, maxFailovers, maxRetries, delayMillis, maxDelayBase, 2);
+    }
+
+    public FailoverOnNetworkExceptionRetry(RetryPolicy fallbackPolicy,
+        int maxFailovers, int maxRetries, long delayMillis, long maxDelayBase,
+        int nnSize) {
       this.fallbackPolicy = fallbackPolicy;
       this.maxFailovers = maxFailovers;
       this.maxRetries = maxRetries;
       this.delayMillis = delayMillis;
       this.maxDelayBase = maxDelayBase;
+      this.nnSize = nnSize;
     }
 
     /**
      * @return 0 if this is our first failover/retry (i.e., retry immediately),

Review Comment:
   The comments here look like they need to be updated too.
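A simplified model of what a namenode-count-aware failover delay could look like (this is a sketch of the idea under review, not the actual RetryPolicies implementation): with n namenodes configured, the first n-1 failovers return 0 so that every other namenode is tried once immediately, and exponential backoff starts only after a full pass:

```java
/** Simplified model of failover backoff aware of the number of namenodes. */
public class FailoverBackoff {

  /**
   * @return sleep in milliseconds before the given failover attempt: 0 while
   * there are still untried namenodes, then exponential backoff capped at
   * maxDelayBase.
   */
  static long failoverDelay(int failovers, long delayMillis, long maxDelayBase, int nnSize) {
    // First pass over the other namenodes: fail over immediately.
    if (failovers < nnSize - 1) {
      return 0;
    }
    // After that, back off exponentially per extra failover, capped.
    int backoffRound = failovers - (nnSize - 1);
    return Math.min(delayMillis * (1L << Math.min(backoffRound, 16)), maxDelayBase);
  }

  public static void main(String[] args) {
    for (int f = 0; f < 5; f++) {
      System.out.println("failover " + f + " -> "
          + failoverDelay(f, 500, 15000, 3) + " ms");
    }
  }
}
```

A real policy would also add jitter so clients do not retry in lockstep; that detail is omitted here for brevity.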






[GitHub] [hadoop] kokonguyen191 opened a new pull request, #4168: HDFS-16539. RBF: Support refreshing/changing router fairness policy controller without rebooting router

2022-04-13 Thread GitBox


kokonguyen191 opened a new pull request, #4168:
URL: https://github.com/apache/hadoop/pull/4168

   ### Description of PR
   Add support for refreshing/changing the router fairness policy controller 
without the need to shut down and reboot the router.
   
   This patch makes use of the generic refresh feature on RouterAdmin. Usage: 
`hdfs dfsrouteradmin -refreshRouterArgs ROUTER_ADDR 
RefreshFairnessPolicyController`
   
   ### How was this patch tested?
   Unit test and local deployment.
   
   ### For code changes:
   
   - [x] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
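The generic refresh mechanism this PR builds on can be sketched as a registry of handlers keyed by identifier (a minimal self-contained stand-in; Hadoop's real classes are RefreshRegistry and RefreshHandler, and the names below are illustrative only):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

/** Minimal stand-in for a generic refresh registry keyed by identifier. */
public class RefreshSketch {
  private final Map<String, Function<String[], String>> handlers = new HashMap<>();

  /** Register a handler under an identifier, e.g. "RefreshFairnessPolicyController". */
  void register(String identifier, Function<String[], String> handler) {
    handlers.put(identifier, handler);
  }

  /** Dispatch a refresh request (what -refreshRouterArgs would trigger). */
  String dispatch(String identifier, String... args) {
    Function<String[], String> h = handlers.get(identifier);
    if (h == null) {
      return "ERROR: no handler registered for " + identifier;
    }
    return h.apply(args);
  }

  public static void main(String[] args) {
    RefreshSketch registry = new RefreshSketch();
    // The router would register this handler at startup; the admin command
    // `hdfs dfsrouteradmin -refreshRouterArgs ROUTER_ADDR <identifier>`
    // then reaches it through the registry.
    registry.register("RefreshFairnessPolicyController",
        a -> "Reloaded fairness policy controller");
    System.out.println(registry.dispatch("RefreshFairnessPolicyController"));
  }
}
```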





[GitHub] [hadoop] zhangxiping1 commented on a diff in pull request #4150: YARN-11107:When NodeLabel is enabled for a YARN cluster, AM blacklist…

2022-04-13 Thread GitBox


zhangxiping1 commented on code in PR #4150:
URL: https://github.com/apache/hadoop/pull/4150#discussion_r849311272


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/DefaultAMSProcessor.java:
##
@@ -351,7 +351,19 @@ public void allocate(ApplicationAttemptId appAttemptId,
         ((AbstractYarnScheduler) getScheduler())
             .getApplicationAttempt(appAttemptId).pullUpdateContainerErrors());
 
-    response.setNumClusterNodes(getScheduler().getNumClusterNodes());
+    String label = "";
+    try {
+      label = rmContext.getScheduler()
+          .getQueueInfo(app.getQueue(), false, false)
+          .getDefaultNodeLabelExpression();
+    } catch (Exception e) {
+    }

Review Comment:
   @brumi1024 hi, I have completed the test case!
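On the empty `catch` block in the diff above: a sketch of a more defensive lookup that logs the failure and falls back to the default partition instead of silently swallowing the exception (mock types here; the real code goes through the YARN scheduler's getQueueInfo API):

```java
import java.util.Map;
import java.util.logging.Logger;

/** Sketch: resolve a queue's default node-label expression with a safe fallback. */
public class LabelLookup {
  private static final Logger LOG = Logger.getLogger(LabelLookup.class.getName());
  static final String DEFAULT_LABEL = ""; // empty string == the default partition

  /** A plain map of queue -> default node label stands in for the scheduler. */
  static String defaultNodeLabel(Map<String, String> queueLabels, String queue) {
    try {
      String label = queueLabels.get(queue);
      if (label == null) {
        throw new IllegalArgumentException("unknown queue: " + queue);
      }
      return label;
    } catch (RuntimeException e) {
      // Log instead of swallowing silently, then fall back to the default partition.
      LOG.warning("Could not resolve node label for queue " + queue + ": " + e);
      return DEFAULT_LABEL;
    }
  }

  public static void main(String[] args) {
    Map<String, String> labels = Map.of("prod", "gpu");
    System.out.println(defaultNodeLabel(labels, "prod"));
    System.out.println("[" + defaultNodeLabel(labels, "missing") + "]");
  }
}
```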





