[GitHub] [hadoop] hadoop-yetus commented on pull request #2564: YARN-10538: Add RECOMMISSIONING nodes to the list of updated nodes returned to the AM

2021-01-05 Thread GitBox


hadoop-yetus commented on pull request #2564:
URL: https://github.com/apache/hadoop/pull/2564#issuecomment-755142839


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 47s |  |  Docker mode activated.  |
   || _ Prechecks _ ||
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
   || _ trunk Compile Tests _ ||
   | +1 :green_heart: |  mvninstall  |  32m 51s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 59s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   0m 52s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 43s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 58s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m  7s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 47s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m 44s |  |  trunk passed  |
   || _ Patch Compile Tests _ ||
   | +1 :green_heart: |  mvninstall  |   0m 51s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 51s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   0m 51s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 44s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 48s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  15m  0s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   1m 48s |  |  the patch passed  |
   || _ Other Tests _ ||
   | -1 :x: |  unit  |  89m 23s | 
[/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2564/5/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt)
 |  hadoop-yarn-server-resourcemanager in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 170m 35s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerAutoQueueCreation
 |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2564/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2564 |
   | JIRA Issue | YARN-10538 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 9198d6aa1a30 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ae4945fb2c8 |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2564/5/testReport/ |
   | Max. process+thread count | 887 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
   | Console output | 
https://ci-ha

[jira] [Work logged] (HADOOP-17191) ABFS: Run the integration tests with various combinations of configurations and publish consolidated results

2021-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17191?focusedWorklogId=531732&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-531732
 ]

ASF GitHub Bot logged work on HADOOP-17191:
---

Author: ASF GitHub Bot
Created on: 06/Jan/21 07:42
Start Date: 06/Jan/21 07:42
Worklog Time Spent: 10m 
  Work Description: sumangala-patki closed pull request #2596:
URL: https://github.com/apache/hadoop/pull/2596


   





Issue Time Tracking
---

Worklog Id: (was: 531732)
Time Spent: 10.5h  (was: 10h 20m)

> ABFS: Run the integration tests with various combinations of configurations
> and publish consolidated results
> --
>
> Key: HADOOP-17191
> URL: https://issues.apache.org/jira/browse/HADOOP-17191
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10.5h
>  Remaining Estimate: 0h
>
> ADLS Gen 2 supports accounts with and without hierarchical namespace (HNS)
> support. The ABFS driver supports various authorization mechanisms such as
> OAuth, SharedKey, and Shared Access Signature. The integration tests need to
> be executed against accounts with and without HNS support using each of
> these authorization mechanisms.
> Currently the developer has to run the tests manually with different
> combinations of configurations, e.g. an HNS account with SharedKey and
> OAuth, a non-HNS account with SharedKey, etc.
> The expectation is to automate these runs so that the developer can exercise
> the integration tests across the different configuration variants
> automatically.
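
A rough sketch of the configuration matrix this describes (illustrative only; the runner below and its names are not from the patch):

{code:java}
// Hypothetical sketch: enumerate the account-type x auth-type matrix
// described above. runTests() stands in for launching the suite once
// per combination; it is a placeholder, not the patch's actual API.
public class AbfsCombinationSketch {
  public static void main(String[] args) {
    String[] accountTypes = {"HNS", "NonHNS"};
    String[] authTypes = {"SharedKey", "OAuth", "SAS"};
    for (String account : accountTypes) {
      for (String auth : authTypes) {
        runTests(account, auth);
      }
    }
  }

  private static void runTests(String account, String auth) {
    System.out.println("run integration suite: " + account + " + " + auth);
  }
}
{code}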






[GitHub] [hadoop] sumangala-patki closed pull request #2596: HADOOP-17191. ABFS: Run the tests with various combinations of configurations and publish consolidated results

2021-01-05 Thread GitBox


sumangala-patki closed pull request #2596:
URL: https://github.com/apache/hadoop/pull/2596


   






[jira] [Work logged] (HADOOP-17191) ABFS: Run the integration tests with various combinations of configurations and publish consolidated results

2021-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17191?focusedWorklogId=531727&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-531727
 ]

ASF GitHub Bot logged work on HADOOP-17191:
---

Author: ASF GitHub Bot
Created on: 06/Jan/21 07:25
Start Date: 06/Jan/21 07:25
Worklog Time Spent: 10m 
  Work Description: sumangala-patki opened a new pull request #2596:
URL: https://github.com/apache/hadoop/pull/2596


   - Contributed by Bilahari T H
   
   (cherry picked from commit 
[4c033ba](https://github.com/apache/hadoop/commit/4c033bafa02855722a901def4773a6a15b214318#diff-3750721475f40454079559d1809fd98fc156fcdeffb1fe1556e69786f1b5166f))





Issue Time Tracking
---

Worklog Id: (was: 531727)
Time Spent: 10h 20m  (was: 10h 10m)

> ABFS: Run the integration tests with various combinations of configurations
> and publish consolidated results
> --
>
> Key: HADOOP-17191
> URL: https://issues.apache.org/jira/browse/HADOOP-17191
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10h 20m
>  Remaining Estimate: 0h
>
> ADLS Gen 2 supports accounts with and without hierarchical namespace (HNS)
> support. The ABFS driver supports various authorization mechanisms such as
> OAuth, SharedKey, and Shared Access Signature. The integration tests need to
> be executed against accounts with and without HNS support using each of
> these authorization mechanisms.
> Currently the developer has to run the tests manually with different
> combinations of configurations, e.g. an HNS account with SharedKey and
> OAuth, a non-HNS account with SharedKey, etc.
> The expectation is to automate these runs so that the developer can exercise
> the integration tests across the different configuration variants
> automatically.






[GitHub] [hadoop] sumangala-patki opened a new pull request #2596: HADOOP-17191. ABFS: Run the tests with various combinations of configurations and publish consolidated results

2021-01-05 Thread GitBox


sumangala-patki opened a new pull request #2596:
URL: https://github.com/apache/hadoop/pull/2596


   - Contributed by Bilahari T H
   
   (cherry picked from commit 
[4c033ba](https://github.com/apache/hadoop/commit/4c033bafa02855722a901def4773a6a15b214318#diff-3750721475f40454079559d1809fd98fc156fcdeffb1fe1556e69786f1b5166f))






[jira] [Updated] (HADOOP-17457) Seeing test ITestS3AInconsistency.testGetFileStatus failure.

2021-01-05 Thread Mukund Thakur (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukund Thakur updated HADOOP-17457:
---
Affects Version/s: 3.3.1

> Seeing test ITestS3AInconsistency.testGetFileStatus failure.
> 
>
> Key: HADOOP-17457
> URL: https://issues.apache.org/jira/browse/HADOOP-17457
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.1
>Reporter: Mukund Thakur
>Priority: Major
>
> [*ERROR*] *Tests* *run: 3*, *Failures: 1*, Errors: 0, Skipped: 0, Time 
> elapsed: 30.944 s *<<< FAILURE!* - in 
> org.apache.hadoop.fs.s3a.*ITestS3AInconsistency*
> [*ERROR*] testGetFileStatus(org.apache.hadoop.fs.s3a.ITestS3AInconsistency)  
> Time elapsed: 6.471 s  <<< FAILURE!
> java.lang.AssertionError: getFileStatus should fail due to delayed visibility.
>  at org.junit.Assert.fail(Assert.java:88)
>  at 
> org.apache.hadoop.fs.s3a.ITestS3AInconsistency.testGetFileStatus(ITestS3AInconsistency.java:114)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>  at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>  at java.lang.Thread.run(Thread.java:748)






[jira] [Updated] (HADOOP-17457) Seeing test ITestS3AInconsistency.testGetFileStatus failure.

2021-01-05 Thread Mukund Thakur (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukund Thakur updated HADOOP-17457:
---
Component/s: test
 fs/s3

> Seeing test ITestS3AInconsistency.testGetFileStatus failure.
> 
>
> Key: HADOOP-17457
> URL: https://issues.apache.org/jira/browse/HADOOP-17457
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, test
>Affects Versions: 3.3.1
>Reporter: Mukund Thakur
>Priority: Major
>
> [*ERROR*] *Tests* *run: 3*, *Failures: 1*, Errors: 0, Skipped: 0, Time 
> elapsed: 30.944 s *<<< FAILURE!* - in 
> org.apache.hadoop.fs.s3a.*ITestS3AInconsistency*
> [*ERROR*] testGetFileStatus(org.apache.hadoop.fs.s3a.ITestS3AInconsistency)  
> Time elapsed: 6.471 s  <<< FAILURE!
> java.lang.AssertionError: getFileStatus should fail due to delayed visibility.
>  at org.junit.Assert.fail(Assert.java:88)
>  at 
> org.apache.hadoop.fs.s3a.ITestS3AInconsistency.testGetFileStatus(ITestS3AInconsistency.java:114)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>  at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>  at java.lang.Thread.run(Thread.java:748)






[jira] [Created] (HADOOP-17457) Seeing test ITestS3AInconsistency.testGetFileStatus failure.

2021-01-05 Thread Mukund Thakur (Jira)
Mukund Thakur created HADOOP-17457:
--

 Summary: Seeing test ITestS3AInconsistency.testGetFileStatus 
failure.
 Key: HADOOP-17457
 URL: https://issues.apache.org/jira/browse/HADOOP-17457
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Mukund Thakur


{quote}[*ERROR*] *Tests* *run: 3*, *Failures: 1*, Errors: 0, Skipped: 0, Time 
elapsed: 30.944 s *<<< FAILURE!* - in 
org.apache.hadoop.fs.s3a.*ITestS3AInconsistency*

[*ERROR*] testGetFileStatus(org.apache.hadoop.fs.s3a.ITestS3AInconsistency)  
Time elapsed: 6.471 s  <<< FAILURE!

java.lang.AssertionError: getFileStatus should fail due to delayed visibility.

 at org.junit.Assert.fail(Assert.java:88)

 at 
org.apache.hadoop.fs.s3a.ITestS3AInconsistency.testGetFileStatus(ITestS3AInconsistency.java:114)

 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

 at java.lang.reflect.Method.invoke(Method.java:498)

 at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)

 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)

 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)

 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)

 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)

 at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)

 at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)

 at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)

 at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)

 at java.util.concurrent.FutureTask.run(FutureTask.java:266)

 at java.lang.Thread.run(Thread.java:748)
{quote}






[jira] [Updated] (HADOOP-17457) Seeing test ITestS3AInconsistency.testGetFileStatus failure.

2021-01-05 Thread Mukund Thakur (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukund Thakur updated HADOOP-17457:
---
Description: 
[*ERROR*] *Tests* *run: 3*, *Failures: 1*, Errors: 0, Skipped: 0, Time elapsed: 
30.944 s *<<< FAILURE!* - in org.apache.hadoop.fs.s3a.*ITestS3AInconsistency*

[*ERROR*] testGetFileStatus(org.apache.hadoop.fs.s3a.ITestS3AInconsistency)  
Time elapsed: 6.471 s  <<< FAILURE!

java.lang.AssertionError: getFileStatus should fail due to delayed visibility.

 at org.junit.Assert.fail(Assert.java:88)

 at 
org.apache.hadoop.fs.s3a.ITestS3AInconsistency.testGetFileStatus(ITestS3AInconsistency.java:114)

 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

 at java.lang.reflect.Method.invoke(Method.java:498)

 at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)

 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)

 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)

 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)

 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)

 at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)

 at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)

 at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)

 at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)

 at java.util.concurrent.FutureTask.run(FutureTask.java:266)

 at java.lang.Thread.run(Thread.java:748)

  was:
{quote}[*ERROR*] *Tests* *run: 3*, *Failures: 1*, Errors: 0, Skipped: 0, Time 
elapsed: 30.944 s *<<< FAILURE!* - in 
org.apache.hadoop.fs.s3a.*ITestS3AInconsistency*

[*ERROR*] testGetFileStatus(org.apache.hadoop.fs.s3a.ITestS3AInconsistency)  
Time elapsed: 6.471 s  <<< FAILURE!

java.lang.AssertionError: getFileStatus should fail due to delayed visibility.

 at org.junit.Assert.fail(Assert.java:88)

 at 
org.apache.hadoop.fs.s3a.ITestS3AInconsistency.testGetFileStatus(ITestS3AInconsistency.java:114)

 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

 at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

 at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

 at java.lang.reflect.Method.invoke(Method.java:498)

 at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)

 at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)

 at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)

 at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)

 at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)

 at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)

 at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)

 at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)

 at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)

 at java.util.concurrent.FutureTask.run(FutureTask.java:266)

 at java.lang.Thread.run(Thread.java:748)
{quote}


> Seeing test ITestS3AInconsistency.testGetFileStatus failure.
> 
>
> Key: HADOOP-17457
> URL: https://issues.apache.org/jira/browse/HADOOP-17457
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Mukund Thakur
>Priority: Major
>
> [*ERROR*] *Tests* *run: 3*, *Failures: 1*, Errors: 0, Skipped: 0, Time 
> elapsed: 30.944 s *<<< FAILURE!* - in 
> org.apache.hadoop.fs.s3a.*ITestS3AInconsistency*
> [*ERROR*] testGetFileStatus(org.apache.hadoop.fs.s3a.ITestS3AInconsistency)  
> Time elapsed: 6.471 s  <<< FAILURE!
> java.lang.AssertionError: getFileStatus should fail due to delayed visibility.
>  at org.junit.Assert.fail(Assert.java:88)
>  at 
> org.apache.hadoop.fs.s3a.ITestS3AInconsistency.testGetFileStatus(ITestS3AInconsistency.java:114)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(Reflectiv

[GitHub] [hadoop] nilotpalnandi opened a new pull request #2595: HDFS-15619 Add snapshot related metrics

2021-01-05 Thread GitBox


nilotpalnandi opened a new pull request #2595:
URL: https://github.com/apache/hadoop/pull/2595


   https://issues.apache.org/jira/browse/HDFS-15619
   
   Metrics for the following (a hedged declaration sketch appears after this list) -
   1. Number of snapshot GC runs
   2. Number of empty snapshot GC runs
   3. Number of successful snapshot GC runs
   4. Number of total snapshot delete operations
   5. Number of out-of-order snapshot delete operations
   6. Number of in-order snapshot delete operations
   7. Number of ACTIVE  snapshots
   8. Number of DELETED snapshots
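
   A hedged sketch of how such counters might be declared with the standard Hadoop metrics2 annotations; the class, field, and method names here are illustrative, not the PR's actual code:

   ```java
   import org.apache.hadoop.metrics2.annotation.Metric;
   import org.apache.hadoop.metrics2.annotation.Metrics;
   import org.apache.hadoop.metrics2.lib.MutableCounterLong;

   // Illustrative only; registration with the MetricsSystem is omitted.
   @Metrics(about = "Snapshot metrics", context = "dfs")
   class SnapshotMetricsSketch {
     @Metric("Number of snapshot GC runs")
     MutableCounterLong numSnapshotGcRuns;

     @Metric("Number of out-of-order snapshot delete operations")
     MutableCounterLong numOutOfOrderSnapshotDeletes;

     void onGcRun() {
       numSnapshotGcRuns.incr(); // bump once per GC pass
     }
   }
   ```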
   
   






[jira] [Work logged] (HADOOP-17454) [s3a] Disable bucket existence check - set fs.s3a.bucket.probe to 0

2021-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17454?focusedWorklogId=531717&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-531717
 ]

ASF GitHub Bot logged work on HADOOP-17454:
---

Author: ASF GitHub Bot
Created on: 06/Jan/21 06:39
Start Date: 06/Jan/21 06:39
Worklog Time Spent: 10m 
  Work Description: mukund-thakur commented on pull request #2593:
URL: https://github.com/apache/hadoop/pull/2593#issuecomment-755115007


   We changed the default bucket probe behaviour from 2 to 0, which had been
in place for a long time. Won't this cause problems for the downstream
components that used to expect this exception?
   
   Good that we have added the release notes. Thanks, Gabor.





Issue Time Tracking
---

Worklog Id: (was: 531717)
Time Spent: 1.5h  (was: 1h 20m)

> [s3a] Disable bucket existence check - set fs.s3a.bucket.probe to 0
> ---
>
> Key: HADOOP-17454
> URL: https://issues.apache.org/jira/browse/HADOOP-17454
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Set the value of fs.s3a.bucket.probe to 0 by default.
> Bucket existence checks are done in the initialization phase of the
> S3AFileSystem. Running this check is not required: if the bucket does not
> exist, the operation itself will fail instead.
> Some points on why we want to set this to 0:
> * When it's set to 0, bucket existence checks won't be done during
> initialization, making it faster.
> * It avoids the additional one or two requests on the bucket root, so the
> user does not need rights to read or list that folder.
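
A minimal sketch of setting this explicitly through a standard Hadoop Configuration; the property name comes from this issue, while the bucket URI and the surrounding class are illustrative:

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class BucketProbeSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // 0 = skip the bucket existence probe during S3AFileSystem init.
    conf.set("fs.s3a.bucket.probe", "0");
    // With the probe disabled, a missing bucket surfaces on the first
    // real operation rather than during FileSystem.get().
    FileSystem fs = FileSystem.get(new URI("s3a://example-bucket/"), conf);
    System.out.println("initialized: " + fs.getUri());
  }
}
{code}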






[GitHub] [hadoop] mukund-thakur commented on pull request #2593: HADOOP-17454. [s3a] Disable bucket existence check - set fs.s3a.bucket.probe to 0

2021-01-05 Thread GitBox


mukund-thakur commented on pull request #2593:
URL: https://github.com/apache/hadoop/pull/2593#issuecomment-755115007


   We changed the default bucket probe behaviour from 2 to 0, which had been
in place for a long time. Won't this cause problems for the downstream
components that used to expect this exception?
   
   Good that we have added the release notes. Thanks, Gabor.






[GitHub] [hadoop] aajisaka commented on pull request #2592: YARN-10560. Upgrade node.js to 10.23.1 and yarn to 1.22.5 in Web UI v2.

2021-01-05 Thread GitBox


aajisaka commented on pull request #2592:
URL: https://github.com/apache/hadoop/pull/2592#issuecomment-755048566


   Thank you @iwasakims 






[GitHub] [hadoop] aajisaka merged pull request #2592: YARN-10560. Upgrade node.js to 10.23.1 and yarn to 1.22.5 in Web UI v2.

2021-01-05 Thread GitBox


aajisaka merged pull request #2592:
URL: https://github.com/apache/hadoop/pull/2592


   






[GitHub] [hadoop] hadoop-yetus commented on pull request #2581: YARN-10553. Refactor TestDistributedShell

2021-01-05 Thread GitBox


hadoop-yetus commented on pull request #2581:
URL: https://github.com/apache/hadoop/pull/2581#issuecomment-755045715


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |  29m 37s |  |  Docker mode activated.  |
   || _ Prechecks _ ||
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 8 new or modified test files.  |
   || _ trunk Compile Tests _ ||
   | +0 :ok: |  mvndep  |  13m 45s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m 15s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 59s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |  17m 41s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 53s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 52s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 23s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 47s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +0 :ok: |  findbugs  |   0m 37s |  |  
branch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests
 no findbugs output file (findbugsXml.xml)  |
   || _ Patch Compile Tests _ ||
   | +0 :ok: |  mvndep  |   0m 27s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m  1s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |  20m  1s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 42s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |  17m 42s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   2m 36s |  |  root: The patch generated 
0 new + 164 unchanged - 18 fixed = 164 total (was 182)  |
   | +1 :green_heart: |  mvnsite  |   1m 49s |  |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s | 
[/whitespace-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2581/2/artifact/out/whitespace-eol.txt)
 |  The patch has 1 line(s) that end in whitespace. Use git apply 
--whitespace=fix <>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  shadedclient  |  15m 38s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  findbugs  |   0m 34s |  |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests has 
no data from findbugs  |
   || _ Other Tests _ ||
   | +1 :green_heart: |  unit  |   3m 19s |  |  hadoop-yarn-server-tests in the 
patch passed.  |
   | -1 :x: |  unit  |  31m 46s | 
[/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2581/2/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt)
 |  hadoop-yarn-applications-distributedshell in the patch failed.  |
   | -1 :x: |  unit  |   0m 37s | 
[/patch-unit-hadoop-tools_hadoop-dynamometer_hadoop-dynamometer-infra.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2581/2/artifact/out/patch-unit-hadoop-tools_hadoop-dynamometer_hadoop-dynamometer-infra.txt)
 |  hadoop-dynamometer-infra in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 48s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 235m 40s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.yarn.applications.distributedshell.TestDSShellTimelineV10 |
   |   | hadoop.yarn.applications.distributedshell.TestDSShellTimelineV15 |
   |   | hadoop.yarn.applications.distributedshell.TestDSShellTimelineV20 |
   |   | 
hadoop.yarn.applications.distributedshell.TestDSWithMultipleNodeManager |
   |   | hadoop.tools.dynamometer.TestDynamometerInfra |
   
   
   | Subsyst

[jira] [Work logged] (HADOOP-17451) intermittent failure of S3A huge file upload tests: count of bytes uploaded == 0

2021-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17451?focusedWorklogId=531582&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-531582
 ]

ASF GitHub Bot logged work on HADOOP-17451:
---

Author: ASF GitHub Bot
Created on: 05/Jan/21 23:43
Start Date: 05/Jan/21 23:43
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2594:
URL: https://github.com/apache/hadoop/pull/2594#issuecomment-754968448


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |  30m 24s |  |  Docker mode activated.  |
   || _ Prechecks _ ||
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 4 new or modified test files.  |
   || _ trunk Compile Tests _ ||
   | +0 :ok: |  mvndep  |  13m 51s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m  4s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |  20m 53s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   3m 11s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 42s |  |  trunk passed  |
   | -1 :x: |  shadedclient  |  31m 48s |  |  branch has errors when building 
and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 46s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   2m 44s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 28s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m 18s |  |  trunk passed  |
   || _ Patch Compile Tests _ ||
   | +0 :ok: |  mvndep  |   0m 31s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 50s |  |  the patch passed  |
   | -1 :x: |  compile  |   6m 40s | 
[/patch-compile-root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2594/1/artifact/out/patch-compile-root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt)
 |  root in the patch failed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.  
|
   | -1 :x: |  javac  |   6m 40s | 
[/patch-compile-root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2594/1/artifact/out/patch-compile-root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt)
 |  root in the patch failed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.  
|
   | -1 :x: |  compile  |   0m 24s | 
[/patch-compile-root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2594/1/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt)
 |  root in the patch failed with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.  |
   | -1 :x: |  javac  |   0m 24s | 
[/patch-compile-root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2594/1/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt)
 |  root in the patch failed with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.  |
   | -0 :warning: |  checkstyle  |   0m 24s | 
[/buildtool-patch-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2594/1/artifact/out/buildtool-patch-checkstyle-root.txt)
 |  The patch fails to run checkstyle in root  |
   | -1 :x: |  mvnsite  |   0m 24s | 
[/patch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2594/1/artifact/out/patch-mvnsite-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch failed.  |
   | -1 :x: |  mvnsite  |   0m 24s | 
[/patch-mvnsite-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2594/1/artifact/out/patch-mvnsite-hadoop-tools_hadoop-aws.txt)
 |  hadoop-aws in the patch failed.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |   0m 25s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 27s | 
[/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt](https://ci-hadoop.apache.o

[GitHub] [hadoop] hadoop-yetus commented on pull request #2594: HADOOP-17451. IOStatistics test failures in S3A code.

2021-01-05 Thread GitBox


hadoop-yetus commented on pull request #2594:
URL: https://github.com/apache/hadoop/pull/2594#issuecomment-754968448


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |  30m 24s |  |  Docker mode activated.  |
   || _ Prechecks _ ||
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 4 new or modified test files.  |
   || _ trunk Compile Tests _ ||
   | +0 :ok: |  mvndep  |  13m 51s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m  4s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |  20m 53s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   3m 11s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 42s |  |  trunk passed  |
   | -1 :x: |  shadedclient  |  31m 48s |  |  branch has errors when building 
and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 46s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   2m 44s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 28s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m 18s |  |  trunk passed  |
   || _ Patch Compile Tests _ ||
   | +0 :ok: |  mvndep  |   0m 31s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 50s |  |  the patch passed  |
   | -1 :x: |  compile  |   6m 40s | 
[/patch-compile-root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2594/1/artifact/out/patch-compile-root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt)
 |  root in the patch failed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.  
|
   | -1 :x: |  javac  |   6m 40s | 
[/patch-compile-root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2594/1/artifact/out/patch-compile-root-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt)
 |  root in the patch failed with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.  
|
   | -1 :x: |  compile  |   0m 24s | 
[/patch-compile-root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2594/1/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt)
 |  root in the patch failed with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.  |
   | -1 :x: |  javac  |   0m 24s | 
[/patch-compile-root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2594/1/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt)
 |  root in the patch failed with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.  |
   | -0 :warning: |  checkstyle  |   0m 24s | 
[/buildtool-patch-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2594/1/artifact/out/buildtool-patch-checkstyle-root.txt)
 |  The patch fails to run checkstyle in root  |
   | -1 :x: |  mvnsite  |   0m 24s | 
[/patch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2594/1/artifact/out/patch-mvnsite-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch failed.  |
   | -1 :x: |  mvnsite  |   0m 24s | 
[/patch-mvnsite-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2594/1/artifact/out/patch-mvnsite-hadoop-tools_hadoop-aws.txt)
 |  hadoop-aws in the patch failed.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |   0m 25s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 27s | 
[/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2594/1/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt)
 |  hadoop-common in the patch failed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.  |
   | -1 :x: |  javadoc  |   0m 27s | 
[/patch-javadoc-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2594/

[jira] [Assigned] (HADOOP-17430) Restore ability to set Text to empty byte array

2021-01-05 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-17430:
---

Assignee: gaozhan ding

>  Restore ability to set Text to empty byte array
> 
>
> Key: HADOOP-17430
> URL: https://issues.apache.org/jira/browse/HADOOP-17430
> Project: Hadoop Common
>  Issue Type: Wish
>  Components: common
>Reporter: gaozhan ding
>Assignee: gaozhan ding
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.1
>
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> In the org.apache.hadoop.io.Text#clear() method, the comments state that we
> can free the bytes by calling set(new byte[0]), but that no longer works.
> Maybe the code should be brought in line with these comments.
>  
>  
> {code:java}
> // org.apache.hadoop.io.Text 
> /**
>  * Clear the string to empty.
>  *
>  * Note: For performance reasons, this call does not clear the
>  * underlying byte array that is retrievable via {@link #getBytes()}.
>  * In order to free the byte-array memory, call {@link #set(byte[])}
>  * with an empty byte array (For example, new byte[0]).
>  */
> public void clear() {
>   length = 0;
>   textLength = -1;
> }
> {code}
>  
>  
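
For reference, a minimal sketch of the workaround the javadoc describes, using the public Text API (the variable names are illustrative):

{code:java}
import org.apache.hadoop.io.Text;

public class TextClearSketch {
  public static void main(String[] args) {
    Text t = new Text("some payload");
    t.clear();           // resets the length; the backing array is retained
    t.set(new byte[0]);  // per the javadoc, should release the backing array
    System.out.println("length after clear+set: " + t.getLength());
  }
}
{code}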






[jira] [Resolved] (HADOOP-17430) Restore ability to set Text to empty byte array

2021-01-05 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-17430.
-
Fix Version/s: 3.3.1
   Resolution: Fixed

>  Restore ability to set Text to empty byte array
> 
>
> Key: HADOOP-17430
> URL: https://issues.apache.org/jira/browse/HADOOP-17430
> Project: Hadoop Common
>  Issue Type: Wish
>  Components: common
>Reporter: gaozhan ding
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.3.1
>
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> In the org.apache.hadoop.io.Text#clear() method, the comments state that we
> can free the bytes by calling set(new byte[0]), but that no longer works.
> Maybe the code should be brought in line with these comments.
>  
>  
> {code:java}
> // org.apache.hadoop.io.Text 
> /**
>  * Clear the string to empty.
>  *
>  * Note: For performance reasons, this call does not clear the
>  * underlying byte array that is retrievable via {@link #getBytes()}.
>  * In order to free the byte-array memory, call {@link #set(byte[])}
>  * with an empty byte array (For example, new byte[0]).
>  */
> public void clear() {
>   length = 0;
>   textLength = -1;
> }
> {code}
>  
>  






[jira] [Updated] (HADOOP-17430) Restore ability to set Text to empty byte array

2021-01-05 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17430:

Summary:  Restore ability to set Text to empty byte array  (was: There is 
no way to clear Text bytes now)

>  Restore ability to set Text to empty byte array
> 
>
> Key: HADOOP-17430
> URL: https://issues.apache.org/jira/browse/HADOOP-17430
> Project: Hadoop Common
>  Issue Type: Wish
>  Components: common
>Reporter: gaozhan ding
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> In the org.apache.hadoop.io.Text#clear() method, the comments state that we
> can free the bytes by calling set(new byte[0]), but that no longer works.
> Maybe the code should be brought in line with these comments.
>  
>  
> {code:java}
> // org.apache.hadoop.io.Text 
> /**
>  * Clear the string to empty.
>  *
>  * Note: For performance reasons, this call does not clear the
>  * underlying byte array that is retrievable via {@link #getBytes()}.
>  * In order to free the byte-array memory, call {@link #set(byte[])}
>  * with an empty byte array (For example, new byte[0]).
>  */
> public void clear() {
>   length = 0;
>   textLength = -1;
> }
> {code}
>  
>  






[jira] [Work logged] (HADOOP-17430) There is no way to clear Text bytes now

2021-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17430?focusedWorklogId=531494&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-531494
 ]

ASF GitHub Bot logged work on HADOOP-17430:
---

Author: ASF GitHub Bot
Created on: 05/Jan/21 21:09
Start Date: 05/Jan/21 21:09
Worklog Time Spent: 10m 
  Work Description: steveloughran merged pull request #2545:
URL: https://github.com/apache/hadoop/pull/2545


   





Issue Time Tracking
---

Worklog Id: (was: 531494)
Time Spent: 3.5h  (was: 3h 20m)

> There is no way to clear Text bytes now
> ---
>
> Key: HADOOP-17430
> URL: https://issues.apache.org/jira/browse/HADOOP-17430
> Project: Hadoop Common
>  Issue Type: Wish
>  Components: common
>Reporter: gaozhan ding
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> In the org.apache.hadoop.io.Text#clear() method, the comments state that we
> can free the bytes by calling set(new byte[0]), but that no longer works.
> Maybe the code should be brought in line with these comments.
>  
>  
> {code:java}
> // org.apache.hadoop.io.Text 
> /**
>  * Clear the string to empty.
>  *
>  * Note: For performance reasons, this call does not clear the
>  * underlying byte array that is retrievable via {@link #getBytes()}.
>  * In order to free the byte-array memory, call {@link #set(byte[])}
>  * with an empty byte array (For example, new byte[0]).
>  */
> public void clear() {
>   length = 0;
>   textLength = -1;
> }
> {code}
>  
>  






[GitHub] [hadoop] steveloughran merged pull request #2545: HADOOP-17430. Add clear bytes logic for hadoop Text

2021-01-05 Thread GitBox


steveloughran merged pull request #2545:
URL: https://github.com/apache/hadoop/pull/2545


   






[jira] [Work logged] (HADOOP-17430) There is no way to clear Text bytes now

2021-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17430?focusedWorklogId=531491&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-531491
 ]

ASF GitHub Bot logged work on HADOOP-17430:
---

Author: ASF GitHub Bot
Created on: 05/Jan/21 21:07
Start Date: 05/Jan/21 21:07
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #2545:
URL: https://github.com/apache/hadoop/pull/2545#issuecomment-754899716


   Javadocs look good, and test failures are the OOM problems we've started 
having.
   
   +1, merging to trunk and backporting to 3.3. Thanks!





Issue Time Tracking
---

Worklog Id: (was: 531491)
Time Spent: 3h 20m  (was: 3h 10m)

> There is no way to clear Text bytes now
> ---
>
> Key: HADOOP-17430
> URL: https://issues.apache.org/jira/browse/HADOOP-17430
> Project: Hadoop Common
>  Issue Type: Wish
>  Components: common
>Reporter: gaozhan ding
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> In the org.apache.hadoop.io.Text#clear() method, the comments state that we
> can free the bytes by calling set(new byte[0]), but that no longer works.
> Maybe the code should be brought in line with these comments.
>  
>  
> {code:java}
> // org.apache.hadoop.io.Text 
> /**
>  * Clear the string to empty.
>  *
>  * Note: For performance reasons, this call does not clear the
>  * underlying byte array that is retrievable via {@link #getBytes()}.
>  * In order to free the byte-array memory, call {@link #set(byte[])}
>  * with an empty byte array (For example, new byte[0]).
>  */
> public void clear() {
>   length = 0;
>   textLength = -1;
> }
> {code}
>  
>  






[GitHub] [hadoop] steveloughran commented on pull request #2545: HADOOP-17430. Add clear bytes logic for hadoop Text

2021-01-05 Thread GitBox


steveloughran commented on pull request #2545:
URL: https://github.com/apache/hadoop/pull/2545#issuecomment-754899716


   Javadocs look good, and test failures are the OOM problems we've started 
having.
   
   +1, merging to trunk and backporting to 3.3. Thanks!






[jira] [Work logged] (HADOOP-17451) intermittent failure of S3A huge file upload tests: count of bytes uploaded == 0

2021-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17451?focusedWorklogId=531486&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-531486
 ]

ASF GitHub Bot logged work on HADOOP-17451:
---

Author: ASF GitHub Bot
Created on: 05/Jan/21 20:56
Start Date: 05/Jan/21 20:56
Worklog Time Spent: 10m 
  Work Description: steveloughran opened a new pull request #2594:
URL: https://github.com/apache/hadoop/pull/2594


   
   Fixing tests which fail intermittently depending on configs and, in the
   case of the HugeFile tests, only in bulk runs, where existing FS instances
   meant statistic probes sometimes ended up probing those of a previous FS.
   
   Fixes:
   
   * HADOOP-17451. HugeFile upload tests
   * HADOOP-17456. ITestPartialRenamesDeletes.testPartialDirDelete failure
   
   Does not fix:
   
   * HADOOP-17455. ITestS3ADeleteCost failure
   
   -
   
   Testing: ongoing. These test failures are a bit intermittent, which
complicates life (it's why they crept in).





Issue Time Tracking
---

Worklog Id: (was: 531486)
Remaining Estimate: 0h
Time Spent: 10m

> intermittent failure of S3A huge file upload tests: count of bytes uploaded 
> == 0
> 
>
> Key: HADOOP-17451
> URL: https://issues.apache.org/jira/browse/HADOOP-17451
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Intermittent failure of ITestHuge* upload tests, when doing parallel test 
> runs.
> The count of bytes uploaded through StorageStatistics isn't updated. Maybe 
> the expected counter isn't updated, and somehow in a parallel run, with 
> recycled FS instances/set-up directory structure, this surfaces in a way it 
> doesn't in a single test run.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17451) intermittent failure of S3A huge file upload tests: count of bytes uploaded == 0

2021-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-17451:

Labels: pull-request-available  (was: )

> intermittent failure of S3A huge file upload tests: count of bytes uploaded 
> == 0
> 
>
> Key: HADOOP-17451
> URL: https://issues.apache.org/jira/browse/HADOOP-17451
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Intermittent failure of ITestHuge* upload tests, when doing parallel test 
> runs.
> The count of bytes uploaded through StorageStatistics isn't updated. Maybe 
> the expected counter isn't updated, and somehow in a parallel run, with 
> recycled FS instances/set-up directory structure, this surfaces in a way it 
> doesn't in a single test run.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran opened a new pull request #2594: HADOOP-17451. IOStatistics test failures in S3A code.

2021-01-05 Thread GitBox


steveloughran opened a new pull request #2594:
URL: https://github.com/apache/hadoop/pull/2594


   
   Fixes tests which fail intermittently depending on configs and,
   in the case of the HugeFile tests, only in bulk runs, where reused
   FS instances meant statistics probes sometimes ended up probing those
   of a previous FS.
   
   Fixes:
   
   * HADOOP-17451. HugeFile upload tests
   * HADOOP-17456. ITestPartialRenamesDeletes.testPartialDirDelete failure
   
   Does not fix:
   
   * HADOOP-17455. ITestS3ADeleteCost failure
   
   -
   
   Testing: ongoing. These test failures are a bit intermittent, which 
complicates life (it's why they crept in).



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17408) Optimize NetworkTopology while sorting of block locations

2021-01-05 Thread Jim Brennan (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17259194#comment-17259194
 ] 

Jim Brennan commented on HADOOP-17408:
--

[~ahussein] thanks for the PR!  Can you please separate this into two parts?

I would like to see a separate Jira/PR with just the changes that [~daryn] made 
internally - those changes have already had some run-time and are a clear 
optimization.

The additional changes you have made are mostly a refactoring, and I am not 
convinced the original behavior has been retained. Optimizing away the 
shuffle could have been achieved by just moving the shuffle into the else case:
{noformat}
if (secondarySort != null) {
  secondarySort.accept(list);
} else {
  Collections.shuffle(list, r); 
}
{noformat}
The other concern with the refactoring portion is that it changes the signature 
of the public method sortByDistance().


> Optimize NetworkTopology while sorting of block locations
> -
>
> Key: HADOOP-17408
> URL: https://issues.apache.org/jira/browse/HADOOP-17408
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, net
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> In {{NetworkTopology}}, I noticed there is some low-hanging fruit for 
> improving performance.
> Inside {{sortByDistance}}, Collections.shuffle is performed on the list 
> before calling {{secondarySort}}.
> {code:java}
> Collections.shuffle(list, r);
> if (secondarySort != null) {
>   secondarySort.accept(list);
> }
> {code}
> However, at several call sites, {{Collections.shuffle}} is passed as the 
> secondarySort to {{sortByDistance}}. This means that the shuffle is executed 
> twice on each list.
> Also, logic-wise, it is pointless to shuffle before applying a tie-breaker, 
> which makes the shuffle's work obsolete.
> In addition, [~daryn] reported that:
> * topology is unnecessarily locking/unlocking to calculate the distance for 
> every node
> * shuffling uses a seeded Random, instead of ThreadLocalRandom, which is 
> heavily synchronized
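To make the two suggestions concrete, here is a sketch of the tie-break step with the shuffle moved into the else branch and ThreadLocalRandom replacing the shared seeded Random. A generic element type keeps it self-contained; this is not the actual NetworkTopology code:

{code:java}
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;
import java.util.function.Consumer;

class TieBreakSketch {
  // Shuffle only when no secondarySort is supplied: a tie-breaker would
  // make a prior shuffle pointless, and ThreadLocalRandom avoids the
  // synchronization cost of a shared seeded Random.
  static <T> void breakTies(List<T> list, Consumer<List<T>> secondarySort) {
    if (secondarySort != null) {
      secondarySort.accept(list);
    } else {
      Collections.shuffle(list, ThreadLocalRandom.current());
    }
  }
}
{code}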



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17455) [s3a] Intermittent failure of ITestS3ADeleteCost.testDeleteSingleFileInDir

2021-01-05 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17455:

Parent Issue: HADOOP-16830  (was: HADOOP-16829)

> [s3a] Intermittent failure of ITestS3ADeleteCost.testDeleteSingleFileInDir
> --
>
> Key: HADOOP-17455
> URL: https://issues.apache.org/jira/browse/HADOOP-17455
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Steve Loughran
>Priority: Major
>
> Test failed against ireland intermittently with the following config:
> {{mvn clean verify -Dparallel-tests -DtestsThreadCount=8}}
> xml based config in auth-keys.xml:
> {code:xml}
> <property>
>   <name>fs.s3a.metadatastore.impl</name>
>   <value>org.apache.hadoop.fs.s3a.s3guard.NullMetadataStore</value>
> </property>
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17455) [s3a] Intermittent failure of ITestS3ADeleteCost.testDeleteSingleFileInDir

2021-01-05 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17259198#comment-17259198
 ] 

Steve Loughran commented on HADOOP-17455:
-

Happening to me in a test run (not IDE tho'), where the relevant counters are 
{{object_delete_objects starting=6 current=9 diff=3, object_delete_request 
starting=0 current=1 diff=1}}; maven settings {{-Dparallel-tests 
-DtestsThreadCount=6 -Dscale -Dmarkers=delete -Ds3guard -Ddynamo}}

Test configs here are 
[raw-delete-markers], [nonauth-delete-markers], [auth-delete-markers], which is 
confusing, as in IDE runs the nonauth and auth guarded runs are skipped because 
no tests were seen. I think that's a bug in the verifyRaw code which could be 
significant. I've been writing tests expecting verifyRaw(cost, 
()->something()) to execute the operation but only verify the costs on raw. Looks 
like instead we've been skipping the test suite when !raw. Fixing that may 
throw up surprises.
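For reference, a sketch of the semantics the comment expects from the helper; names and types here are illustrative, not the actual AbstractS3ACostTest code:

{code:java}
import java.util.concurrent.Callable;

class CostVerifySketch {
  // Intended semantics: always execute the operation; only assert the
  // operation costs when the store is raw (unguarded). The bug described
  // above is that the whole body gets skipped when !raw.
  static <T> T verifyRaw(Runnable costAssertions, Callable<T> operation,
      boolean isRaw) throws Exception {
    T result = operation.call();   // must run in every configuration
    if (isRaw) {
      costAssertions.run();        // cost checks apply only to raw runs
    }
    return result;
  }
}
{code}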


{code}
[ERROR] 
testDeleteSingleFileInDir[raw-delete-markers](org.apache.hadoop.fs.s3a.performance.ITestS3ADeleteCost)
  Time elapsed: 5.902 s  <<< FAILURE!
java.lang.AssertionError: operation returning after fs.delete(simpleFile) 
action_executor_acquired starting=0 current=0 diff=0, action_http_get_request 
starting=0 current=0 diff=0, action_http_head_request starting=4 current=5 
diff=1, committer_bytes_committed starting=0 current=0 diff=0, 
committer_bytes_uploaded starting=0 current=0 diff=0, committer_commit_job 
starting=0 current=0 diff=0, committer_commits.failures starting=0 current=0 
diff=0, committer_commits_aborted starting=0 current=0 diff=0, 
committer_commits_completed starting=0 current=0 diff=0, 
committer_commits_created starting=0 current=0 diff=0, 
committer_commits_reverted starting=0 current=0 diff=0, 
committer_jobs_completed starting=0 current=0 diff=0, committer_jobs_failed 
starting=0 current=0 diff=0, committer_magic_files_created starting=0 current=0 
diff=0, committer_materialize_file starting=0 current=0 diff=0, 
committer_stage_file_upload starting=0 current=0 diff=0, 
committer_tasks_completed starting=0 current=0 diff=0, committer_tasks_failed 
starting=0 current=0 diff=0, delegation_token_issued starting=0 current=0 
diff=0, directories_created starting=2 current=3 diff=1, directories_deleted 
starting=0 current=0 diff=0, fake_directories_created starting=0 current=0 
diff=0, fake_directories_deleted starting=6 current=8 diff=2, files_copied 
starting=0 current=0 diff=0, files_copied_bytes starting=0 current=0 diff=0, 
files_created starting=1 current=1 diff=0, files_delete_rejected starting=0 
current=0 diff=0, files_deleted starting=0 current=1 diff=1, ignored_errors 
starting=0 current=0 diff=0, multipart_instantiated starting=0 current=0 
diff=0, multipart_upload_abort_under_path_invoked starting=0 current=0 diff=0, 
multipart_upload_aborted starting=0 current=0 diff=0, 
multipart_upload_completed starting=0 current=0 diff=0, 
multipart_upload_part_put starting=0 current=0 diff=0, 
multipart_upload_part_put_bytes starting=0 current=0 diff=0, 
multipart_upload_started starting=0 current=0 diff=0, 
object_bulk_delete_request starting=3 current=4 diff=1, 
object_continue_list_request starting=0 current=0 diff=0, object_copy_requests 
starting=0 current=0 diff=0, object_delete_objects starting=6 current=9 diff=3, 
object_delete_request starting=0 current=1 diff=1, object_list_request 
starting=5 current=6 diff=1, object_metadata_request starting=4 current=5 
diff=1, object_multipart_aborted starting=0 current=0 diff=0, 
object_multipart_initiated starting=0 current=0 diff=0, object_put_bytes 
starting=0 current=0 diff=0, object_put_request starting=3 current=4 diff=1, 
object_put_request_completed starting=3 current=4 diff=1, 
object_select_requests starting=0 current=0 diff=0, op_copy_from_local_file 
starting=0 current=0 diff=0, op_create starting=1 current=1 diff=0, 
op_create_non_recursive starting=0 current=0 diff=0, op_delete starting=0 
current=1 diff=1, op_exists starting=0 current=0 diff=0, 
op_get_delegation_token starting=0 current=0 diff=0, op_get_file_checksum 
starting=0 current=0 diff=0, op_get_file_status starting=2 current=2 diff=0, 
op_glob_status starting=0 current=0 diff=0, op_is_directory starting=0 
current=0 diff=0, op_is_file starting=0 current=0 diff=0, op_list_files 
starting=0 current=0 diff=0, op_list_located_status starting=0 current=0 
diff=0, op_list_status starting=0 current=0 diff=0, op_mkdirs starting=2 
current=2 diff=0, op_open starting=0 current=0 diff=0, op_rename starting=0 
current=0 diff=0, s3guard_metadatastore_authoritative_directories_updated 
starting=0 current=0 diff=0, s3guard_metadatastore_initialization starting=0 
current=0 diff=0, s3guard_metadatastore_put_path_request starting=0 current=0 
diff=0, s3guard_metadatastore_record_deletes starting=0 current=0 diff=0, 
s3guard_metadatastore_

[jira] [Work logged] (HADOOP-17404) ABFS: Piggyback flush on Append calls for short writes

2021-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17404?focusedWorklogId=531468&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-531468
 ]

ASF GitHub Bot logged work on HADOOP-17404:
---

Author: ASF GitHub Bot
Created on: 05/Jan/21 20:22
Start Date: 05/Jan/21 20:22
Worklog Time Spent: 10m 
  Work Description: DadanielZ commented on a change in pull request #2509:
URL: https://github.com/apache/hadoop/pull/2509#discussion_r552173130



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java
##
@@ -55,6 +55,7 @@
   public static final String AZURE_WRITE_MAX_CONCURRENT_REQUESTS = 
"fs.azure.write.max.concurrent.requests";
   public static final String AZURE_WRITE_MAX_REQUESTS_TO_QUEUE = 
"fs.azure.write.max.requests.to.queue";
   public static final String AZURE_WRITE_BUFFER_SIZE = 
"fs.azure.write.request.size";
+  public static final String AZURE_ENABLE_SMALL_WRITE_OPTIMIZATION = 
"fs.azure.write.enableappendwithflush";

Review comment:
   For a newly added config key, a little comment would be very helpful.
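For instance, the kind of comment being requested might look like this (wording illustrative, based on the issue description):

{code:java}
/**
 * Enable the small-write optimization: when the pending data is smaller
 * than the write buffer, piggyback the flush onto the append request
 * instead of issuing a separate flush call.
 * Value: {@value}.
 */
public static final String AZURE_ENABLE_SMALL_WRITE_OPTIMIZATION =
    "fs.azure.write.enableappendwithflush";
{code}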





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 531468)
Time Spent: 3h 10m  (was: 3h)

> ABFS: Piggyback flush on Append calls for short writes
> --
>
> Key: HADOOP-17404
> URL: https://issues.apache.org/jira/browse/HADOOP-17404
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> When the Hflush or Hsync APIs are called, a call is made to the store backend 
> to commit the data that was appended.
> If the data size written by the Hadoop app is small, i.e. the data written:
>  * before any HFlush/HSync call is made, or
>  * between 2 HFlush/Hsync API calls
> is less than the write buffer size, then two separate calls are made, one for 
> append and another for flush.
> Apps that do such small writes eventually end up with almost as many calls 
> for flush as for append.
> This PR enables the flush to be piggybacked onto the append call for such 
> short-write scenarios.
>  
> NOTE: The change is guarded by a config and is disabled by default until the 
> relevant supporting changes are available on all store production clusters.
> New config added: fs.azure.write.enableappendwithflush
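To illustrate the idea, a toy model of the decision the write path makes; the class, method, and field names are invented for this sketch and the real ABFS internals differ:

{code:java}
import java.io.IOException;

/** Toy model of the ABFS small-write path; all names are illustrative. */
class SmallWriteSketch {
  private final boolean appendWithFlush; // fs.azure.write.enableappendwithflush
  private final int writeBufferSize;
  private int bufferedBytes;

  SmallWriteSketch(boolean appendWithFlush, int writeBufferSize) {
    this.appendWithFlush = appendWithFlush;
    this.writeBufferSize = writeBufferSize;
  }

  void write(byte[] data) {
    bufferedBytes += data.length; // buffer locally until a flush point
  }

  void hflush() throws IOException {
    if (appendWithFlush && bufferedBytes > 0 && bufferedBytes < writeBufferSize) {
      append(true);   // one round trip: append carrying a flush flag
    } else {
      append(false);  // classic path: append ...
      flushRemote();  // ... plus a separate flush request
    }
    bufferedBytes = 0;
  }

  private void append(boolean flush) throws IOException { /* remote append */ }
  private void flushRemote() throws IOException { /* remote flush */ }
}
{code}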



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] DadanielZ commented on a change in pull request #2509: HADOOP-17404. ABFS: Small write - Merge append and flush

2021-01-05 Thread GitBox


DadanielZ commented on a change in pull request #2509:
URL: https://github.com/apache/hadoop/pull/2509#discussion_r552173130



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java
##
@@ -55,6 +55,7 @@
   public static final String AZURE_WRITE_MAX_CONCURRENT_REQUESTS = 
"fs.azure.write.max.concurrent.requests";
   public static final String AZURE_WRITE_MAX_REQUESTS_TO_QUEUE = 
"fs.azure.write.max.requests.to.queue";
   public static final String AZURE_WRITE_BUFFER_SIZE = 
"fs.azure.write.request.size";
+  public static final String AZURE_ENABLE_SMALL_WRITE_OPTIMIZATION = 
"fs.azure.write.enableappendwithflush";

Review comment:
   for newly added config key, a little comment would be very helpful





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17451) intermittent failure of S3A huge file upload tests: count of bytes uploaded == 0

2021-01-05 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17259121#comment-17259121
 ] 

Steve Loughran commented on HADOOP-17451:
-

I see the problem. The StorageStatistics stuff is actually shared across 
instances via global references...if an existing store is used then its stats 
are retrieved.

Switching the stats assertions to IOStatistics on the instance.
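As a sketch of the distinction (registry key and statistic name chosen for illustration): the global registry is process-wide, while getIOStatistics() is scoped to one filesystem instance:

{code:java}
import org.apache.hadoop.fs.GlobalStorageStatistics;
import org.apache.hadoop.fs.s3a.S3AFileSystem;
import org.apache.hadoop.fs.statistics.IOStatistics;

class StatsProbeSketch {
  // Fragile in parallel runs: GlobalStorageStatistics is process-wide, so
  // the counter may include uploads made through a different, reused
  // S3AFileSystem instance.
  static long viaGlobalStats() {
    return GlobalStorageStatistics.INSTANCE.get("s3a").getLong("object_put_bytes");
  }

  // Instance-scoped: IOStatistics comes from the exact filesystem under
  // test, so other instances cannot pollute the assertion.
  static long viaInstanceStats(S3AFileSystem fs) {
    IOStatistics stats = fs.getIOStatistics();
    return stats.counters().get("object_put_bytes");
  }
}
{code}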

> intermittent failure of S3A huge file upload tests: count of bytes uploaded 
> == 0
> 
>
> Key: HADOOP-17451
> URL: https://issues.apache.org/jira/browse/HADOOP-17451
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> Intermittent failure of ITestHuge* upload tests, when doing parallel test 
> runs.
> The count of bytes uploaded through StorageStatistics isn't updated. Maybe 
> the expected counter isn't updated, and somehow in a parallel run, with 
> recycled FS instances/set-up directory structure, this surfaces in a way it 
> doesn't in a single test run.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17430) There is no way to clear Text bytes now

2021-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17430?focusedWorklogId=531383&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-531383
 ]

ASF GitHub Bot logged work on HADOOP-17430:
---

Author: ASF GitHub Bot
Created on: 05/Jan/21 17:53
Start Date: 05/Jan/21 17:53
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2545:
URL: https://github.com/apache/hadoop/pull/2545#issuecomment-754795541


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 30s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 50s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m  7s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |  17m 19s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 54s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 30s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 16s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   2m 22s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 20s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |  19m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 10s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |  17m 10s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 52s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 30s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  16m  6s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 49s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   2m 43s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  10m 38s | 
[/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2545/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 55s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 171m 42s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.metrics2.source.TestJvmMetrics |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2545/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2545 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux d00d930f34f5 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 42eb9ff68e3 |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2545/1/testReport/ |
   | Max. process+thread count | 1382 (vs. ulimit of 5500) |
   | mod

[GitHub] [hadoop] hadoop-yetus commented on pull request #2545: HADOOP-17430. Add clear bytes logic for hadoop Text

2021-01-05 Thread GitBox


hadoop-yetus commented on pull request #2545:
URL: https://github.com/apache/hadoop/pull/2545#issuecomment-754795541


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 30s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 50s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m  7s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |  17m 19s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 54s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 30s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 16s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   2m 22s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 20s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |  19m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 10s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |  17m 10s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 52s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 30s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  16m  6s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 49s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   2m 43s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  10m 38s | 
[/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2545/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 55s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 171m 42s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.metrics2.source.TestJvmMetrics |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2545/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2545 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux d00d930f34f5 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 42eb9ff68e3 |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2545/1/testReport/ |
   | Max. process+thread count | 1382 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2545/1/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-

[jira] [Work logged] (HADOOP-16080) hadoop-aws does not work with hadoop-client-api

2021-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16080?focusedWorklogId=531363&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-531363
 ]

ASF GitHub Bot logged work on HADOOP-16080:
---

Author: ASF GitHub Bot
Created on: 05/Jan/21 17:21
Start Date: 05/Jan/21 17:21
Worklog Time Spent: 10m 
  Work Description: sunchao commented on pull request #2575:
URL: https://github.com/apache/hadoop/pull/2575#issuecomment-754776995


   @steveloughran eh I only tested this in Spark (verified that the failure 
[here](https://github.com/apache/spark/pull/29843#issuecomment-733932857) was 
fixed, while it was reproducible w/o the PR) using an S3A endpoint of my own. I 
can run the integration tests also - are the steps 
[here](https://hadoop.apache.org/docs/r2.9.2/hadoop-aws/tools/hadoop-aws/testing.html)?
 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 531363)
Time Spent: 6h  (was: 5h 50m)

> hadoop-aws does not work with hadoop-client-api
> ---
>
> Key: HADOOP-16080
> URL: https://issues.apache.org/jira/browse/HADOOP-16080
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.2.0, 3.1.1
>Reporter: Keith Turner
>Assignee: Chao Sun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.2.2, 3.3.1
>
>  Time Spent: 6h
>  Remaining Estimate: 0h
>
> I attempted to use Accumulo and S3a with the following jars on the classpath.
>  * hadoop-client-api-3.1.1.jar
>  * hadoop-client-runtime-3.1.1.jar
>  * hadoop-aws-3.1.1.jar
> This failed with the following exception.
> {noformat}
> Exception in thread "init" java.lang.NoSuchMethodError: 
> org.apache.hadoop.util.SemaphoredDelegatingExecutor.<init>(Lcom/google/common/util/concurrent/ListeningExecutorService;IZ)V
> at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:769)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1169)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1149)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1108)
> at org.apache.hadoop.fs.FileSystem.createNewFile(FileSystem.java:1413)
> at 
> org.apache.accumulo.server.fs.VolumeManagerImpl.createNewFile(VolumeManagerImpl.java:184)
> at 
> org.apache.accumulo.server.init.Initialize.initDirs(Initialize.java:479)
> at 
> org.apache.accumulo.server.init.Initialize.initFileSystem(Initialize.java:487)
> at 
> org.apache.accumulo.server.init.Initialize.initialize(Initialize.java:370)
> at org.apache.accumulo.server.init.Initialize.doInit(Initialize.java:348)
> at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:967)
> at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:129)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}
> The problem is that {{S3AFileSystem.create()}} looks for 
> {{SemaphoredDelegatingExecutor(com.google.common.util.concurrent.ListeningExecutorService)}}
>  which does not exist in hadoop-client-api-3.1.1.jar.  What does exist is 
> {{SemaphoredDelegatingExecutor(org.apache.hadoop.shaded.com.google.common.util.concurrent.ListeningExecutorService)}}.
> To work around this issue I created a version of hadoop-aws-3.1.1.jar that 
> relocated references to Guava.
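The workaround described amounts to relocating Guava inside the jar the same way the shaded client artifacts do; in maven-shade-plugin terms, roughly (a sketch, not the actual hadoop-aws build):

{code:xml}
<!-- Rewrite Guava references to the shaded package that
     hadoop-client-api actually contains. -->
<relocation>
  <pattern>com.google.common</pattern>
  <shadedPattern>org.apache.hadoop.shaded.com.google.common</shadedPattern>
</relocation>
{code}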



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sunchao commented on pull request #2575: HADOOP-16080. hadoop-aws does not work with hadoop-client-api

2021-01-05 Thread GitBox


sunchao commented on pull request #2575:
URL: https://github.com/apache/hadoop/pull/2575#issuecomment-754776995


   @steveloughran eh I only tested this in Spark (verified that the failure 
[here](https://github.com/apache/spark/pull/29843#issuecomment-733932857) was 
fixed, while it was reproducible w/o the PR) using an S3A endpoint of my own. I 
can run the integration tests also - are the steps 
[here](https://hadoop.apache.org/docs/r2.9.2/hadoop-aws/tools/hadoop-aws/testing.html)?
 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17451) intermittent failure of S3A huge file upload tests: count of bytes uploaded == 0

2021-01-05 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17259055#comment-17259055
 ] 

Steve Loughran commented on HADOOP-17451:
-

put byte count == 0
{code}
[ERROR] 
test_010_CreateHugeFile(org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFilesDiskBlocks)
  Time elapsed: 6.746 s  <<< FAILURE!
java.lang.AssertionError: 
[putByteCount count from filesystem stats counters=((files_created=1) 
(stream_write_block_uploads_aborted=0) (committer_commits_reverted=0) 
(action_http_get_request.failures=0) (committer_magic_files_created=0) 
(object_copy_requests=0) (stream_read_close_operations=0) (store_io_retry=0) 
(stream_write_block_uploads_committed=0) 
(committer_stage_file_upload.failures=0) 
(s3guard_metadatastore_authoritative_directories_updated=0) 
(delegation_token_issued=0) (action_http_head_request=0) (op_create=1) 
(stream_read_fully_operations=0) (committer_commits_completed=0) 
(stream_read_seek_policy_changed=0) (committer_commits_created=0) 
(s3guard_metadatastore_put_path_request=2) (op_get_delegation_token=0) 
(stream_write_exceptions=0) (directories_created=1) (files_delete_rejected=0) 
(stream_write_total_data=20971520) (action_http_get_request=0) 
(files_copied_bytes=0) (op_list_located_status=0) 
(object_bulk_delete_request=1) (committer_commits_aborted=0) 
(action_executor_acquired.failures=0) (committer_stage_file_upload=0) 
(action_http_head_request.failures=0) (stream_read_opened=0) (op_list_status=0) 
(stream_write_queue_duration.failures=0) (op_get_file_checksum=0) 
(ignored_errors=1) (committer_bytes_uploaded=0) (op_list_files=0) 
(files_deleted=0) (op_is_directory=0) (s3guard_metadatastore_throttled=0) 
(stream_read_seek_backward_operations=0) (multipart_upload_started=0) 
(stream_write_total_time=6687) (object_delete_request.failures=0) 
(fake_directories_created=0) (stream_read_seek_operations=0) 
(stream_read_seek_forward_operations=0) (object_put_bytes=10485760) 
(op_is_file=0) (store_io_request=0) (committer_commits.failures=0) 
(stream_write_block_uploads=4) (committer_commit_job=0) 
(object_delete_objects=2) (multipart_upload_part_put=0) (op_open=0) 
(s3guard_metadatastore_record_reads=5) (committer_commit_job.failures=0) 
(s3guard_metadatastore_initialization=1) (object_put_request=3) 
(multipart_upload_abort_under_path_invoked=0) 
(stream_read_bytes_backwards_on_seek=0) (multipart_upload_part_put_bytes=0) 
(stream_read_seek_bytes_discarded=0) (multipart_upload_aborted=0) 
(committer_bytes_committed=0) (committer_materialize_file=0) 
(object_metadata_request=0) (s3guard_metadatastore_retry=0) 
(object_put_request_completed=3) (op_create_non_recursive=0) 
(stream_write_queue_duration=2) (committer_jobs_completed=0) 
(multipart_instantiated=0) (stream_read_operations=0) 
(object_bulk_delete_request.failures=0) (fake_directories_deleted=2) 
(stream_aborted=0) (op_rename=0) (object_multipart_aborted=0) 
(op_get_file_status=0) (s3guard_metadatastore_record_deletes=0) 
(stream_read_total_bytes=0) (committer_materialize_file.failures=0) 
(op_glob_status=0) (delegation_token_issued.failures=0) 
(stream_read_exceptions=0) (action_executor_acquired=2) 
(stream_read_version_mismatches=0) (stream_write_bytes=10485760) (op_exists=0) 
(stream_write_exceptions_completing_upload=0) (object_select_requests=0) 
(object_delete_request=0) (object_multipart_initiated=1) 
(committer_jobs_failed=0) (stream_read_operations_incomplete=0) (op_delete=1) 
(stream_read_bytes=0) (object_list_request.failures=0) 
(object_continue_list_request.failures=0) 
(stream_read_bytes_discarded_in_abort=0) (committer_tasks_completed=0) 
(object_list_request=0) (store_io_throttled=0) (files_copied=0) 
(committer_tasks_failed=0) (s3guard_metadatastore_record_writes=4) 
(stream_read_seek_bytes_skipped=0) (multipart_upload_completed=0) 
(object_continue_list_request=0) (op_mkdirs=1) (op_copy_from_local_file=0) 
(stream_read_closed=0) (directories_deleted=0) 
(stream_read_bytes_discarded_in_close=0));
gauges=();
minimums=((delegation_token_issued.failures.min=-1) 
(stream_write_queue_duration.min=-1) (action_executor_acquired.min=0) 
(object_list_request.min=-1) (object_continue_list_request.failures.min=-1) 
(object_list_request.failures.min=-1) 
(stream_write_queue_duration.failures.min=-1) 
(committer_stage_file_upload.min=-1) (committer_materialize_file.min=-1) 
(action_http_head_request.min=-1) (object_bulk_delete_request.failures.min=-1) 
(object_bulk_delete_request.min=92) (object_delete_request.failures.min=-1) 
(action_http_get_request.failures.min=-1) (delegation_token_issued.min=-1) 
(object_continue_list_request.min=-1) (object_delete_request.min=-1) 
(committer_commit_job.min=-1) (committer_commit_job.failures.min=-1) 
(action_http_get_request.min=-1) (committer_materialize_file.failures.min=-1) 
(committer_stage_file_upload.failures.min=-1) 
(action_executor_acquired.failures.min=-1) 

[jira] [Work logged] (HADOOP-16080) hadoop-aws does not work with hadoop-client-api

2021-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16080?focusedWorklogId=531346&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-531346
 ]

ASF GitHub Bot logged work on HADOOP-16080:
---

Author: ASF GitHub Bot
Created on: 05/Jan/21 17:03
Start Date: 05/Jan/21 17:03
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #2575:
URL: https://github.com/apache/hadoop/pull/2575#issuecomment-754765770


   Usual due diligence query: which s3 endpoint did you run the integration 
tests against?
   
   (I'll expect some test failures there from the HADOOP-16380 stabilisation; if 
you don't find them I'd be worried about your test setup...they won't be 
blockers.)



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 531346)
Time Spent: 5h 50m  (was: 5h 40m)

> hadoop-aws does not work with hadoop-client-api
> ---
>
> Key: HADOOP-16080
> URL: https://issues.apache.org/jira/browse/HADOOP-16080
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.2.0, 3.1.1
>Reporter: Keith Turner
>Assignee: Chao Sun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.2.2, 3.3.1
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> I attempted to use Accumulo and S3a with the following jars on the classpath.
>  * hadoop-client-api-3.1.1.jar
>  * hadoop-client-runtime-3.1.1.jar
>  * hadoop-aws-3.1.1.jar
> This failed with the following exception.
> {noformat}
> Exception in thread "init" java.lang.NoSuchMethodError: 
> org.apache.hadoop.util.SemaphoredDelegatingExecutor.<init>(Lcom/google/common/util/concurrent/ListeningExecutorService;IZ)V
> at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:769)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1169)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1149)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1108)
> at org.apache.hadoop.fs.FileSystem.createNewFile(FileSystem.java:1413)
> at 
> org.apache.accumulo.server.fs.VolumeManagerImpl.createNewFile(VolumeManagerImpl.java:184)
> at 
> org.apache.accumulo.server.init.Initialize.initDirs(Initialize.java:479)
> at 
> org.apache.accumulo.server.init.Initialize.initFileSystem(Initialize.java:487)
> at 
> org.apache.accumulo.server.init.Initialize.initialize(Initialize.java:370)
> at org.apache.accumulo.server.init.Initialize.doInit(Initialize.java:348)
> at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:967)
> at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:129)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}
> The problem is that {{S3AFileSystem.create()}} looks for 
> {{SemaphoredDelegatingExecutor(com.google.common.util.concurrent.ListeningExecutorService)}}
>  which does not exist in hadoop-client-api-3.1.1.jar.  What does exist is 
> {{SemaphoredDelegatingExecutor(org.apache.hadoop.shaded.com.google.common.util.concurrent.ListeningExecutorService)}}.
> To work around this issue I created a version of hadoop-aws-3.1.1.jar that 
> relocated references to Guava.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #2575: HADOOP-16080. hadoop-aws does not work with hadoop-client-api

2021-01-05 Thread GitBox


steveloughran commented on pull request #2575:
URL: https://github.com/apache/hadoop/pull/2575#issuecomment-754765770


   Usual due diligence query: which s3 endpoint did you run the integration 
tests against?
   
   (I'll expect some test failures there from the HADOOP-16380 stabilisation; if 
you don't find them I'd be worried about your test setup...they won't be 
blockers.)



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17456) S3A ITestPartialRenamesDeletes.testPartialDirDelete[bulk-delete=true] failure

2021-01-05 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17456:

Parent Issue: HADOOP-16830  (was: HADOOP-16829)

> S3A ITestPartialRenamesDeletes.testPartialDirDelete[bulk-delete=true] failure
> -
>
> Key: HADOOP-17456
> URL: https://issues.apache.org/jira/browse/HADOOP-17456
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>
> Failure in {{ITestPartialRenamesDeletes.testPartialDirDelete}}; wrong # of 
> delete requests. 
> build options: -Dparallel-tests -DtestsThreadCount=6 -Dscale -Dmarkers=delete 
> -Ds3guard -Ddynamo
> The assert fails on a line changed in HADOOP-17271; the assumption being that 
> there are some test run states where things happen differently. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17454) [s3a] Disable bucket existence check - set fs.s3a.bucket.probe to 0

2021-01-05 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17454:

Release Note: The S3A bucket existence check is disabled (fs.s3a.bucket.probe 
is 0), so there will be no existence check on the bucket during 
S3AFileSystem initialization. The first operation which attempts to interact 
with the bucket will fail if the bucket does not exist.  (was: S3A bucket 
existence check is disabled (fs.s3a.bucket.probe is 0), so there will be no 
existence check on the bucket during the S3AFileSystem initialization. The 
first operation will fail if the bucket does not exist.)

> [s3a] Disable bucket existence check - set fs.s3a.bucket.probe to 0
> ---
>
> Key: HADOOP-17454
> URL: https://issues.apache.org/jira/browse/HADOOP-17454
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Set the value of fs.s3a.bucket.probe to 0 by default.
> Bucket existence checks are done in the initialization phase of the 
> S3AFileSystem. This check is not required: the first operation itself will 
> fail if the bucket does not exist, making the check redundant.
> Some points on why we want to set this to 0:
> * When it's set to 0, bucket existence checks won't be done during 
> initialization, making initialization faster.
> * It avoids the additional one or two requests on the bucket root, so the 
> user does not need rights to read or list that folder.
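Concretely, the new default corresponds to this setting (shown for core-site.xml; a nonzero value restores a startup probe):

{code:xml}
<property>
  <name>fs.s3a.bucket.probe</name>
  <value>0</value>
  <!-- 0 skips the bucket existence check at S3AFileSystem initialization;
       the first real request will surface a missing bucket instead. -->
</property>
{code}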



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17414) Magic committer files don't have the count of bytes written collected by spark

2021-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17414?focusedWorklogId=531291&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-531291
 ]

ASF GitHub Bot logged work on HADOOP-17414:
---

Author: ASF GitHub Bot
Created on: 05/Jan/21 15:12
Start Date: 05/Jan/21 15:12
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #2530:
URL: https://github.com/apache/hadoop/pull/2530#issuecomment-754696434


   Rebased, full test run. Failures of HADOOP-17403 and HADOOP-17451; both 
unrelated



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 531291)
Time Spent: 3.5h  (was: 3h 20m)

> Magic committer files don't have the count of bytes written collected by spark
> --
>
> Key: HADOOP-17414
> URL: https://issues.apache.org/jira/browse/HADOOP-17414
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> The Spark statistics tracking doesn't correctly assess the size of the 
> uploaded files, as it only calls getFileStatus on the zero-byte objects, not 
> the yet-to-manifest files. Which, given they don't exist yet, isn't easy to 
> do.
> Solution: 
> * Add getXAttr and listXAttr API calls to S3AFileSystem
> * Return all S3 object headers as XAttr attributes prefixed "header.", both 
> custom and standard (e.g. header.Content-Length).
> The setXAttr call isn't implemented, so for correctness the FS doesn't
> declare its support for the API in hasPathCapability().
> The magic commit file write sets the length of the final data in the custom 
> header x-hadoop-s3a-magic-data-length in the marker file.
> A matching patch in Spark will look for the XAttr
> "header.x-hadoop-s3a-magic-data-length" when the file
> being probed for output data is zero byte long. 
> As a result, the job tracking statistics will report the
> bytes written but not yet manifested.
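A sketch of the consumer-side probe this enables (the header key comes from the issue text; the helper name, null handling, and value encoding are assumptions):

{code:java}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class MagicLengthProbe {
  // When a just-committed file still reports zero bytes, ask the marker
  // object for the real length via the XAttr/header mapping above.
  static long committedLength(FileSystem fs, Path path) throws IOException {
    FileStatus status = fs.getFileStatus(path);
    if (status.getLen() > 0) {
      return status.getLen();
    }
    byte[] raw = fs.getXAttr(path, "header.x-hadoop-s3a-magic-data-length");
    return raw == null
        ? 0
        : Long.parseLong(new String(raw, StandardCharsets.UTF_8));
  }
}
{code}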



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #2530: HADOOP-17414. Magic committer files don't have the count of bytes written collected by spark

2021-01-05 Thread GitBox


steveloughran commented on pull request #2530:
URL: https://github.com/apache/hadoop/pull/2530#issuecomment-754696434


   Rebased, full test run. Failures of HADOOP-17403 and HADOOP-17451; both 
unrelated



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17456) S3A ITestPartialRenamesDeletes.testPartialDirDelete[bulk-delete=true] failure

2021-01-05 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-17456:
---

 Summary: S3A 
ITestPartialRenamesDeletes.testPartialDirDelete[bulk-delete=true] failure
 Key: HADOOP-17456
 URL: https://issues.apache.org/jira/browse/HADOOP-17456
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.4.0
Reporter: Steve Loughran
Assignee: Steve Loughran


Failure in {{ITestPartialRenamesDeletes.testPartialDirDelete}}; wrong # of 
delete requests. 

build options: -Dparallel-tests -DtestsThreadCount=6 -Dscale -Dmarkers=delete 
-Ds3guard -Ddynamo

The assert fails on a line changed in HADOOP-17271; the assumption being that 
there are some test run states where things happen differently. 




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17403) S3A ITestPartialRenamesDeletes.testRenameDirFailsInDelete failure: missing directory marker

2021-01-05 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17403:

Summary: S3A ITestPartialRenamesDeletes.testRenameDirFailsInDelete failure: 
missing directory marker  (was: S3A ITestPartialRenamesDeletes failure: missing 
directory marker)

> S3A ITestPartialRenamesDeletes.testRenameDirFailsInDelete failure: missing 
> directory marker
> ---
>
> Key: HADOOP-17403
> URL: https://issues.apache.org/jira/browse/HADOOP-17403
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.3.1
>
>
> Seemingly transient failure of the test ITestPartialRenamesDeletes with the 
> latest HADOOP-17244 changes in: an expected directory marker was not found.
> Test run was (unintentionally) sequential, markers=delete, s3guard on
> {code}
> -Dmarkers=delete -Ds3guard -Ddynamo -Dscale 
> {code}
> Hasn't come back since.
> The bucket's retention policy was authoritative, but no dirs were declared as 
> such



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17403) S3A ITestPartialRenamesDeletes failure: missing directory marker

2021-01-05 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-17403.
-
Fix Version/s: 3.3.1
 Assignee: Steve Loughran
   Resolution: Workaround

> S3A ITestPartialRenamesDeletes failure: missing directory marker
> 
>
> Key: HADOOP-17403
> URL: https://issues.apache.org/jira/browse/HADOOP-17403
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 3.3.1
>
>
> Seemingly transient failure of the test ITestPartialRenamesDeletes with the 
> latest HADOOP-17244 changes in: an expected directory marker was not found.
> Test run was (unintentionally) sequential, markers=delete, s3guard on
> {code}
> -Dmarkers=delete -Ds3guard -Ddynamo -Dscale 
> {code}
> Hasn't come back since.
> The bucket's retention policy was authoritative, but no dirs were declared as 
> such



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17403) S3A ITestPartialRenamesDeletes failure: missing directory marker

2021-01-05 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17258986#comment-17258986
 ] 

Steve Loughran commented on HADOOP-17403:
-

Seen during some backporting, and made to go away. 

I think it comes from me having a bucket config with a delete batch size of 50; 
on a -Dscale test run there are > 50 entries to delete, but only the first 50 
are included, so assertions about what is found/deleted are incorrect.

I'm going to close as WORKSFORME, as when I change the page size to 150 all is 
good.

> S3A ITestPartialRenamesDeletes failure: missing directory marker
> 
>
> Key: HADOOP-17403
> URL: https://issues.apache.org/jira/browse/HADOOP-17403
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.1
>Reporter: Steve Loughran
>Priority: Minor
>
> Seemingly transient failure of the test ITestPartialRenamesDeletes with the 
> latest HADOOP-17244 changes in: an expected directory marker was not found.
> Test run was (unintentionally) sequential, markers=delete, s3guard on
> {code}
> -Dmarkers=delete -Ds3guard -Ddynamo -Dscale 
> {code}
> Hasn't come back since.
> The bucket's retention policy was authoritative, but no dirs were declared as 
> such



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17455) [s3a] Intermittent failure of ITestS3ADeleteCost.testDeleteSingleFileInDir

2021-01-05 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17258982#comment-17258982
 ] 

Steve Loughran commented on HADOOP-17455:
-

object_delete_objects expected:<2> but was:<3>

{code}
ERROR]   
ITestS3ADeleteCost.testDeleteSingleFileInDir:110->AbstractS3ACostTest.verifyMetrics:360->Assert.assertEquals:645->Assert.failNotEquals:834->Assert.fail:88
 operation returning after fs.delete(simpleFile) action_executor_acquired 
starting=0 current=0 diff=0, action_http_get_request starting=0 current=0 
diff=0, action_http_head_request starting=4 current=5 diff=1, 
committer_bytes_committed starting=0 current=0 diff=0, committer_bytes_uploaded 
starting=0 current=0 diff=0, committer_commit_job starting=0 current=0 diff=0, 
committer_commits.failures starting=0 current=0 diff=0, 
committer_commits_aborted starting=0 current=0 diff=0, 
committer_commits_completed starting=0 current=0 diff=0, 
committer_commits_created starting=0 current=0 diff=0, 
committer_commits_reverted starting=0 current=0 diff=0, 
committer_jobs_completed starting=0 current=0 diff=0, committer_jobs_failed 
starting=0 current=0 diff=0, committer_magic_files_created starting=0 current=0 
diff=0, committer_materialize_file starting=0 current=0 diff=0, 
committer_stage_file_upload starting=0 current=0 diff=0, 
committer_tasks_completed starting=0 current=0 diff=0, committer_tasks_failed 
starting=0 current=0 diff=0, delegation_token_issued starting=0 current=0 
diff=0, directories_created starting=2 current=3 diff=1, directories_deleted 
starting=0 current=0 diff=0, fake_directories_created starting=0 current=0 
diff=0, fake_directories_deleted starting=6 current=8 diff=2, files_copied 
starting=0 current=0 diff=0, files_copied_bytes starting=0 current=0 diff=0, 
files_created starting=1 current=1 diff=0, files_delete_rejected starting=0 
current=0 diff=0, files_deleted starting=0 current=1 diff=1, ignored_errors 
starting=0 current=0 diff=0, multipart_instantiated starting=0 current=0 
diff=0, multipart_upload_abort_under_path_invoked starting=0 current=0 diff=0, 
multipart_upload_aborted starting=0 current=0 diff=0, 
multipart_upload_completed starting=0 current=0 diff=0, 
multipart_upload_part_put starting=0 current=0 diff=0, 
multipart_upload_part_put_bytes starting=0 current=0 diff=0, 
multipart_upload_started starting=0 current=0 diff=0, 
object_bulk_delete_request starting=3 current=4 diff=1, 
object_continue_list_request starting=0 current=0 diff=0, object_copy_requests 
starting=0 current=0 diff=0, object_delete_objects starting=6 current=9 diff=3, 
object_delete_request starting=0 current=1 diff=1, object_list_request 
starting=5 current=6 diff=1, object_metadata_request starting=4 current=5 
diff=1, object_multipart_aborted starting=0 current=0 diff=0, 
object_multipart_initiated starting=0 current=0 diff=0, object_put_bytes 
starting=0 current=0 diff=0, object_put_request starting=3 current=4 diff=1, 
object_put_request_completed starting=3 current=4 diff=1, 
object_select_requests starting=0 current=0 diff=0, op_copy_from_local_file 
starting=0 current=0 diff=0, op_create starting=1 current=1 diff=0, 
op_create_non_recursive starting=0 current=0 diff=0, op_delete starting=0 
current=1 diff=1, op_exists starting=0 current=0 diff=0, 
op_get_delegation_token starting=0 current=0 diff=0, op_get_file_checksum 
starting=0 current=0 diff=0, op_get_file_status starting=2 current=2 diff=0, 
op_glob_status starting=0 current=0 diff=0, op_is_directory starting=0 
current=0 diff=0, op_is_file starting=0 current=0 diff=0, op_list_files 
starting=0 current=0 diff=0, op_list_located_status starting=0 current=0 
diff=0, op_list_status starting=0 current=0 diff=0, op_mkdirs starting=2 
current=2 diff=0, op_open starting=0 current=0 diff=0, op_rename starting=0 
current=0 diff=0, s3guard_metadatastore_authoritative_directories_updated 
starting=0 current=0 diff=0, s3guard_metadatastore_initialization starting=0 
current=0 diff=0, s3guard_metadatastore_put_path_request starting=0 current=0 
diff=0, s3guard_metadatastore_record_deletes starting=0 current=0 diff=0, 
s3guard_metadatastore_record_reads starting=0 current=0 diff=0, 
s3guard_metadatastore_record_writes starting=0 current=0 diff=0, 
s3guard_metadatastore_retry starting=0 current=0 diff=0, 
s3guard_metadatastore_throttled starting=0 current=0 diff=0, store_io_request 
starting=0 current=0 diff=0, store_io_retry starting=0 current=0 diff=0, 
store_io_throttled starting=0 current=0 diff=0, stream_aborted starting=0 
current=0 diff=0, stream_read_bytes starting=0 current=0 diff=0, 
stream_read_bytes_backwards_on_seek starting=0 current=0 diff=0, 
stream_read_bytes_discarded_in_abort starting=0 current=0 diff=0, 
stream_read_bytes_discarded_in_close starting=0 current=0 diff=0, 
stream_read_close_operations starting=0 current=0 diff=0, stream_read_closed 
starting=0 cu

[jira] [Updated] (HADOOP-17455) [s3a] Intermittent failure of ITestS3ADeleteCost.testDeleteSingleFileInDir

2021-01-05 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17455:

Parent: HADOOP-16829
Issue Type: Sub-task  (was: Improvement)

> [s3a] Intermittent failure of ITestS3ADeleteCost.testDeleteSingleFileInDir
> --
>
> Key: HADOOP-17455
> URL: https://issues.apache.org/jira/browse/HADOOP-17455
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Steve Loughran
>Priority: Major
>
> Test failed against ireland intermittently with the following config:
> {{mvn clean verify -Dparallel-tests -DtestsThreadCount=8}}
> xml based config in auth-keys.xml:
> {code:xml}
> <property>
>   <name>fs.s3a.metadatastore.impl</name>
>   <value>org.apache.hadoop.fs.s3a.s3guard.NullMetadataStore</value>
> </property>
> {code}






[jira] [Resolved] (HADOOP-16995) ITestS3AConfiguration proxy tests fail when bucket probes == 0

2021-01-05 Thread Gabor Bota (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota resolved HADOOP-16995.
-
Resolution: Fixed

got +1 from Steve on the PR, committed to trunk 

> ITestS3AConfiguration proxy tests fail when bucket probes == 0
> --
>
> Key: HADOOP-16995
> URL: https://issues.apache.org/jira/browse/HADOOP-16995
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
>
> when bucket probes are disabled, proxy config tests in ITestS3AConfiguration 
> fail because the probes aren't being attempted in initialize()
> {code}
> <property>
>   <name>fs.s3a.bucket.probe</name>
>   <value>0</value>
> </property>
> {code}
> Cause: HADOOP-16711
> Fix: call unsetBaseAndBucketOverrides for the bucket probe in the test conf, 
> then set the probe value to 2, just to be resilient to future default changes.
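
A minimal sketch of that fix, assuming the helper is the hadoop-aws test 
utility S3ATestUtils.removeBaseAndBucketOverrides() (named 
unsetBaseAndBucketOverrides above):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.s3a.S3ATestUtils;

Configuration conf = new Configuration();
// clear any base or per-bucket override of the probe so the test controls it
S3ATestUtils.removeBaseAndBucketOverrides(conf, "fs.s3a.bucket.probe");
// force a real probe so initialize() attempts the bucket checks the proxy
// tests rely on; 2 rather than 1 to be resilient to future default changes
conf.setInt("fs.s3a.bucket.probe", 2);
{code}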






[jira] [Resolved] (HADOOP-17454) [s3a] Disable bucket existence check - set fs.s3a.bucket.probe to 0

2021-01-05 Thread Gabor Bota (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota resolved HADOOP-17454.
-
Resolution: Fixed

got +1 from Steve on PR, committed to trunk

> [s3a] Disable bucket existence check - set fs.s3a.bucket.probe to 0
> ---
>
> Key: HADOOP-17454
> URL: https://issues.apache.org/jira/browse/HADOOP-17454
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Set the value of fs.s3a.bucket.probe to 0 by default.
> Bucket existence checks are done in the initialization phase of the 
> S3AFileSystem. Running this check is unnecessary: if the bucket does not 
> exist, the first operation against it will fail anyway.
> Some points on why we want to set this to 0:
> * When it's set to 0, bucket existence checks won't be done during 
> initialization, making it faster.
> * It avoids the additional one or two requests on the bucket root, so the 
> user does not need rights to read or list that folder.
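
A minimal sketch of the default being changed, as it would appear as a 
core-site.xml override (illustrative; 0 skips the probe, while a non-zero 
value restores the fail-fast check during initialization):

{code:xml}
<property>
  <name>fs.s3a.bucket.probe</name>
  <value>0</value>
</property>
{code}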






[jira] [Work logged] (HADOOP-17454) [s3a] Disable bucket existence check - set fs.s3a.bucket.probe to 0

2021-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17454?focusedWorklogId=531268&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-531268
 ]

ASF GitHub Bot logged work on HADOOP-17454:
---

Author: ASF GitHub Bot
Created on: 05/Jan/21 14:43
Start Date: 05/Jan/21 14:43
Worklog Time Spent: 10m 
  Work Description: bgaborg merged pull request #2593:
URL: https://github.com/apache/hadoop/pull/2593


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 531268)
Time Spent: 1h 20m  (was: 1h 10m)

> [s3a] Disable bucket existence check - set fs.s3a.bucket.probe to 0
> ---
>
> Key: HADOOP-17454
> URL: https://issues.apache.org/jira/browse/HADOOP-17454
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Set the value of fs.s3a.bucket.probe to 0 by default.
> Bucket existence checks are done in the initialization phase of the 
> S3AFileSystem. Running this check is unnecessary: if the bucket does not 
> exist, the first operation against it will fail anyway.
> Some points on why we want to set this to 0:
> * When it's set to 0, bucket existence checks won't be done during 
> initialization, making it faster.
> * It avoids the additional one or two requests on the bucket root, so the 
> user does not need rights to read or list that folder.








[jira] [Work logged] (HADOOP-17454) [s3a] Disable bucket existence check - set fs.s3a.bucket.probe to 0

2021-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17454?focusedWorklogId=531264&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-531264
 ]

ASF GitHub Bot logged work on HADOOP-17454:
---

Author: ASF GitHub Bot
Created on: 05/Jan/21 14:41
Start Date: 05/Jan/21 14:41
Worklog Time Spent: 10m 
  Work Description: bgaborg commented on pull request #2593:
URL: https://github.com/apache/hadoop/pull/2593#issuecomment-754676479


   thanks Steve. Created https://issues.apache.org/jira/browse/HADOOP-17455.
   Please check the release notes for this issue on Jira - I think I filled 
them in the right way.





Issue Time Tracking
---

Worklog Id: (was: 531264)
Time Spent: 1h 10m  (was: 1h)

> [s3a] Disable bucket existence check - set fs.s3a.bucket.probe to 0
> ---
>
> Key: HADOOP-17454
> URL: https://issues.apache.org/jira/browse/HADOOP-17454
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Set the value of fs.s3a.bucket.probe to 0 by default.
> Bucket existence checks are done in the initialization phase of the 
> S3AFileSystem. Running this check is unnecessary: if the bucket does not 
> exist, the first operation against it will fail anyway.
> Some points on why we want to set this to 0:
> * When it's set to 0, bucket existence checks won't be done during 
> initialization, making it faster.
> * It avoids the additional one or two requests on the bucket root, so the 
> user does not need rights to read or list that folder.








[jira] [Created] (HADOOP-17455) [s3a] Intermittent failure of ITestS3ADeleteCost.testDeleteSingleFileInDir

2021-01-05 Thread Gabor Bota (Jira)
Gabor Bota created HADOOP-17455:
---

 Summary: [s3a] Intermittent failure of 
ITestS3ADeleteCost.testDeleteSingleFileInDir
 Key: HADOOP-17455
 URL: https://issues.apache.org/jira/browse/HADOOP-17455
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.3.0
Reporter: Gabor Bota
Assignee: Steve Loughran


Test failed against ireland intermittently with the following config:

{{mvn clean verify -Dparallel-tests -DtestsThreadCount=8}}
xml based config in auth-keys.xml:
{code:xml}
<property>
  <name>fs.s3a.metadatastore.impl</name>
  <value>org.apache.hadoop.fs.s3a.s3guard.NullMetadataStore</value>
</property>
{code}






[GitHub] [hadoop] hadoop-yetus commented on pull request #2593: HADOOP-17454. [s3a] Disable bucket existence check - set fs.s3a.bucket.probe to 0

2021-01-05 Thread GitBox


hadoop-yetus commented on pull request #2593:
URL: https://github.com/apache/hadoop/pull/2593#issuecomment-754636231


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  8s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m  7s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 45s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 25s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m  6s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m  4s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  15m  4s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   1m  5s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 35s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 33s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  79m  5s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2593/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2593 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux 5faf99842e38 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 2b4febcf576 |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2593/1/testReport/ |
   | Max. process+thread count | 605 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2593/1/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[jira] [Work logged] (HADOOP-17338) Intermittent S3AInputStream failures: Premature end of Content-Length delimited message body etc

2021-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17338?focusedWorklogId=531168&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-531168
 ]

ASF GitHub Bot logged work on HADOOP-17338:
---

Author: ASF GitHub Bot
Created on: 05/Jan/21 12:47
Start Date: 05/Jan/21 12:47
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #2497:
URL: https://github.com/apache/hadoop/pull/2497#issuecomment-754614490


   yes, use unbuffer if you can. It frees up the HTTPS connection. And while 
AWS S3 won't have problems, it's probably good for other S3 stores, as it will 
reduce the number of open connections the server has to maintain.
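   
   A minimal sketch of the pattern, assuming `fs` is an already-initialized 
S3A-backed FileSystem and the path is illustrative; 
`FSDataInputStream.unbuffer()` delegates to streams implementing 
`CanUnbuffer`, such as S3AInputStream:
   ```java
   import org.apache.hadoop.fs.FSDataInputStream;
   import org.apache.hadoop.fs.Path;
   
   try (FSDataInputStream in = fs.open(new Path("s3a://bucket/data.bin"))) {
     byte[] buf = new byte[8192];
     in.readFully(0, buf);  // positioned read of the first 8KB
     in.unbuffer();         // release the HTTPS connection until the next read
     // later reads transparently reacquire a connection
   }
   ```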





Issue Time Tracking
---

Worklog Id: (was: 531168)
Time Spent: 4h 20m  (was: 4h 10m)

> Intermittent S3AInputStream failures: Premature end of Content-Length 
> delimited message body etc
> 
>
> Key: HADOOP-17338
> URL: https://issues.apache.org/jira/browse/HADOOP-17338
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1
>
> Attachments: HADOOP-17338.001.patch
>
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> We are seeing the following two kinds of intermittent exceptions when using 
> S3AInputSteam:
> 1.
> {code:java}
> Caused by: com.amazonaws.thirdparty.apache.http.ConnectionClosedException: Premature end of Content-Length delimited message body (expected: 156463674; received: 150001089
> at com.amazonaws.thirdparty.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:178)
> at com.amazonaws.thirdparty.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135)
> at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at com.amazonaws.services.s3.internal.S3AbortableInputStream.read(S3AbortableInputStream.java:125)
> at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at com.amazonaws.util.LengthCheckInputStream.read(LengthCheckInputStream.java:107)
> at com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:181)
> at java.io.DataInputStream.readFully(DataInputStream.java:195)
> at java.io.DataInputStream.readFully(DataInputStream.java:169)
> at org.apache.parquet.hadoop.ParquetFileReader$ConsecutiveChunkList.readAll(ParquetFileReader.java:779)
> at org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:511)
> at org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:130)
> at org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:214)
> at org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:227)
> at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.next(ParquetRecordReaderWrapper.java:208)
> at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.next(ParquetRecordReaderWrapper.java:63)
> at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:350)
> ... 15 more
> {code}
> 2.
> {code:java}
> Caused by: javax.net.ssl.SSLException: SSL peer shut down incorrectly
> at sun.security.ssl.InputRecord.readV3Record(InputRecord.java:596)
> at sun.security.ssl.InputRecord.read(InputRecord.java:532)
> at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:990)
> at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:948)
> at sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
> at com.amazonaws.thirdparty.apache.http.impl.



[jira] [Work logged] (HADOOP-16202) Stabilize openFile() and adopt internally

2021-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16202?focusedWorklogId=531165&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-531165
 ]

ASF GitHub Bot logged work on HADOOP-16202:
---

Author: ASF GitHub Bot
Created on: 05/Jan/21 12:45
Start Date: 05/Jan/21 12:45
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #2584:
URL: https://github.com/apache/hadoop/pull/2584#issuecomment-754613375


   MR client not compiling; not seeing useful information from yetus.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 531165)
Time Spent: 5h 40m  (was: 5.5h)

> Stabilize openFile() and adopt internally
> -
>
> Key: HADOOP-16202
> URL: https://issues.apache.org/jira/browse/HADOOP-16202
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/s3, tools/distcp
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 40m
>  Remaining Estimate: 0h
>
> The {{openFile()}} builder API lets us add new options when reading a file.
> Add an option {{"fs.s3a.open.option.length"}} which takes a long and allows 
> the length of the file to be declared. If set, *no check for the existence 
> of the file is issued when opening the file*.
> Also: withFileStatus() to take any FileStatus implementation, rather than 
> only S3AFileStatus, and not check that the path matches the path being 
> opened. Needed to support viewFS-style wrapping and mounting.
> Also adopt it where appropriate to stop clusters with S3A reads switched to 
> random IO from killing download/localization (see the sketch after this 
> list):
> * fs shell copyToLocal
> * distcp
> * IOUtils.copy
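
A minimal sketch of the proposed usage, assuming {{fs}}, {{path}} and a 
previously fetched {{status}} are in scope; the option name is the one 
proposed above, not a settled API:

{code:java}
// openFile() returns a builder; build() yields a CompletableFuture<FSDataInputStream>
FSDataInputStream in = fs.openFile(path)
    .withFileStatus(status)  // any FileStatus; lets S3A skip the HEAD probe
    .opt("fs.s3a.open.option.length", Long.toString(status.getLen()))
    .build()
    .get();
{code}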








[jira] [Work logged] (HADOOP-17454) [s3a] Disable bucket existence check - set fs.s3a.bucket.probe to 0

2021-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17454?focusedWorklogId=531158&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-531158
 ]

ASF GitHub Bot logged work on HADOOP-17454:
---

Author: ASF GitHub Bot
Created on: 05/Jan/21 12:29
Start Date: 05/Jan/21 12:29
Worklog Time Spent: 10m 
  Work Description: steveloughran edited a comment on pull request #2593:
URL: https://github.com/apache/hadoop/pull/2593#issuecomment-754605704


   +1 pending a successful Yetus build. I've had existence checks turned off 
for a long time, and it makes things faster, especially the creation of a new 
FS instance in test suites.
   
   Do update the release notes though, to say "you'll get told of 
missing/unreadable bucket on first operation against it".
   
   Test failure is going to be related to directory marker retention and/or 
metric counting.
   
   Could you file a JIRA, assign to me and include the specific test settings 
you were using?





Issue Time Tracking
---

Worklog Id: (was: 531158)
Time Spent: 50m  (was: 40m)

> [s3a] Disable bucket existence check - set fs.s3a.bucket.probe to 0
> ---
>
> Key: HADOOP-17454
> URL: https://issues.apache.org/jira/browse/HADOOP-17454
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Set the value of fs.s3a.bucket.probe to 0 by default.
> Bucket existence checks are done in the initialization phase of the 
> S3AFileSystem. Running this check is unnecessary: if the bucket does not 
> exist, the first operation against it will fail anyway.
> Some points on why we want to set this to 0:
> * When it's set to 0, bucket existence checks won't be done during 
> initialization, making it faster.
> * It avoids the additional one or two requests on the bucket root, so the 
> user does not need rights to read or list that folder.








[jira] [Work logged] (HADOOP-17454) [s3a] Disable bucket existence check - set fs.s3a.bucket.probe to 0

2021-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17454?focusedWorklogId=531157&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-531157
 ]

ASF GitHub Bot logged work on HADOOP-17454:
---

Author: ASF GitHub Bot
Created on: 05/Jan/21 12:28
Start Date: 05/Jan/21 12:28
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #2593:
URL: https://github.com/apache/hadoop/pull/2593#issuecomment-754605704


   +1. Do update the release notes though, to say "you'll get told of 
missing/unreadable bucket on first operation against it".
   
   Test failure is going to be related to directory marker retention and/or 
metric counting.
   
   Could you file a JIRA, assign to me and include the specific test settings 
you were using?





Issue Time Tracking
---

Worklog Id: (was: 531157)
Time Spent: 40m  (was: 0.5h)

> [s3a] Disable bucket existence check - set fs.s3a.bucket.probe to 0
> ---
>
> Key: HADOOP-17454
> URL: https://issues.apache.org/jira/browse/HADOOP-17454
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Set the value of fs.s3a.bucket.probe to 0 by default.
> Bucket existence checks are done in the initialization phase of the 
> S3AFileSystem. Running this check is unnecessary: if the bucket does not 
> exist, the first operation against it will fail anyway.
> Some points on why we want to set this to 0:
> * When it's set to 0, bucket existence checks won't be done during 
> initialization, making it faster.
> * It avoids the additional one or two requests on the bucket root, so the 
> user does not need rights to read or list that folder.








[jira] [Commented] (HADOOP-16492) Support HuaweiCloud Object Storage as a Hadoop Backend File System

2021-01-05 Thread Junping Du (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17258866#comment-17258866
 ] 

Junping Du commented on HADOOP-16492:
-

Thanks for the comments, [~ste...@apache.org]. The refactoring work looks 
reasonable to me. 
Do we have a time estimate for when this s3a optimization work will get done? 
If not, I suggest we get the patch here in and file a separate jira to track 
the optimization work.

> Support HuaweiCloud Object Storage as a Hadoop Backend File System
> --
>
> Key: HADOOP-16492
> URL: https://issues.apache.org/jira/browse/HADOOP-16492
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 3.4.0
>Reporter: zhongjun
>Assignee: zhongjun
>Priority: Major
> Attachments: Difference Between OBSA and S3A.pdf, 
> HADOOP-16492.001.patch, HADOOP-16492.002.patch, HADOOP-16492.003.patch, 
> HADOOP-16492.004.patch, HADOOP-16492.005.patch, HADOOP-16492.006.patch, 
> HADOOP-16492.007.patch, HADOOP-16492.008.patch, HADOOP-16492.009.patch, 
> HADOOP-16492.010.patch, HADOOP-16492.011.patch, HADOOP-16492.012.patch, 
> HADOOP-16492.013.patch, HADOOP-16492.014.patch, HADOOP-16492.015.patch, 
> HADOOP-16492.016.patch, HADOOP-16492.017.patch, OBSA HuaweiCloud OBS Adapter 
> for Hadoop Support.pdf, image-2020-11-21-18-51-51-981.png
>
>
> Added support for HuaweiCloud OBS 
> ([https://www.huaweicloud.com/en-us/product/obs.html]) to Hadoop file system, 
> just like what we do before for S3, ADLS, OSS, etc. With simple 
> configuration, Hadoop applications can read/write data from OBS without any 
> code change.






[jira] [Work logged] (HADOOP-17454) [s3a] Disable bucket existence check - set fs.s3a.bucket.probe to 0

2021-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17454?focusedWorklogId=531146&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-531146
 ]

ASF GitHub Bot logged work on HADOOP-17454:
---

Author: ASF GitHub Bot
Created on: 05/Jan/21 12:08
Start Date: 05/Jan/21 12:08
Worklog Time Spent: 10m 
  Work Description: bgaborg opened a new pull request #2593:
URL: https://github.com/apache/hadoop/pull/2593


   Also fixes HADOOP-16995, ITestS3AConfiguration proxy test failures when 
bucket probes == 0.
   The improvement should include the fix, because the test would fail by 
default otherwise.
   
   Change-Id: I9a7e4b5e6d4391ebba096c15e84461c038a2ec59
   





Issue Time Tracking
---

Worklog Id: (was: 531146)
Remaining Estimate: 0h
Time Spent: 10m

> [s3a] Disable bucket existence check - set fs.s3a.bucket.probe to 0
> ---
>
> Key: HADOOP-17454
> URL: https://issues.apache.org/jira/browse/HADOOP-17454
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Set the value of fs.s3a.bucket.probe to 0 by default.
> Bucket existence checks are done in the initialization phase of the 
> S3AFileSystem. Running this check is unnecessary: if the bucket does not 
> exist, the first operation against it will fail anyway.
> Some points on why we want to set this to 0:
> * When it's set to 0, bucket existence checks won't be done during 
> initialization, making it faster.
> * It avoids the additional one or two requests on the bucket root, so the 
> user does not need rights to read or list that folder.






[jira] [Work logged] (HADOOP-17454) [s3a] Disable bucket existence check - set fs.s3a.bucket.probe to 0

2021-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17454?focusedWorklogId=531149&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-531149
 ]

ASF GitHub Bot logged work on HADOOP-17454:
---

Author: ASF GitHub Bot
Created on: 05/Jan/21 12:11
Start Date: 05/Jan/21 12:11
Worklog Time Spent: 10m 
  Work Description: bgaborg edited a comment on pull request #2593:
URL: https://github.com/apache/hadoop/pull/2593#issuecomment-754598161


   Tested against ireland.
   
   Testing: I had one failing test during the CLI run, but it was OK after a 
rerun:
   ```
   [ERROR]   
ITestS3ADeleteCost.testDeleteSingleFileInDir:110->AbstractS3ACostTest.verifyMetrics:360->Assert.assertEquals:645->Assert.failNotEquals:834->Assert.fail:88
 operation returning after fs.delete(simpleFile) action_executor_acquired 
starting=0 current=0 diff=0, action_http_get_request starting=0 current=0 
diff=0, action_http_head_request starting=4 current=5 diff=1, 
committer_bytes_committed starting=0 current=0 diff=0, committer_bytes_uploaded 
starting=0 current=0 diff=0, committer_commit_job starting=0 current=0 diff=0, 
committer_commits.failures starting=0 current=0 diff=0, 
committer_commits_aborted starting=0 current=0 diff=0, 
committer_commits_completed starting=0 current=0 diff=0, 
committer_commits_created starting=0 current=0 diff=0, 
committer_commits_reverted starting=0 current=0 diff=0, 
committer_jobs_completed starting=0 current=0 diff=0, committer_jobs_failed 
starting=0 current=0 diff=0, committer_magic_files_created starting=0 current=0 
diff=0, committer_materialize_file starting=0 current=0 diff=0, 
committer_stage_file_upload starting=0 current=0 diff=0, 
committer_tasks_completed starting=0 current=0 diff=0, committer_tasks_failed 
starting=0 current=0 diff=0, delegation_token_issued starting=0 current=0 
diff=0, directories_created starting=2 current=3 diff=1, directories_deleted 
starting=0 current=0 diff=0, fake_directories_created starting=0 current=0 
diff=0, fake_directories_deleted starting=6 current=8 diff=2, files_copied 
starting=0 current=0 diff=0, files_copied_bytes starting=0 current=0 diff=0, 
files_created starting=1 current=1 diff=0, files_delete_rejected starting=0 
current=0 diff=0, files_deleted starting=0 current=1 diff=1, ignored_errors 
starting=0 current=0 diff=0, multipart_instantiated starting=0 current=0 
diff=0, multipart_upload_abort_under_path_invoked starting=0 current=0 diff=0, 
multipart_upload_aborted starting=0 current=0 diff=0, 
multipart_upload_completed starting=0 current=0 diff=0, 
multipart_upload_part_put starting=0 current=0 diff=0, 
multipart_upload_part_put_bytes starting=0 current=0 diff=0, 
multipart_upload_started starting=0 current=0 diff=0, 
object_bulk_delete_request starting=3 current=4 diff=1, 
object_continue_list_request starting=0 current=0 diff=0, object_copy_requests 
starting=0 current=0 diff=0, object_delete_objects starting=6 current=9 diff=3, 
object_delete_request starting=0 current=1 diff=1, object_list_request 
starting=5 current=6 diff=1, object_metadata_request starting=4 current=5 
diff=1, object_multipart_aborted starting=0 current=0 diff=0, 
object_multipart_initiated starting=0 current=0 diff=0, object_put_bytes 
starting=0 current=0 diff=0, object_put_request starting=3 current=4 diff=1, 
object_put_request_completed starting=3 current=4 diff=1, 
object_select_requests starting=0 current=0 diff=0, op_copy_from_local_file 
starting=0 current=0 diff=0, op_create starting=1 current=1 diff=0, 
op_create_non_recursive starting=0 current=0 diff=0, op_delete starting=0 
current=1 diff=1, op_exists starting=0 current=0 diff=0, 
op_get_delegation_token starting=0 current=0 diff=0, op_get_file_checksum 
starting=0 current=0 diff=0, op_get_file_status starting=2 current=2 diff=0, 
op_glob_status starting=0 current=0 diff=0, op_is_directory starting=0 
current=0 diff=0, op_is_file starting=0 current=0 diff=0, op_list_files 
starting=0 current=0 diff=0, op_list_located_status starting=0 current=0 
diff=0, op_list_status starting=0 current=0 diff=0, op_mkdirs starting=2 
current=2 diff=0, op_open starting=0 current=0 diff=0, op_rename starting=0 
current=0 diff=0, s3guard_metadatastore_authoritative_directories_updated 
starting=0 current=0 diff=0, s3guard_metadatastore_initialization starting=0 
current=0 diff=0, s3guard_metadatastore_put_path_request starting=0 current=0 
diff=0, s3guard_metadatastore_record_deletes starting=0 current=0 diff=0, 
s3guard_metadatastore_record_reads starting=0 current=0 diff=0, 
s3guard_metadatastore_record_writes starting=0 current=0 diff=0, 
s3guard_metadatastore_retry starting=0 current=0 diff=0, 
s3guard_metadatastore_throttled starting=0 current=0 diff=0, store_io_request 
starting=0 current=0 diff=0, store_io_retry starting=0 current=0 diff=0, 
store_io_throttled starting=0 current=0 diff=0, stream_aborted star

[jira] [Work logged] (HADOOP-17454) [s3a] Disable bucket existence check - set fs.s3a.bucket.probe to 0

2021-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17454?focusedWorklogId=531148&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-531148
 ]

ASF GitHub Bot logged work on HADOOP-17454:
---

Author: ASF GitHub Bot
Created on: 05/Jan/21 12:11
Start Date: 05/Jan/21 12:11
Worklog Time Spent: 10m 
  Work Description: bgaborg commented on pull request #2593:
URL: https://github.com/apache/hadoop/pull/2593#issuecomment-754598161


   Testing: I had one failing test during the CLI run, but it was ok after a 
rerun:
   ```
   [ERROR]   
ITestS3ADeleteCost.testDeleteSingleFileInDir:110->AbstractS3ACostTest.verifyMetrics:360->Assert.assertEquals:645->Assert.failNotEquals:834->Assert.fail:88
 operation returning after fs.delete(simpleFile) action_executor_acquired 
starting=0 current=0 diff=0, action_http_get_request starting=0 current=0 
diff=0, action_http_head_request starting=4 current=5 diff=1, 
committer_bytes_committed starting=0 current=0 diff=0, committer_bytes_uploaded 
starting=0 current=0 diff=0, committer_commit_job starting=0 current=0 diff=0, 
committer_commits.failures starting=0 current=0 diff=0, 
committer_commits_aborted starting=0 current=0 diff=0, 
committer_commits_completed starting=0 current=0 diff=0, 
committer_commits_created starting=0 current=0 diff=0, 
committer_commits_reverted starting=0 current=0 diff=0, 
committer_jobs_completed starting=0 current=0 diff=0, committer_jobs_failed 
starting=0 current=0 diff=0, committer_magic_files_created starting=0 current=0 
diff=0, committer_materialize_file starting=0 current=0 diff=0, 
committer_stage_file_upload starting=0 current=0 diff=0, 
committer_tasks_completed starting=0 current=0 diff=0, committer_tasks_failed 
starting=0 current=0 diff=0, delegation_token_issued starting=0 current=0 
diff=0, directories_created starting=2 current=3 diff=1, directories_deleted 
starting=0 current=0 diff=0, fake_directories_created starting=0 current=0 
diff=0, fake_directories_deleted starting=6 current=8 diff=2, files_copied 
starting=0 current=0 diff=0, files_copied_bytes starting=0 current=0 diff=0, 
files_created starting=1 current=1 diff=0, files_delete_rejected starting=0 
current=0 diff=0, files_deleted starting=0 current=1 diff=1, ignored_errors 
starting=0 current=0 diff=0, multipart_instantiated starting=0 current=0 
diff=0, multipart_upload_abort_under_path_invoked starting=0 current=0 diff=0, 
multipart_upload_aborted starting=0 current=0 diff=0, 
multipart_upload_completed starting=0 current=0 diff=0, 
multipart_upload_part_put starting=0 current=0 diff=0, 
multipart_upload_part_put_bytes starting=0 current=0 diff=0, 
multipart_upload_started starting=0 current=0 diff=0, 
object_bulk_delete_request starting=3 current=4 diff=1, 
object_continue_list_request starting=0 current=0 diff=0, object_copy_requests 
starting=0 current=0 diff=0, object_delete_objects starting=6 current=9 diff=3, 
object_delete_request starting=0 current=1 diff=1, object_list_request 
starting=5 current=6 diff=1, object_metadata_request starting=4 current=5 
diff=1, object_multipart_aborted starting=0 current=0 diff=0, 
object_multipart_initiated starting=0 current=0 diff=0, object_put_bytes 
starting=0 current=0 diff=0, object_put_request starting=3 current=4 diff=1, 
object_put_request_completed starting=3 current=4 diff=1, 
object_select_requests starting=0 current=0 diff=0, op_copy_from_local_file 
starting=0 current=0 diff=0, op_create starting=1 current=1 diff=0, 
op_create_non_recursive starting=0 current=0 diff=0, op_delete starting=0 
current=1 diff=1, op_exists starting=0 current=0 diff=0, 
op_get_delegation_token starting=0 current=0 diff=0, op_get_file_checksum 
starting=0 current=0 diff=0, op_get_file_status starting=2 current=2 diff=0, 
op_glob_status starting=0 current=0 diff=0, op_is_directory starting=0 
current=0 diff=0, op_is_file starting=0 current=0 diff=0, op_list_files 
starting=0 current=0 diff=0, op_list_located_status starting=0 current=0 
diff=0, op_list_status starting=0 current=0 diff=0, op_mkdirs starting=2 
current=2 diff=0, op_open starting=0 current=0 diff=0, op_rename starting=0 
current=0 diff=0, s3guard_metadatastore_authoritative_directories_updated 
starting=0 current=0 diff=0, s3guard_metadatastore_initialization starting=0 
current=0 diff=0, s3guard_metadatastore_put_path_request starting=0 current=0 
diff=0, s3guard_metadatastore_record_deletes starting=0 current=0 diff=0, 
s3guard_metadatastore_record_reads starting=0 current=0 diff=0, 
s3guard_metadatastore_record_writes starting=0 current=0 diff=0, 
s3guard_metadatastore_retry starting=0 current=0 diff=0, 
s3guard_metadatastore_throttled starting=0 current=0 diff=0, store_io_request 
starting=0 current=0 diff=0, store_io_retry starting=0 current=0 diff=0, 
store_io_throttled starting=0 current=0 diff=0, stream_aborted starting=0 
current=0 diff=0, stream_read_

[GitHub] [hadoop] bgaborg edited a comment on pull request #2593: HADOOP-17454. [s3a] Disable bucket existence check - set fs.s3a.bucket.probe to 0

2021-01-05 Thread GitBox


bgaborg edited a comment on pull request #2593:
URL: https://github.com/apache/hadoop/pull/2593#issuecomment-754598161


   tested against ireland.
   
   Testing: I had one failing test during the CLI run, but it was ok after a 
rerun:
   ```
   [ERROR]   
ITestS3ADeleteCost.testDeleteSingleFileInDir:110->AbstractS3ACostTest.verifyMetrics:360->Assert.assertEquals:645->Assert.failNotEquals:834->Assert.fail:88
 operation returning after fs.delete(simpleFile) action_executor_acquired 
starting=0 current=0 diff=0, action_http_get_request starting=0 current=0 
diff=0, action_http_head_request starting=4 current=5 diff=1, 
committer_bytes_committed starting=0 current=0 diff=0, committer_bytes_uploaded 
starting=0 current=0 diff=0, committer_commit_job starting=0 current=0 diff=0, 
committer_commits.failures starting=0 current=0 diff=0, 
committer_commits_aborted starting=0 current=0 diff=0, 
committer_commits_completed starting=0 current=0 diff=0, 
committer_commits_created starting=0 current=0 diff=0, 
committer_commits_reverted starting=0 current=0 diff=0, 
committer_jobs_completed starting=0 current=0 diff=0, committer_jobs_failed 
starting=0 current=0 diff=0, committer_magic_files_created starting=0 current=0 
diff=0, committer_materiali
 ze_file starting=0 current=0 diff=0, committer_stage_file_upload starting=0 
current=0 diff=0, committer_tasks_completed starting=0 current=0 diff=0, 
committer_tasks_failed starting=0 current=0 diff=0, delegation_token_issued 
starting=0 current=0 diff=0, directories_created starting=2 current=3 diff=1, 
directories_deleted starting=0 current=0 diff=0, fake_directories_created 
starting=0 current=0 diff=0, fake_directories_deleted starting=6 current=8 
diff=2, files_copied starting=0 current=0 diff=0, files_copied_bytes starting=0 
current=0 diff=0, files_created starting=1 current=1 diff=0, 
files_delete_rejected starting=0 current=0 diff=0, files_deleted starting=0 
current=1 diff=1, ignored_errors starting=0 current=0 diff=0, 
multipart_instantiated starting=0 current=0 diff=0, 
multipart_upload_abort_under_path_invoked starting=0 current=0 diff=0, 
multipart_upload_aborted starting=0 current=0 diff=0, 
multipart_upload_completed starting=0 current=0 diff=0, 
multipart_upload_part_put startin
 g=0 current=0 diff=0, multipart_upload_part_put_bytes starting=0 current=0 
diff=0, multipart_upload_started starting=0 current=0 diff=0, 
object_bulk_delete_request starting=3 current=4 diff=1, 
object_continue_list_request starting=0 current=0 diff=0, object_copy_requests 
starting=0 current=0 diff=0, object_delete_objects starting=6 current=9 diff=3, 
object_delete_request starting=0 current=1 diff=1, object_list_request 
starting=5 current=6 diff=1, object_metadata_request starting=4 current=5 
diff=1, object_multipart_aborted starting=0 current=0 diff=0, 
object_multipart_initiated starting=0 current=0 diff=0, object_put_bytes 
starting=0 current=0 diff=0, object_put_request starting=3 current=4 diff=1, 
object_put_request_completed starting=3 current=4 diff=1, 
object_select_requests starting=0 current=0 diff=0, op_copy_from_local_file 
starting=0 current=0 diff=0, op_create starting=1 current=1 diff=0, 
op_create_non_recursive starting=0 current=0 diff=0, op_delete starting=0 
current=1 diff=1, op_exists starting=0 current=0 diff=0, op_get_delegation_token 
starting=0 current=0 diff=0, op_get_file_checksum starting=0 current=0 diff=0, 
op_get_file_status starting=2 current=2 diff=0, op_glob_status starting=0 
current=0 diff=0, op_is_directory starting=0 current=0 diff=0, op_is_file 
starting=0 current=0 diff=0, op_list_files starting=0 current=0 diff=0, 
op_list_located_status starting=0 current=0 diff=0, op_list_status starting=0 
current=0 diff=0, op_mkdirs starting=2 current=2 diff=0, op_open starting=0 
current=0 diff=0, op_rename starting=0 current=0 diff=0, 
s3guard_metadatastore_authoritative_directories_updated starting=0 current=0 
diff=0, s3guard_metadatastore_initialization starting=0 current=0 diff=0, 
s3guard_metadatastore_put_path_request starting=0 current=0 diff=0, 
s3guard_metadatastore_record_deletes starting=0 current=0 diff=0, 
s3guard_metadatastore_record_reads starting=0 current=0 diff=0, 
s3guard_metadatastore_record_writes starting=0 current=0 diff=0, s3guard_metadatastore_retry starting=0 current=0 diff=0, 
s3guard_metadatastore_throttled starting=0 current=0 diff=0, store_io_request 
starting=0 current=0 diff=0, store_io_retry starting=0 current=0 diff=0, 
store_io_throttled starting=0 current=0 diff=0, stream_aborted starting=0 
current=0 diff=0, stream_read_bytes starting=0 current=0 diff=0, 
stream_read_bytes_backwards_on_seek starting=0 current=0 diff=0, 
stream_read_bytes_discarded_in_abort starting=0 current=0 diff=0, 
stream_read_bytes_discarded_in_close starting=0 current=0 diff=0, 
stream_read_close_operations starting=0 current=0 diff=0, stream_read_closed 
starting=0 current=0 diff=0, stream_read_exceptions starting=0 current=0 
d
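   ```
   
   The starting/current/diff triples above come from snapshotting filesystem counters before and after the operation under test, then asserting on the difference. A minimal sketch of that pattern against Hadoop's generic StorageStatistics API (the bucket, path, counter name, and expected diff are illustrative assumptions, not the actual probes from AbstractS3ACostTest):
   
   ```
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.fs.Path;
   import org.apache.hadoop.fs.StorageStatistics;
   
   public class CounterDiffSketch {
     public static void main(String[] args) throws Exception {
       // Hypothetical bucket and path, chosen for illustration only.
       Path file = new Path("s3a://example-bucket/dir/simpleFile");
       FileSystem fs = file.getFileSystem(new Configuration());
       StorageStatistics stats = fs.getStorageStatistics();
   
       long starting = counter(stats, "object_delete_request"); // snapshot before
       fs.delete(file, false);                                  // operation under test
       long current = counter(stats, "object_delete_request");  // snapshot after
   
       long diff = current - starting;
       if (diff != 1) { // assumed expected cost: exactly one DELETE request
         throw new AssertionError("object_delete_request diff=" + diff + ", expected 1");
       }
       fs.close();
     }
   
     private static long counter(StorageStatistics stats, String key) {
       Long v = stats.getLong(key); // null when the counter is unknown
       return v == null ? 0 : v;
     }
   }
   ```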

[jira] [Commented] (HADOOP-16995) ITestS3AConfiguration proxy tests fail when bucket probes == 0

2021-01-05 Thread Gabor Bota (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17258861#comment-17258861
 ] 

Gabor Bota commented on HADOOP-16995:
-

https://github.com/apache/hadoop/pull/2593

> ITestS3AConfiguration proxy tests fail when bucket probes == 0
> --
>
> Key: HADOOP-16995
> URL: https://issues.apache.org/jira/browse/HADOOP-16995
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
>
> when bucket probes are disabled, proxy config tests in ITestS3AConfiguration 
> fail because the probes aren't being attempted in initialize()
> {code}
>   <property>
>     <name>fs.s3a.bucket.probe</name>
>     <value>0</value>
>   </property>
> {code}
> Cause: HADOOP-16711
> Fix: call unsetBaseAndBucketOverrides for bucket probe in test conf, then set 
> the probe value to 2 just to be resilient to future default changes.
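A minimal sketch of that fix as a test-configuration helper, assuming the hadoop-aws test utility (spelled removeBaseAndBucketOverrides in S3ATestUtils; "unsetBaseAndBucketOverrides" above refers to the same helper):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.s3a.S3ATestUtils;

public class ProxyTestConfSketch {
  private static final String BUCKET_PROBE = "fs.s3a.bucket.probe";

  static Configuration createProxyTestConf() {
    Configuration conf = new Configuration();
    // Drop both the base option and any per-bucket override picked up
    // from the local test setup, so the value below always wins.
    S3ATestUtils.removeBaseAndBucketOverrides(conf, BUCKET_PROBE);
    // Force a probe so initialize() attempts the bucket check even if
    // the shipped default changes again.
    conf.setInt(BUCKET_PROBE, 2);
    return conf;
  }
}
{code}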



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17454) [s3a] Disable bucket existence check - set fs.s3a.bucket.probe to 0

2021-01-05 Thread Gabor Bota (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17258860#comment-17258860
 ] 

Gabor Bota commented on HADOOP-17454:
-

https://github.com/apache/hadoop/pull/2593

> [s3a] Disable bucket existence check - set fs.s3a.bucket.probe to 0
> ---
>
> Key: HADOOP-17454
> URL: https://issues.apache.org/jira/browse/HADOOP-17454
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Set the value of fs.s3a.bucket.probe to 0 by default.
> Bucket existence checks are done in the initialization phase of the 
> S3AFileSystem. It's not required to run this check: if the bucket does not 
> exist, the operation itself will fail instead of the check.
> Some points on why we want to set this to 0:
> * When it's set to 0, bucket existence checks won't be done during 
> initialization, making it faster.
> * It avoids the additional one or two requests on the bucket root, so the user 
> does not need rights to read or list that folder.
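As an illustration of the option itself, a minimal sketch of setting the probe on a client Configuration (the fs.s3a.bucket.probe key is from this issue; the bucket name and scaffolding are hypothetical):

{code}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class BucketProbeSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // 0 skips the bucket existence check in initialize() (the new default);
    // a non-zero value re-enables a probe at filesystem creation time.
    conf.setInt("fs.s3a.bucket.probe", 0);
    try (FileSystem fs = FileSystem.get(new URI("s3a://example-bucket/"), conf)) {
      System.out.println("Initialized without a bucket probe: " + fs.getUri());
    }
  }
}
{code}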



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17454) [s3a] Disable bucket existence check - set fs.s3a.bucket.probe to 0

2021-01-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-17454:

Labels: pull-request-available  (was: )

> [s3a] Disable bucket existence check - set fs.s3a.bucket.probe to 0
> ---
>
> Key: HADOOP-17454
> URL: https://issues.apache.org/jira/browse/HADOOP-17454
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Set the value of fs.s3a.bucket.probe to 0 by default.
> Bucket existence checks are done in the initialization phase of the 
> S3AFileSystem. It's not required to run this check: if the bucket does not 
> exist, the operation itself will fail instead of the check.
> Some points on why we want to set this to 0:
> * When it's set to 0, bucket existence checks won't be done during 
> initialization, making it faster.
> * It avoids the additional one or two requests on the bucket root, so the user 
> does not need rights to read or list that folder.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bgaborg opened a new pull request #2593: HADOOP-17454. [s3a] Disable bucket existence check - set fs.s3a.bucket.probe to 0

2021-01-05 Thread GitBox


bgaborg opened a new pull request #2593:
URL: https://github.com/apache/hadoop/pull/2593


   Also fixes HADOOP-16995 (ITestS3AConfiguration proxy test failures when 
bucket probes == 0).
   The improvement should include the fix, because the tests would otherwise 
fail by default.
   
   Change-Id: I9a7e4b5e6d4391ebba096c15e84461c038a2ec59
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2564: YARN-10538: Add RECOMMISSIONING nodes to the list of updated nodes returned to the AM

2021-01-05 Thread GitBox


hadoop-yetus commented on pull request #2564:
URL: https://github.com/apache/hadoop/pull/2564#issuecomment-754595737


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 43s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  0s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   0m 52s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 57s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m  0s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 51s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m 49s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 50s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 52s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   0m 52s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 45s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 47s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  15m  9s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   1m 45s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  89m 19s | 
[/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2564/4/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt)
 |  hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 33s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 170m 10s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2564/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2564 |
   | JIRA Issue | YARN-10538 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 7af9e48c7ab7 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 2b4febcf576 |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2564/4/testReport/ |
   | Max. process+thread count | 881 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
   | Console output | 
https://ci-hadoop.apache.org/job/had

[jira] [Updated] (HADOOP-16995) ITestS3AConfiguration proxy tests fail when bucket probes == 0

2021-01-05 Thread Gabor Bota (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-16995:

Priority: Major  (was: Minor)

> ITestS3AConfiguration proxy tests fail when bucket probes == 0
> --
>
> Key: HADOOP-16995
> URL: https://issues.apache.org/jira/browse/HADOOP-16995
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
>
> when bucket probes are disabled, proxy config tests in ITestS3AConfiguration 
> fail because the probes aren't being attempted in initialize()
> {code}
>   <property>
>     <name>fs.s3a.bucket.probe</name>
>     <value>0</value>
>   </property>
> {code}
> Cause: HADOOP-16711
> Fix: call unsetBaseAndBucketOverrides for bucket probe in test conf, then set 
> the probe value to 2 just to be resilient to future default changes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-17454) [s3a] Disable bucket existence check - set fs.s3a.bucket.probe to 0

2021-01-05 Thread Gabor Bota (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-17454 started by Gabor Bota.
---
> [s3a] Disable bucket existence check - set fs.s3a.bucket.probe to 0
> ---
>
> Key: HADOOP-17454
> URL: https://issues.apache.org/jira/browse/HADOOP-17454
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> Set the value of fs.s3a.bucket.probe to 0 by default.
> Bucket existence checks are done in the initialization phase of the 
> S3AFileSystem. It's not required to run this check: if the bucket does not 
> exist, the operation itself will fail instead of the check.
> Some points on why we want to set this to 0:
> * When it's set to 0, bucket existence checks won't be done during 
> initialization, making it faster.
> * It avoids the additional one or two requests on the bucket root, so the user 
> does not need rights to read or list that folder.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17454) [s3a] Disable bucket existence check - set fs.s3a.bucket.probe to 0

2021-01-05 Thread Gabor Bota (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-17454:

Description: 
Set the value of fs.s3a.bucket.probe to 0 by default.
Bucket existence checks are done in the initialization phase of the 
S3AFileSystem. It's not required to run this check: if the bucket does not exist, 
the operation itself will fail instead of the check.

Some points on why we want to set this to 0:
* When it's set to 0, bucket existence checks won't be done during 
initialization, making it faster.
* It avoids the additional one or two requests on the bucket root, so the user 
does not need rights to read or list that folder.

  was:
Set the value of fs.s3a.bucket.probe to 0 in core-default.xml.
Bucket existence checks are done in the initialization phase of the 
S3AFileSystem. It's not required to run this check: if the bucket does not exist, 
the operation itself will fail instead of the check.

Some points on why we want to set this to 0:
* When it's set to 0, bucket existence checks won't be done during 
initialization, making it faster.
* It avoids the additional one or two requests on the bucket root, so the user 
does not need rights to read or list that folder.


> [s3a] Disable bucket existence check - set fs.s3a.bucket.probe to 0
> ---
>
> Key: HADOOP-17454
> URL: https://issues.apache.org/jira/browse/HADOOP-17454
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> Set the value of fs.s3a.bucket.probe to 0 by default.
> Bucket existence checks are done in the initialization phase of the 
> S3AFileSystem. It's not required to run this check: if the bucket does not 
> exist, the operation itself will fail instead of the check.
> Some points on why we want to set this to 0:
> * When it's set to 0, bucket existence checks won't be done during 
> initialization, making it faster.
> * It avoids the additional one or two requests on the bucket root, so the user 
> does not need rights to read or list that folder.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-16995) ITestS3AConfiguration proxy tests fail when bucket probes == 0

2021-01-05 Thread Gabor Bota (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-16995 started by Gabor Bota.
---
> ITestS3AConfiguration proxy tests fail when bucket probes == 0
> --
>
> Key: HADOOP-16995
> URL: https://issues.apache.org/jira/browse/HADOOP-16995
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Minor
>
> when bucket probes are disabled, proxy config tests in ITestS3AConfiguration 
> fail because the probes aren't being attempted in initialize()
> {code}
>   <property>
>     <name>fs.s3a.bucket.probe</name>
>     <value>0</value>
>   </property>
> {code}
> Cause: HADOOP-16711
> Fix: call unsetBaseAndBucketOverrides for bucket probe in test conf, then set 
> the probe value to 2 just to be resilient to future default changes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2592: YARN-10560. Upgrade node.js to 10.23.1 and yarn to 1.22.5 in Web UI v2.

2021-01-05 Thread GitBox


hadoop-yetus commented on pull request #2592:
URL: https://github.com/apache/hadoop/pull/2592#issuecomment-754564555


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 36s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 32s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   5m  7s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   4m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  mvnsite  |   0m 43s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 48s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m  1s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   4m  1s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   4m 20s |  |  the patch passed  |
   | +1 :green_heart: |  hadolint  |   0m  6s |  |  There were no new hadolint 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  1s |  |  There were no new 
shellcheck issues.  |
   | +1 :green_heart: |  shelldocs  |   0m 18s |  |  The patch generated 0 new 
+ 104 unchanged - 132 fixed = 104 total (was 236)  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  19m 37s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   4m 17s |  |  hadoop-yarn-ui in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 58s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 103m 53s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2592/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2592 |
   | Optional Tests | dupname asflicense hadolint shellcheck shelldocs compile 
javac javadoc mvninstall mvnsite unit shadedclient xml |
   | uname | Linux bb1d974ce186 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 2b4febcf576 |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2592/1/testReport/ |
   | Max. process+thread count | 689 (vs. ulimit of 5500) |
   | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2592/1/console |
   | versions | git=2.17.1 maven=3.6.0 shellcheck=0.4.6 
hadolint=1.11.1-0-g0e692dd |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please 

[jira] [Assigned] (HADOOP-16995) ITestS3AConfiguration proxy tests fail when bucket probes == 0

2021-01-05 Thread Gabor Bota (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota reassigned HADOOP-16995:
---

Assignee: Gabor Bota  (was: Mukund Thakur)

> ITestS3AConfiguration proxy tests fail when bucket probes == 0
> --
>
> Key: HADOOP-16995
> URL: https://issues.apache.org/jira/browse/HADOOP-16995
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Minor
>
> when bucket probes are disabled, proxy config tests in ITestS3AConfiguration 
> fail because the probes aren't being attempted in initialize()
> {code}
>   <property>
>     <name>fs.s3a.bucket.probe</name>
>     <value>0</value>
>   </property>
> {code}
> Cause: HADOOP-16711
> Fix: call unsetBaseAndBucketOverrides for bucket probe in test conf, then set 
> the probe value to 2 just to be resilient to future default changes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17454) [s3a] Disable bucket existence check - set fs.s3a.bucket.probe to 0

2021-01-05 Thread Gabor Bota (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-17454:

Release Note: The S3A bucket existence check is disabled by default 
(fs.s3a.bucket.probe is 0), so no existence check is made on the bucket during 
S3AFileSystem initialization. The first operation against the bucket will fail if 
it does not exist.
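In practice that shifts the failure from initialize() to the first request; a minimal sketch of what a caller sees (the bucket name is hypothetical, and the concrete IOException subclass depends on the release, so only the base type is caught):

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FirstOpFailureSketch {
  public static void main(String[] args) throws Exception {
    Path root = new Path("s3a://no-such-bucket/");
    // With fs.s3a.bucket.probe = 0 this succeeds even for a missing bucket.
    FileSystem fs = root.getFileSystem(new Configuration());
    try {
      fs.getFileStatus(root); // first real request: this is where it fails
    } catch (IOException e) {
      System.err.println("Bucket check deferred to first use: " + e);
    } finally {
      fs.close();
    }
  }
}
{code}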

> [s3a] Disable bucket existence check - set fs.s3a.bucket.probe to 0
> ---
>
> Key: HADOOP-17454
> URL: https://issues.apache.org/jira/browse/HADOOP-17454
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> Set the value of fs.s3a.bucket.probe to 0 in core-default.xml.
> Bucket existence checks are done in the initialization phase of the 
> S3AFileSystem. It's not required to run this check: if the bucket does not 
> exist, the operation itself will fail instead of the check.
> Some points on why we want to set this to 0:
> * When it's set to 0, bucket existence checks won't be done during 
> initialization, making it faster.
> * It avoids the additional one or two requests on the bucket root, so the user 
> does not need rights to read or list that folder.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17454) [s3a] Disable bucket existence check - set fs.s3a.bucket.probe to 0

2021-01-05 Thread Gabor Bota (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17258817#comment-17258817
 ] 

Gabor Bota commented on HADOOP-17454:
-

cc [~ste...@apache.org]

> [s3a] Disable bucket existence check - set fs.s3a.bucket.probe to 0
> ---
>
> Key: HADOOP-17454
> URL: https://issues.apache.org/jira/browse/HADOOP-17454
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> Set the value of fs.s3a.bucket.probe to 0 in core-default.xml.
> Bucket existence checks are done in the initialization phase of the 
> S3AFileSystem. It's not required to run this check: if the bucket does not 
> exist, the operation itself will fail instead of the check.
> Some points on why we want to set this to 0:
> * When it's set to 0, bucket existence checks won't be done during 
> initialization, making it faster.
> * It avoids the additional one or two requests on the bucket root, so the user 
> does not need rights to read or list that folder.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17454) [s3a] Disable bucket existence check - set fs.s3a.bucket.probe to 0

2021-01-05 Thread Gabor Bota (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-17454:

Affects Version/s: 3.3.0

> [s3a] Disable bucket existence check - set fs.s3a.bucket.probe to 0
> ---
>
> Key: HADOOP-17454
> URL: https://issues.apache.org/jira/browse/HADOOP-17454
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> Set the value of fs.s3a.bucket.probe to 0 in core-default.xml.
> Bucket existence checks are done in the initialization phase of the 
> S3AFileSystem. It's not required to run this check: if the bucket does not 
> exist, the operation itself will fail instead of the check.
> Some points on why we want to set this to 0:
> * When it's set to 0, bucket existence checks won't be done during 
> initialization, making it faster.
> * It avoids the additional one or two requests on the bucket root, so the user 
> does not need rights to read or list that folder.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17454) [s3a] Disable bucket existence check - set fs.s3a.bucket.probe to 0

2021-01-05 Thread Gabor Bota (Jira)
Gabor Bota created HADOOP-17454:
---

 Summary: [s3a] Disable bucket existence check - set 
fs.s3a.bucket.probe to 0
 Key: HADOOP-17454
 URL: https://issues.apache.org/jira/browse/HADOOP-17454
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Gabor Bota
Assignee: Gabor Bota


Set the value of fs.s3a.bucket.probe to 0 in core-default.xml.
Bucket existence checks are done in the initialization phase of the 
S3AFileSystem. It's not required to run this check: if the bucket does not exist, 
the operation itself will fail instead of the check.

Some points on why we want to set this to 0:
* When it's set to 0, bucket existence checks won't be done during 
initialization, making it faster.
* It avoids the additional one or two requests on the bucket root, so the user 
does not need rights to read or list that folder.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2591: YARN-10561. Upgrade node.js to 10.23.1 and yarn to 1.22.5 in YARN application catalog webapp

2021-01-05 Thread GitBox


hadoop-yetus commented on pull request #2591:
URL: https://github.com/apache/hadoop/pull/2591#issuecomment-754553184


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  26m 17s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 44s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  49m 58s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 47s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  15m  8s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 47s |  |  
hadoop-yarn-applications-catalog-webapp in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  99m 45s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2591/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2591 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml |
   | uname | Linux 8622134ef49d 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 2b4febcf576 |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2591/1/testReport/ |
   | Max. process+thread count | 539 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp
 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2591/1/console |
   | versions | git=2.17.1 maven=3.6.0 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



--

[jira] [Updated] (HADOOP-17452) Upgrade guice to 4.2.3

2021-01-05 Thread Yuming Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HADOOP-17452:
-
Description: 
Upgrade guice to 4.2.3 to fix a compatibility issue:
{noformat}
Exception in thread "main" java.lang.NoSuchMethodError: 
com.google.inject.util.Types.collectionOf(Ljava/lang/reflect/Type;)Ljava/lang/reflect/ParameterizedType;
    at com.google.inject.multibindings.Multibinder.collectionOfProvidersOf(Multibinder.java:202)
    at com.google.inject.multibindings.Multibinder$RealMultibinder.<init>(Multibinder.java:283)
    at com.google.inject.multibindings.Multibinder$RealMultibinder.<init>(Multibinder.java:258)
    at com.google.inject.multibindings.Multibinder.newRealSetBinder(Multibinder.java:178)
    at com.google.inject.multibindings.Multibinder.newSetBinder(Multibinder.java:150)
    at org.apache.druid.guice.LifecycleModule.getEagerBinder(LifecycleModule.java:115)
    at org.apache.druid.guice.LifecycleModule.configure(LifecycleModule.java:121)
    at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
    at com.google.inject.spi.Elements.getElements(Elements.java:110)
    at com.google.inject.util.Modules$OverrideModule.configure(Modules.java:177)
    at com.google.inject.AbstractModule.configure(AbstractModule.java:62)
    at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
    at com.google.inject.spi.Elements.getElements(Elements.java:110)
    at com.google.inject.util.Modules$OverrideModule.configure(Modules.java:177)
    at com.google.inject.AbstractModule.configure(AbstractModule.java:62)
    at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
    at com.google.inject.spi.Elements.getElements(Elements.java:110)
    at com.google.inject.internal.InjectorShell$Builder.build(InjectorShell.java:138)
    at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:104)
    at com.google.inject.Guice.createInjector(Guice.java:96)
    at com.google.inject.Guice.createInjector(Guice.java:73)
    at com.google.inject.Guice.createInjector(Guice.java:62)
    at org.apache.druid.initialization.Initialization.makeInjectorWithModules(Initialization.java:431)
    at org.apache.druid.cli.GuiceRunnable.makeInjector(GuiceRunnable.java:69)
    at org.apache.druid.cli.ServerRunnable.run(ServerRunnable.java:58)
    at org.apache.druid.cli.Main.main(Main.java:113)
{noformat}

  was:
Upgrade guice to 4.1.0 to fix a compatibility issue:

{noformat}
Exception in thread "main" java.lang.NoSuchMethodError: 
com.google.inject.util.Types.collectionOf(Ljava/lang/reflect/Type;)Ljava/lang/reflect/ParameterizedType;
    at com.google.inject.multibindings.Multibinder.collectionOfProvidersOf(Multibinder.java:202)
    at com.google.inject.multibindings.Multibinder$RealMultibinder.<init>(Multibinder.java:283)
    at com.google.inject.multibindings.Multibinder$RealMultibinder.<init>(Multibinder.java:258)
    at com.google.inject.multibindings.Multibinder.newRealSetBinder(Multibinder.java:178)
    at com.google.inject.multibindings.Multibinder.newSetBinder(Multibinder.java:150)
    at org.apache.druid.guice.LifecycleModule.getEagerBinder(LifecycleModule.java:115)
    at org.apache.druid.guice.LifecycleModule.configure(LifecycleModule.java:121)
    at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
    at com.google.inject.spi.Elements.getElements(Elements.java:110)
    at com.google.inject.util.Modules$OverrideModule.configure(Modules.java:177)
    at com.google.inject.AbstractModule.configure(AbstractModule.java:62)
    at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
    at com.google.inject.spi.Elements.getElements(Elements.java:110)
    at com.google.inject.util.Modules$OverrideModule.configure(Modules.java:177)
    at com.google.inject.AbstractModule.configure(AbstractModule.java:62)
    at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
    at com.google.inject.spi.Elements.getElements(Elements.java:110)
    at com.google.inject.internal.InjectorShell$Builder.build(InjectorShell.java:138)
    at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:104)
    at com.google.inject.Guice.createInjector(Guice.java:96)
    at com.google.inject.Guice.createInjector(Guice.java:73)
    at com.google.inject.Guice.createInjector(Guice.java:62)
    at org.apache.druid.initialization.Initialization.makeInjectorWithModules(Initialization.java:431)
    at org.apache.druid.cli.GuiceRunnable.makeInjector(GuiceRunnable.java:69)
    at org.apache.druid.cli.ServerRunnable.run(ServerRunnable.java:58)
    at org.apache.druid.cli.Main.main(Main.java:113)
{noformat}



> Upgrade guice to 4.2.3
> --
>
> Key: HADOOP-17452
> URL: https://issues.apache.org/jira/browse/HADOOP-17452
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yuming Wang
>

[jira] [Updated] (HADOOP-17452) Upgrade guice to 4.2.3

2021-01-05 Thread Yuming Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuming Wang updated HADOOP-17452:
-
Summary: Upgrade guice to 4.2.3  (was: Upgrade guice to 4.1.0)

> Upgrade guice to 4.2.3
> --
>
> Key: HADOOP-17452
> URL: https://issues.apache.org/jira/browse/HADOOP-17452
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yuming Wang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Upgrade guice to 4.1.0 to fix a compatibility issue:
> {noformat}
> Exception in thread "main" java.lang.NoSuchMethodError: 
> com.google.inject.util.Types.collectionOf(Ljava/lang/reflect/Type;)Ljava/lang/reflect/ParameterizedType;
>     at com.google.inject.multibindings.Multibinder.collectionOfProvidersOf(Multibinder.java:202)
>     at com.google.inject.multibindings.Multibinder$RealMultibinder.<init>(Multibinder.java:283)
>     at com.google.inject.multibindings.Multibinder$RealMultibinder.<init>(Multibinder.java:258)
>     at com.google.inject.multibindings.Multibinder.newRealSetBinder(Multibinder.java:178)
>     at com.google.inject.multibindings.Multibinder.newSetBinder(Multibinder.java:150)
>     at org.apache.druid.guice.LifecycleModule.getEagerBinder(LifecycleModule.java:115)
>     at org.apache.druid.guice.LifecycleModule.configure(LifecycleModule.java:121)
>     at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
>     at com.google.inject.spi.Elements.getElements(Elements.java:110)
>     at com.google.inject.util.Modules$OverrideModule.configure(Modules.java:177)
>     at com.google.inject.AbstractModule.configure(AbstractModule.java:62)
>     at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
>     at com.google.inject.spi.Elements.getElements(Elements.java:110)
>     at com.google.inject.util.Modules$OverrideModule.configure(Modules.java:177)
>     at com.google.inject.AbstractModule.configure(AbstractModule.java:62)
>     at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
>     at com.google.inject.spi.Elements.getElements(Elements.java:110)
>     at com.google.inject.internal.InjectorShell$Builder.build(InjectorShell.java:138)
>     at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:104)
>     at com.google.inject.Guice.createInjector(Guice.java:96)
>     at com.google.inject.Guice.createInjector(Guice.java:73)
>     at com.google.inject.Guice.createInjector(Guice.java:62)
>     at org.apache.druid.initialization.Initialization.makeInjectorWithModules(Initialization.java:431)
>     at org.apache.druid.cli.GuiceRunnable.makeInjector(GuiceRunnable.java:69)
>     at org.apache.druid.cli.ServerRunnable.run(ServerRunnable.java:58)
>     at org.apache.druid.cli.Main.main(Main.java:113)
> {noformat}
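For a downstream build hitting the NoSuchMethodError above, a minimal sketch of pinning the upgraded artifact through Maven dependencyManagement (the com.google.inject:guice coordinates are Guice's standard ones; the enclosing pom structure is illustrative):

{code}
<dependencyManagement>
  <dependencies>
    <!-- Pin Guice 4.2.3 so com.google.inject.util.Types.collectionOf(Type)
         is present on the runtime classpath, matching what callers such as
         the Multibinder frames above expect. -->
    <dependency>
      <groupId>com.google.inject</groupId>
      <artifactId>guice</artifactId>
      <version>4.2.3</version>
    </dependency>
  </dependencies>
</dependencyManagement>
{code}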



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2592: YARN-10560. Upgrade node.js to 10.23.1 and yarn to 1.22.5 in Web UI v2.

2021-01-05 Thread GitBox


hadoop-yetus commented on pull request #2592:
URL: https://github.com/apache/hadoop/pull/2592#issuecomment-754511076


   (!) A patch to the testing environment has been detected. 
   Re-executing against the patched versions to perform further tests. 
   The console is at 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2592/1/console in 
case of problems.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] aajisaka opened a new pull request #2592: YARN-10560. Upgrade node.js to 10.23.1 and yarn to 1.22.5 in Web UI v2.

2021-01-05 Thread GitBox


aajisaka opened a new pull request #2592:
URL: https://github.com/apache/hadoop/pull/2592


   JIRA: https://issues.apache.org/jira/browse/YARN-10560



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] aajisaka opened a new pull request #2591: YARN-10561. Upgrade node.js to 10.23.1 in YARN application catalog webapp

2021-01-05 Thread GitBox


aajisaka opened a new pull request #2591:
URL: https://github.com/apache/hadoop/pull/2591


   JIRA: https://issues.apache.org/jira/browse/YARN-10561
   
   Upgrade node.js and yarn to the latest versions.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org