[jira] [Work logged] (HADOOP-16080) hadoop-aws does not work with hadoop-client-api

2020-12-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16080?focusedWorklogId=518826&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-518826
 ]

ASF GitHub Bot logged work on HADOOP-16080:
---

Author: ASF GitHub Bot
Created on: 02/Dec/20 07:37
Start Date: 02/Dec/20 07:37
Worklog Time Spent: 10m 
  Work Description: aajisaka commented on pull request #2510:
URL: https://github.com/apache/hadoop/pull/2510#issuecomment-737052172


   BTW, I think trunk and branch-3.3 can be updated as well in a separate JIRA 
to avoid the usage of Guava as much as possible.
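
One common flavour of that cleanup, shown here as an illustrative sketch rather than a reference to any particular Hadoop patch, is replacing Guava helpers that now have JDK equivalents:

{code:java}
import java.util.Objects;

public class AvoidGuavaExample {
  // before: com.google.common.base.Preconditions.checkNotNull(conf, "conf");
  // after: the JDK equivalent, with no Guava dependency at all.
  static <T> T requireConf(T conf) {
    return Objects.requireNonNull(conf, "conf must not be null");
  }

  public static void main(String[] args) {
    System.out.println(requireConf("my-conf"));
  }
}
{code}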



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 518826)
Time Spent: 40m  (was: 0.5h)

> hadoop-aws does not work with hadoop-client-api
> ---
>
> Key: HADOOP-16080
> URL: https://issues.apache.org/jira/browse/HADOOP-16080
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.2.0, 3.1.1
>Reporter: Keith Turner
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> I attempted to use Accumulo and S3a with the following jars on the classpath.
>  * hadoop-client-api-3.1.1.jar
>  * hadoop-client-runtime-3.1.1.jar
>  * hadoop-aws-3.1.1.jar
> This failed with the following exception.
> {noformat}
> Exception in thread "init" java.lang.NoSuchMethodError: 
> org.apache.hadoop.util.SemaphoredDelegatingExecutor.<init>(Lcom/google/common/util/concurrent/ListeningExecutorService;IZ)V
> at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:769)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1169)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1149)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1108)
> at org.apache.hadoop.fs.FileSystem.createNewFile(FileSystem.java:1413)
> at 
> org.apache.accumulo.server.fs.VolumeManagerImpl.createNewFile(VolumeManagerImpl.java:184)
> at 
> org.apache.accumulo.server.init.Initialize.initDirs(Initialize.java:479)
> at 
> org.apache.accumulo.server.init.Initialize.initFileSystem(Initialize.java:487)
> at 
> org.apache.accumulo.server.init.Initialize.initialize(Initialize.java:370)
> at org.apache.accumulo.server.init.Initialize.doInit(Initialize.java:348)
> at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:967)
> at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:129)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}
> The problem is that {{S3AFileSystem.create()}} looks for the constructor 
> {{SemaphoredDelegatingExecutor(com.google.common.util.concurrent.ListeningExecutorService)}},
>  which does not exist in hadoop-client-api-3.1.1.jar. What does exist is 
> {{SemaphoredDelegatingExecutor(org.apache.hadoop.shaded.com.google.common.util.concurrent.ListeningExecutorService)}}.
> To work around this issue, I created a version of hadoop-aws-3.1.1.jar that 
> relocated its references to Guava.
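
A quick way to see which constructor signature is actually on the classpath is a reflective probe; the diagnostic below is an illustrative sketch, not part of the original report:

{code:java}
import java.util.Arrays;

// Print the constructors of the SemaphoredDelegatingExecutor the JVM resolves.
// Run against hadoop-client-api-3.1.1, the parameter type shows up under the
// relocated org.apache.hadoop.shaded.com.google.common package, which is why
// hadoop-aws (compiled against unshaded Guava) fails with NoSuchMethodError.
public class CheckShadedGuava {
  public static void main(String[] args) throws Exception {
    Class<?> c =
        Class.forName("org.apache.hadoop.util.SemaphoredDelegatingExecutor");
    Arrays.stream(c.getDeclaredConstructors()).forEach(System.out::println);
  }
}
{code}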



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org






[jira] [Commented] (HADOOP-17288) Use shaded guava from thirdparty

2020-12-01 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17242100#comment-17242100
 ] 

Ayush Saxena commented on HADOOP-17288:
---

Thanks everyone. I have put up a PR backporting this to branch-3.3.

Please have a look:

[GitHub Pull Request #2505|https://github.com/apache/hadoop/pull/2505]

> Use shaded guava from thirdparty
> 
>
> Key: HADOOP-17288
> URL: https://issues.apache.org/jira/browse/HADOOP-17288
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Use the shaded version of guava in hadoop-thirdparty
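
In practice this subtask swaps direct Guava imports for the relocated copy published by hadoop-thirdparty. A minimal before/after sketch, assuming the standard {{org.apache.hadoop.thirdparty}} relocation prefix:

{code:java}
// before: direct Guava dependency
// import com.google.common.base.Preconditions;

// after: shaded Guava from hadoop-thirdparty (hadoop-shaded-guava artifact)
import org.apache.hadoop.thirdparty.com.google.common.base.Preconditions;

public class ShadedGuavaExample {
  public static void main(String[] args) {
    // Same API, different package: callers only change the import.
    Preconditions.checkNotNull(args, "args must not be null");
    System.out.println("using shaded Guava from hadoop-thirdparty");
  }
}
{code}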



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17288) Use shaded guava from thirdparty

2020-12-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17288?focusedWorklogId=518807&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-518807
 ]

ASF GitHub Bot logged work on HADOOP-17288:
---

Author: ASF GitHub Bot
Created on: 02/Dec/20 07:04
Start Date: 02/Dec/20 07:04
Worklog Time Spent: 10m 
  Work Description: ayushtkn commented on pull request #2505:
URL: https://github.com/apache/hadoop/pull/2505#issuecomment-737037955


   The test results are at
   https://ci-hadoop.apache.org/blue/organizations/jenkins/hadoop-multibranch/detail/PR-2505/1/tests
   
   Not sure why there is no comment from Jenkins here.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 518807)
Time Spent: 3h 50m  (was: 3h 40m)

> Use shaded guava from thirdparty
> 
>
> Key: HADOOP-17288
> URL: https://issues.apache.org/jira/browse/HADOOP-17288
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Use the shaded version of guava in hadoop-thirdparty



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org






[GitHub] [hadoop] hadoop-yetus commented on pull request #2511: HDFS-15704. Mitigate lease monitor's rapid infinite loop.

2020-12-01 Thread GitBox


hadoop-yetus commented on pull request #2511:
URL: https://github.com/apache/hadoop/pull/2511#issuecomment-737036735


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 30s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 54s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   1m 12s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 49s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 23s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 46s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m  7s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  5s |  |  trunk passed  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 13s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   1m 13s |  |  
hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 with 
JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 generated 0 new + 599 unchanged - 3 
fixed = 599 total (was 602)  |
   | +1 :green_heart: |  compile  |   1m  7s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   1m  7s |  |  
hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01
 with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 generated 0 new 
+ 583 unchanged - 3 fixed = 583 total (was 586)  |
   | +1 :green_heart: |  checkstyle  |   0m 40s |  |  
hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 25 unchanged - 1 
fixed = 25 total (was 26)  |
   | +1 :green_heart: |  mvnsite  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  15m  2s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   3m  2s |  |  the patch passed  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  | 148m 29s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2511/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | -1 :x: |  asflicense  |   0m 48s | 
[/patch-asflicense-problems.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2511/1/artifact/out/patch-asflicense-problems.txt)
 |  The patch generated 41 ASF License warnings.  |
   |  |   | 238m 12s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery |
   |   | hadoop.hdfs.TestLeaseRecoveryStriped |
   |   | hadoop.hdfs.server.namenode.TestFSImage |
   |   | hadoop.hdfs.server.namenode.TestUpgradeDomainBlockPlacementPolicy |
   |   | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks |
   |   | hadoop.hdfs.server.namenode.TestDefaultBlockPlacementPolicy |
   |   | hadoop.hdfs.TestSafeMode |
   |   | hadoop.hdfs.server.namenode.TestAuditLogs |
   |   | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits |
   |   | hadoop.hdfs.TestRollingUpgradeDowngrade |
   |   | hadoop.hdfs.server.namenode.TestFsck |
   |   | hadoop.hdfs.TestSetrepDecreasing |
   |   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
   |   | hadoop.hdfs.server.namenode.TestBlockPlacementPolicyRackFaultTolerant |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | hadoop.hdfs.server.namenode.TestAddBlock |
   |   | h

[GitHub] [hadoop] hadoop-yetus commented on pull request #2494: YARN-10380: Import logic of multi-node allocation in CapacityScheduler

2020-12-01 Thread GitBox


hadoop-yetus commented on pull request #2494:
URL: https://github.com/apache/hadoop/pull/2494#issuecomment-737012784


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 44s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  36m 55s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  7s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   0m 57s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 46s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 57s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 59s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 58s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m 55s |  |  trunk passed  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 58s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 57s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   0m 57s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 48s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 48s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 51s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  17m 28s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   1m 59s |  |  the patch passed  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  |  89m 24s | 
[/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2494/6/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt)
 |  hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 32s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 180m  2s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerAutoQueueCreation
 |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2494/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2494 |
   | JIRA Issue | YARN-10380 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 0bf31d9a92e5 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / fa773a83265 |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2494/6/testReport/ |
   | Max. process+thread count | 854 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
   | Console output | 
https://ci-ha

[jira] [Work logged] (HADOOP-16080) hadoop-aws does not work with hadoop-client-api

2020-12-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16080?focusedWorklogId=518777&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-518777
 ]

ASF GitHub Bot logged work on HADOOP-16080:
---

Author: ASF GitHub Bot
Created on: 02/Dec/20 05:58
Start Date: 02/Dec/20 05:58
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2510:
URL: https://github.com/apache/hadoop/pull/2510#issuecomment-737010726


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   1m 35s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ branch-3.2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 51s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 56s |  branch-3.2 passed  |
   | +1 :green_heart: |  compile  |  15m 19s |  branch-3.2 passed  |
   | +1 :green_heart: |  checkstyle  |   2m 40s |  branch-3.2 passed  |
   | +1 :green_heart: |  mvnsite  |   2m 27s |  branch-3.2 passed  |
   | +1 :green_heart: |  shadedclient  |  19m 37s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  5s |  branch-3.2 passed  |
   | +0 :ok: |  spotbugs  |   0m 56s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 32s |  branch-3.2 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 20s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 46s |  the patch passed  |
   | +1 :green_heart: |  compile  |  15m 18s |  the patch passed  |
   | +1 :green_heart: |  javac  |  15m 18s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   3m 27s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 47s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 43s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 27s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   5m  5s |  the patch passed  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |  10m 14s |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  unit  |   5m  5s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  unit  |   0m 35s |  hadoop-aliyun in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   1m 26s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 141m 26s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.security.TestFixKerberosTicketOrder |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2510/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2510 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 23f531e1c395 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.2 / 625f85f |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~16.04-b01 |
   | unit | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2510/2/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2510/2/testReport/ |
   | Max. process+thread count | 1509 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
hadoop-tools/hadoop-aliyun U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2510/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 518




[GitHub] [hadoop] aajisaka commented on pull request #2490: YARN-10499. TestRouterWebServiceREST fails

2020-12-01 Thread GitBox


aajisaka commented on pull request #2490:
URL: https://github.com/apache/hadoop/pull/2490#issuecomment-737008693


   @pbacsko Would you review this?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17338) Intermittent S3AInputStream failures: Premature end of Content-Length delimited message body etc

2020-12-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17338?focusedWorklogId=518765&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-518765
 ]

ASF GitHub Bot logged work on HADOOP-17338:
---

Author: ASF GitHub Bot
Created on: 02/Dec/20 04:44
Start Date: 02/Dec/20 04:44
Worklog Time Spent: 10m 
  Work Description: yzhangal commented on pull request #2455:
URL: https://github.com/apache/hadoop/pull/2455#issuecomment-736986389


   Many thanks, Steve. I see the challenge of supporting these tests upstream.
   
   I reran the hadoop-tools/hadoop-aws tests with the command "mvn verify -Dscale 
-DtestsThreadCount=8" and only got the following failures (some of the earlier 
failures I saw did not recur):
   
   [ERROR] Errors:
   [ERROR]   
ITestS3AContractRootDir.testListEmptyRootDirectory:82->AbstractContractRootDirectoryTest.testListEmptyRootDirectory:196
 » TestTimedOut
   [ERROR]   
ITestS3AContractRootDir>AbstractContractRootDirectoryTest.testRecursiveRootListing:265
 » TestTimedOut
   [ERROR]   
ITestS3AContractRootDir>AbstractContractRootDirectoryTest.testRmEmptyRootDirNonRecursive:101
 » TestTimedOut
   [ERROR]   ITestS3ATemporaryCredentials.testSTS:133 » AccessDenied : request 
session cred...
   [INFO]
   [ERROR] Tests run: 260, Failures: 0, Errors: 4, Skipped: 45
   
   I wonder whether the failed tests pass for you.
   
   Thanks.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 518765)
Time Spent: 2h 20m  (was: 2h 10m)

> Intermittent S3AInputStream failures: Premature end of Content-Length 
> delimited message body etc
> 
>
> Key: HADOOP-17338
> URL: https://issues.apache.org/jira/browse/HADOOP-17338
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-17338.001.patch
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> We are seeing the following two kinds of intermittent exceptions when using 
> S3AInputSteam:
> 1.
> {code:java}
> Caused by: com.amazonaws.thirdparty.apache.http.ConnectionClosedException: 
> Premature end of Content-Length delimited message body (expected: 156463674; 
> received: 150001089
> at 
> com.amazonaws.thirdparty.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:178)
> at 
> com.amazonaws.thirdparty.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135)
> at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at 
> com.amazonaws.services.s3.internal.S3AbortableInputStream.read(S3AbortableInputStream.java:125)
> at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at 
> com.amazonaws.util.LengthCheckInputStream.read(LengthCheckInputStream.java:107)
> at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:82)
> at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:181)
> at java.io.DataInputStream.readFully(DataInputStream.java:195)
> at java.io.DataInputStream.readFully(DataInputStream.java:169)
> at 
> org.apache.parquet.hadoop.ParquetFileReader$ConsecutiveChunkList.readAll(ParquetFileReader.java:779)
> at 
> org.apache.parquet.hadoop.ParquetFileReader.readNextRowGroup(ParquetFileReader.java:511)
> at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:130)
> at 
> org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:214)
> at 
> org.apache.parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:227)
> at 
> org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.next(ParquetRecordReaderWrapper.java:208)
> at 
> org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.next(Pa
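
For context: the usual mitigation for this class of intermittent failure is to treat the premature close as a transient error, reopen the object stream at the current offset, and retry the read. The sketch below illustrates that pattern with assumed names; it is not the actual HADOOP-17338 patch:

{code:java}
import java.io.IOException;

// Illustrative retry-on-premature-close pattern. Reopenable is a stand-in for
// an input stream that can re-establish its HTTP connection at a given offset.
public class RetryingRead {
  interface Reopenable {
    int read(byte[] buf, int off, int len) throws IOException;
    void reopenAt(long pos) throws IOException;
    long getPos();
  }

  static int readWithRetry(Reopenable in, byte[] buf, int off, int len,
      int maxAttempts) throws IOException {
    IOException last = null;
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      try {
        return in.read(buf, off, len);
      } catch (IOException e) {
        // e.g. ConnectionClosedException: premature end of message body
        last = e;
        in.reopenAt(in.getPos()); // re-establish the connection and retry
      }
    }
    throw last;
  }
}
{code}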




[jira] [Created] (HADOOP-17405) Upgrade Yetus to 0.13.0

2020-12-01 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-17405:
--

 Summary: Upgrade Yetus to 0.13.0
 Key: HADOOP-17405
 URL: https://issues.apache.org/jira/browse/HADOOP-17405
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Akira Ajisaka
Assignee: Akira Ajisaka


After HADOOP-17262 and HADOOP-17297, Hadoop is using a non-release version of 
Apache Yetus. It should be upgraded to 0.13.0 once that version is released.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2501: HDFS-15648. TestFileChecksum should be parameterized.

2020-12-01 Thread GitBox


hadoop-yetus commented on pull request #2501:
URL: https://github.com/apache/hadoop/pull/2501#issuecomment-736974344


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   2m  3s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  1s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  37m 29s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 36s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   1m 22s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 55s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 30s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 25s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 54s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   4m  1s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 58s |  |  trunk passed  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   1m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 16s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 48s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 28s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | -1 :x: |  shadedclient  |  24m 26s |  |  patch has errors when building 
and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   3m 50s |  |  the patch passed  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  | 156m 48s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2501/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 43s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 266m 53s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStream |
   |   | hadoop.hdfs.TestAppendSnapshotTruncate |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
   |   | hadoop.hdfs.server.namenode.TestNameNodeMXBean |
   |   | hadoop.hdfs.TestBlocksScheduledCounter |
   |   | hadoop.hdfs.TestDFSInotifyEventInputStream |
   |   | hadoop.hdfs.TestStateAlignmentContextWithHA |
   |   | hadoop.hdfs.TestDFSStripedInputStream |
   |   | hadoop.hdfs.client.impl.TestBlockReaderLocal |
   |   | hadoop.hdfs.TestFileAppend3 |
   |   | hadoop.hdfs.TestBlockStoragePolicy |
   |   | hadoop.hdfs.TestDFSFinalize |
   |   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
   |   | hadoop.hdfs.TestErasureCodingPolicies |
   |   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
   |   | hadoop.hdfs.TestErasureCodingMultipleRacks |
   |   | hadoop.hdfs.TestLeaseRecovery |
   |   | hadoop.hdfs.TestDFSClientSocketSize |
   |   | hadoop.hdfs.TestHDFSServerPorts |
   |   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
   |   | hadoop.hdfs.TestDFSOutputStream |
   |   | hadoop.hdfs.TestDistributedFileSystemWithECFile |
   |   | hadoop.hdfs.TestLeaseRecovery2 |
   |   | hadoop.hdfs.TestMaintenanceState |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2501/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2501 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite un

[GitHub] [hadoop] hadoop-yetus commented on pull request #2483: HDFS-14904. Option to let Balancer prefer top used nodes in each iteration.

2020-12-01 Thread GitBox


hadoop-yetus commented on pull request #2483:
URL: https://github.com/apache/hadoop/pull/2483#issuecomment-736972046


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  5s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  35m 26s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   1m 10s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 51s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 19s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 10s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 26s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m 14s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 12s |  |  trunk passed  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   1m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 11s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 55s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  21m  8s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   3m 13s |  |  the patch passed  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  | 133m 39s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2483/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 52s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 235m 57s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   |   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
   |   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
   |   | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover |
   |   | hadoop.hdfs.server.namenode.TestNameNodeMXBean |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2483/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2483 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 6fa24d299f87 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / fa773a83265 |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2483/4/testReport/ |
   | Max. process+thread count | 2754 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2483/4/console |

[jira] [Work logged] (HADOOP-16080) hadoop-aws does not work with hadoop-client-api

2020-12-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16080?focusedWorklogId=518753&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-518753
 ]

ASF GitHub Bot logged work on HADOOP-16080:
---

Author: ASF GitHub Bot
Created on: 02/Dec/20 03:45
Start Date: 02/Dec/20 03:45
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2510:
URL: https://github.com/apache/hadoop/pull/2510#issuecomment-736971017


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  16m  8s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ branch-3.2 Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 52s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 54s |  branch-3.2 passed  |
   | +1 :green_heart: |  compile  |  15m 31s |  branch-3.2 passed  |
   | +1 :green_heart: |  checkstyle  |   2m 39s |  branch-3.2 passed  |
   | +1 :green_heart: |  mvnsite  |   2m 23s |  branch-3.2 passed  |
   | +1 :green_heart: |  shadedclient  |  19m 23s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  1s |  branch-3.2 passed  |
   | +0 :ok: |  spotbugs  |   0m 57s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 39s |  branch-3.2 passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 19s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 45s |  the patch passed  |
   | +1 :green_heart: |  compile  |  14m 30s |  the patch passed  |
   | +1 :green_heart: |  javac  |  14m 30s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   2m 36s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 26s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  13m 28s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 59s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   3m 58s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 14s |  hadoop-common in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   4m 44s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  unit  |   0m 26s |  hadoop-aliyun in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 44s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 147m 22s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2510/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2510 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 4312a09ef128 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.2 / 625f85f |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~16.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2510/1/testReport/ |
   | Max. process+thread count | 1389 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
hadoop-tools/hadoop-aliyun U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2510/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 518753)
Time Spent: 20m  (was: 10m)

> hadoop-aws does not work with hadoop-client-api
> ---
>
> Key: HADOOP-16080
> URL: https://issues.apache.org/jira/browse/HADOOP-16080
>  




[GitHub] [hadoop] qizhu-lucas commented on pull request #2494: YARN-10380: Import logic of multi-node allocation in CapacityScheduler

2020-12-01 Thread GitBox


qizhu-lucas commented on pull request #2494:
URL: https://github.com/apache/hadoop/pull/2494#issuecomment-736963575


   Thanks @jiwq for the review.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] qizhu-lucas commented on a change in pull request #2494: YARN-10380: Import logic of multi-node allocation in CapacityScheduler

2020-12-01 Thread GitBox


qizhu-lucas commented on a change in pull request #2494:
URL: https://github.com/apache/hadoop/pull/2494#discussion_r533865871



##
File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
##
@@ -544,44 +544,73 @@ static void schedule(CapacityScheduler cs) throws InterruptedException{
     if(nodeSize == 0) {
       return;
     }
-    int start = random.nextInt(nodeSize);
+    if (!cs.multiNodePlacementEnabled) {
+      // First randomize the start point
+      int current = 0;
+      int start = random.nextInt(nodeSize);
 
-    // To avoid too verbose DEBUG logging, only print debug log once for
-    // every 10 secs.
-    boolean printSkipedNodeLogging = false;
-    if (Time.monotonicNow() / 1000 % 10 == 0) {
-      printSkipedNodeLogging = (!printedVerboseLoggingForAsyncScheduling);
-    } else {
-      printedVerboseLoggingForAsyncScheduling = false;
-    }
+      // To avoid too verbose DEBUG logging, only print debug log once for
+      // every 10 secs.
+      boolean printSkipedNodeLogging = false;
+      if (Time.monotonicNow() / 1000 % 10 == 0) {
+        printSkipedNodeLogging = (!printedVerboseLoggingForAsyncScheduling);
+      } else {
+        printedVerboseLoggingForAsyncScheduling = false;
+      }
+
+      // Allocate containers of node [start, end)
+      for (FiCaSchedulerNode node : nodes) {
+        if (current++ >= start) {
+          if (shouldSkipNodeSchedule(node, cs, printSkipedNodeLogging)) {
+            continue;
+          }
+          cs.allocateContainersToNode(node.getNodeID(), false);
+        }
+      }
 
-    // Allocate containers of node [start, end)
-    for (FiCaSchedulerNode node : nodes) {
-      if (current++ >= start) {
+      current = 0;
+
+      // Allocate containers of node [0, start)
+      for (FiCaSchedulerNode node : nodes) {
+        if (current++ > start) {
+          break;
+        }
         if (shouldSkipNodeSchedule(node, cs, printSkipedNodeLogging)) {
           continue;
         }
         cs.allocateContainersToNode(node.getNodeID(), false);
       }
-    }
-
-    current = 0;
 
-    // Allocate containers of node [0, start)
-    for (FiCaSchedulerNode node : nodes) {
-      if (current++ > start) {
-        break;
+      if (printSkipedNodeLogging) {
+        printedVerboseLoggingForAsyncScheduling = true;
       }
-      if (shouldSkipNodeSchedule(node, cs, printSkipedNodeLogging)) {
-        continue;
+    } else {
+      //Get all partitions

Review comment:
   Fixed it.

##
File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
##
@@ -531,11 +531,11 @@ private static boolean shouldSkipNodeSchedule(FiCaSchedulerNode node,
 
   /**
    * Schedule on all nodes by starting at a random point.
+   * Schedule on all partitions by starting at a random partition
+   * when multiNodePlacementEnabled is true.
    * @param cs
    */
   static void schedule(CapacityScheduler cs) throws InterruptedException{
-    // First randomize the start point
-    int current = 0;

Review comment:
   Fixed it.
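
   The non-multi-node branch above keeps the long-standing pattern: pick a random start index, sweep the nodes in [start, end), then wrap around to cover the rest. A minimal standalone sketch of that wrap-around sweep (simplified to a modulo walk; the node type, skip logic, and CapacityScheduler wiring are stand-ins):

{noformat}
import java.util.List;
import java.util.Random;

class RandomStartSweepSketch {
  private static final Random RANDOM = new Random();

  // Visit every node exactly once, starting at a random index and wrapping around.
  static void sweep(List<String> nodes) {
    int size = nodes.size();
    if (size == 0) {
      return;
    }
    int start = RANDOM.nextInt(size);
    for (int i = 0; i < size; i++) {
      String node = nodes.get((start + i) % size);
      allocate(node); // stands in for cs.allocateContainersToNode(node.getNodeID(), false)
    }
  }

  private static void allocate(String node) {
    System.out.println("allocating on " + node);
  }

  public static void main(String[] args) {
    sweep(List.of("node1", "node2", "node3"));
  }
}
{noformat}

   Randomizing the start point spreads allocations across the cluster instead of always favoring the nodes that happen to sort first.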





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] qizhu-lucas commented on a change in pull request #2494: YARN-10380: Import logic of multi-node allocation in CapacityScheduler

2020-12-01 Thread GitBox


qizhu-lucas commented on a change in pull request #2494:
URL: https://github.com/apache/hadoop/pull/2494#discussion_r533865786



##
File path: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
##
@@ -544,44 +544,73 @@ static void schedule(CapacityScheduler cs) throws InterruptedException{
     if(nodeSize == 0) {
       return;
     }
-    int start = random.nextInt(nodeSize);
+    if (!cs.multiNodePlacementEnabled) {
+      // First randomize the start point
+      int current = 0;
+      int start = random.nextInt(nodeSize);
 
-    // To avoid too verbose DEBUG logging, only print debug log once for
-    // every 10 secs.
-    boolean printSkipedNodeLogging = false;
-    if (Time.monotonicNow() / 1000 % 10 == 0) {
-      printSkipedNodeLogging = (!printedVerboseLoggingForAsyncScheduling);
-    } else {
-      printedVerboseLoggingForAsyncScheduling = false;
-    }
+      // To avoid too verbose DEBUG logging, only print debug log once for
+      // every 10 secs.
+      boolean printSkipedNodeLogging = false;
+      if (Time.monotonicNow() / 1000 % 10 == 0) {
+        printSkipedNodeLogging = (!printedVerboseLoggingForAsyncScheduling);
+      } else {
+        printedVerboseLoggingForAsyncScheduling = false;
+      }
+
+      // Allocate containers of node [start, end)
+      for (FiCaSchedulerNode node : nodes) {
+        if (current++ >= start) {
+          if (shouldSkipNodeSchedule(node, cs, printSkipedNodeLogging)) {
+            continue;
+          }
+          cs.allocateContainersToNode(node.getNodeID(), false);
+        }
+      }
 
-    // Allocate containers of node [start, end)
-    for (FiCaSchedulerNode node : nodes) {
-      if (current++ >= start) {
+      current = 0;
+
+      // Allocate containers of node [0, start)
+      for (FiCaSchedulerNode node : nodes) {
+        if (current++ > start) {
+          break;
+        }
         if (shouldSkipNodeSchedule(node, cs, printSkipedNodeLogging)) {
           continue;
         }
         cs.allocateContainersToNode(node.getNodeID(), false);
       }
-    }
-
-    current = 0;
 
-    // Allocate containers of node [0, start)
-    for (FiCaSchedulerNode node : nodes) {
-      if (current++ > start) {
-        break;
+      if (printSkipedNodeLogging) {
+        printedVerboseLoggingForAsyncScheduling = true;
      }
-      if (shouldSkipNodeSchedule(node, cs, printSkipedNodeLogging)) {
-        continue;
+    } else {
+      //Get all partitions
+      List<String> partitions = cs.nodeTracker.getPartitions();
+      int partitionSize = partitions.size();
+      // First randomize the start point
+      int start = random.nextInt(partitionSize);
+      int current = 0;
+      // Allocate containers of partition [start, end)
+      for (String partititon : partitions) {
+        if (current++ >= start) {
+          cs.allocateContainersToNode(cs.getCandidateNodeSet(partititon),

Review comment:
   Fixed it.
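
   The multi-node branch applies the same randomized start to partitions instead of individual nodes. A sketch of that two-loop shape under the same assumptions (plain strings stand in for partitions and the scheduler calls):

{noformat}
import java.util.List;
import java.util.Random;

class PartitionSweepSketch {
  private static final Random RANDOM = new Random();

  // Mirrors the patch's two loops: partitions [start, end) first, then [0, start).
  static void allocateAcrossPartitions(List<String> partitions) {
    int size = partitions.size();
    if (size == 0) {
      return;
    }
    int start = RANDOM.nextInt(size);
    int current = 0;
    for (String partition : partitions) {
      if (current++ >= start) {
        allocateToPartition(partition);
      }
    }
    current = 0;
    for (String partition : partitions) {
      if (current++ >= start) {
        break;
      }
      allocateToPartition(partition);
    }
  }

  private static void allocateToPartition(String partition) {
    // Stands in for cs.allocateContainersToNode(cs.getCandidateNodeSet(partition), false)
    System.out.println("allocating on partition " + partition);
  }

  public static void main(String[] args) {
    allocateAcrossPartitions(List.of("default", "gpu", "batch"));
  }
}
{noformat}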





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] amahussein opened a new pull request #2511: HDFS-15704. Mitigate lease monitor's rapid infinite loop.

2020-12-01 Thread GitBox


amahussein opened a new pull request #2511:
URL: https://github.com/apache/hadoop/pull/2511


   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16080) hadoop-aws does not work with hadoop-client-api

2020-12-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-16080:

Labels: pull-request-available  (was: )

> hadoop-aws does not work with hadoop-client-api
> ---
>
> Key: HADOOP-16080
> URL: https://issues.apache.org/jira/browse/HADOOP-16080
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.2.0, 3.1.1
>Reporter: Keith Turner
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I attempted to use Accumulo and S3a with the following jars on the classpath.
>  * hadoop-client-api-3.1.1.jar
>  * hadoop-client-runtime-3.1.1.jar
>  * hadoop-aws-3.1.1.jar
> This failed with the following exception.
> {noformat}
> Exception in thread "init" java.lang.NoSuchMethodError: 
> org.apache.hadoop.util.SemaphoredDelegatingExecutor.(Lcom/google/common/util/concurrent/ListeningExecutorService;IZ)V
> at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:769)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1169)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1149)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1108)
> at org.apache.hadoop.fs.FileSystem.createNewFile(FileSystem.java:1413)
> at 
> org.apache.accumulo.server.fs.VolumeManagerImpl.createNewFile(VolumeManagerImpl.java:184)
> at 
> org.apache.accumulo.server.init.Initialize.initDirs(Initialize.java:479)
> at 
> org.apache.accumulo.server.init.Initialize.initFileSystem(Initialize.java:487)
> at 
> org.apache.accumulo.server.init.Initialize.initialize(Initialize.java:370)
> at org.apache.accumulo.server.init.Initialize.doInit(Initialize.java:348)
> at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:967)
> at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:129)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}
> The problem is that {{S3AFileSystem.create()}} looks for 
> {{SemaphoredDelegatingExecutor(com.google.common.util.concurrent.ListeningExecutorService)}}
>  which does not exist in hadoop-client-api-3.1.1.jar.  What does exist is 
> {{SemaphoredDelegatingExecutor(org.apache.hadoop.shaded.com.google.common.util.concurrent.ListeningExecutorService)}}.
> To work around this issue I created a version of hadoop-aws-3.1.1.jar that 
> relocated references to Guava.
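
For reference, the relocation workaround described above can be expressed with the maven-shade-plugin when rebuilding hadoop-aws. This is a sketch only; the shaded prefix shown is taken from the error message above and must match the prefix actually used by the hadoop-client-api build in question:

{noformat}
<!-- Sketch: rebuild hadoop-aws with Guava references relocated to match
     hadoop-client-api's shaded Guava. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>com.google.common</pattern>
            <shadedPattern>org.apache.hadoop.shaded.com.google.common</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
{noformat}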



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-16080) hadoop-aws does not work with hadoop-client-api

2020-12-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16080?focusedWorklogId=518716&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-518716
 ]

ASF GitHub Bot logged work on HADOOP-16080:
---

Author: ASF GitHub Bot
Created on: 02/Dec/20 01:17
Start Date: 02/Dec/20 01:17
Worklog Time Spent: 10m 
  Work Description: sunchao opened a new pull request #2510:
URL: https://github.com/apache/hadoop/pull/2510


   This removes `ListenableFuture` and `ListeningExecutorService` from `SemaphoredDelegatingExecutor`, so that `hadoop-aws` and `hadoop-aliyun` can consume it from `hadoop-client-api`.
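
   The reason removing those types fixes the mismatch: once a Guava type appears in a public constructor or return type, the shaded and unshaded builds expose different method descriptors, which is exactly the NoSuchMethodError reported in this issue. A minimal sketch (illustrative names, not the actual SemaphoredDelegatingExecutor code) of keeping only java.util.concurrent types in the API surface:

{noformat}
import java.util.concurrent.Callable;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch: expose only JDK types in public signatures so callers are
// immune to Guava shading/relocation.
class DelegatingExecutorSketch {
  private final ExecutorService delegate; // JDK type, no Guava in the signature

  DelegatingExecutorSketch(ExecutorService delegate) {
    this.delegate = delegate;
  }

  <T> CompletableFuture<T> submit(Callable<T> task) {
    CompletableFuture<T> result = new CompletableFuture<>();
    delegate.execute(() -> {
      try {
        result.complete(task.call());
      } catch (Exception e) {
        result.completeExceptionally(e);
      }
    });
    return result;
  }

  public static void main(String[] args) {
    ExecutorService pool = Executors.newFixedThreadPool(2);
    new DelegatingExecutorSketch(pool).submit(() -> 42).thenAccept(System.out::println);
    pool.shutdown();
  }
}
{noformat}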



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 518716)
Remaining Estimate: 0h
Time Spent: 10m

> hadoop-aws does not work with hadoop-client-api
> ---
>
> Key: HADOOP-16080
> URL: https://issues.apache.org/jira/browse/HADOOP-16080
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.2.0, 3.1.1
>Reporter: Keith Turner
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I attempted to use Accumulo and S3a with the following jars on the classpath.
>  * hadoop-client-api-3.1.1.jar
>  * hadoop-client-runtime-3.1.1.jar
>  * hadoop-aws-3.1.1.jar
> This failed with the following exception.
> {noformat}
> Exception in thread "init" java.lang.NoSuchMethodError: 
> org.apache.hadoop.util.SemaphoredDelegatingExecutor.(Lcom/google/common/util/concurrent/ListeningExecutorService;IZ)V
> at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:769)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1169)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1149)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1108)
> at org.apache.hadoop.fs.FileSystem.createNewFile(FileSystem.java:1413)
> at 
> org.apache.accumulo.server.fs.VolumeManagerImpl.createNewFile(VolumeManagerImpl.java:184)
> at 
> org.apache.accumulo.server.init.Initialize.initDirs(Initialize.java:479)
> at 
> org.apache.accumulo.server.init.Initialize.initFileSystem(Initialize.java:487)
> at 
> org.apache.accumulo.server.init.Initialize.initialize(Initialize.java:370)
> at org.apache.accumulo.server.init.Initialize.doInit(Initialize.java:348)
> at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:967)
> at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:129)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}
> The problem is that {{S3AFileSystem.create()}} looks for 
> {{SemaphoredDelegatingExecutor(com.google.common.util.concurrent.ListeningExecutorService)}}
>  which does not exist in hadoop-client-api-3.1.1.jar.  What does exist is 
> {{SemaphoredDelegatingExecutor(org.apache.hadoop.shaded.com.google.common.util.concurrent.ListeningExecutorService)}}.
> To work around this issue I created a version of hadoop-aws-3.1.1.jar that 
> relocated references to Guava.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sunchao opened a new pull request #2510: HADOOP-16080. hadoop-aws does not work with hadoop-client-api

2020-12-01 Thread GitBox


sunchao opened a new pull request #2510:
URL: https://github.com/apache/hadoop/pull/2510


   This removes `ListenableFuture` and `ListeningExecutorService` from `SemaphoredDelegatingExecutor`, so that `hadoop-aws` and `hadoop-aliyun` can consume it from `hadoop-client-api`.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] iwasakims commented on a change in pull request #2501: HDFS-15648. TestFileChecksum should be parameterized.

2020-12-01 Thread GitBox


iwasakims commented on a change in pull request #2501:
URL: https://github.com/apache/hadoop/pull/2501#discussion_r533793171



##
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileChecksum.java
##
@@ -77,6 +81,31 @@
   private String stripedFile2 = ecDir + "/stripedFileChecksum2";
   private String replicatedFile = "/replicatedFileChecksum";
 
+  private String checksumCombineMode;
+  private boolean expectComparableStripedAndReplicatedFiles;
+  private boolean expectComparableDifferentBlockSizeReplicatedFiles;

Review comment:
   I removed the boolean params and updated the issue/PR title.
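
   For readers following along, parameterizing the suite over the checksum combine modes follows the standard JUnit 4 pattern; a sketch (simplified, not the actual TestFileChecksum code) using the two values that dfs.checksum.combine.mode accepts:

{noformat}
import java.util.Arrays;
import java.util.Collection;

import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;

// Sketch: run the same checks once per checksum combine mode.
@RunWith(Parameterized.class)
public class TestChecksumCombineModeSketch {

  @Parameterized.Parameters(name = "combineMode={0}")
  public static Collection<Object[]> params() {
    return Arrays.asList(new Object[][] {{"MD5MD5CRC"}, {"COMPOSITE_CRC"}});
  }

  private final String checksumCombineMode;

  public TestChecksumCombineModeSketch(String checksumCombineMode) {
    this.checksumCombineMode = checksumCombineMode;
  }

  @Test
  public void testModeIsSet() {
    // Stand-in for setting dfs.checksum.combine.mode and exercising checksums.
    Assert.assertNotNull(checksumCombineMode);
  }
}
{noformat}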





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-17402) Add GCS FS impl reference to core-default.xml

2020-12-01 Thread Rafal Wojdyla (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17241935#comment-17241935
 ] 

Rafal Wojdyla edited comment on HADOOP-17402 at 12/1/20, 11:22 PM:
---

[~ste...@apache.org] thanks for the links. I'm with you on the long-term vision. In the meantime, though, is there something we can do to bring the GCS connector on par with S3 (specifically the {{core-default}} config)? I'm mostly thinking of pyspark users, for whom the Java ecosystem may be a puzzle. Spark/pyspark loads {{core-default}} from {{hadoop-common}}. AFAIU, in the pyspark context the auto service doesn't actually register the {{gs}} scheme (at least in the context where the connector jar is loaded via `spark.jars`, which is likely), so Spark users are forced to add the config manually.

One might argue that adding the config to {{core-default}} would still result in a missing class error, but at least it would look the same as S3, and it would save on extra config. What do you think?


was (Author: ravwojdyla):
[~ste...@apache.org] thanks for the links. I'm with you on the long-term vision. In the meantime, though, is there something we can do to bring the GCS connector on par with S3 (specifically the {{core-default}} config)? I'm mostly thinking of pyspark users, for whom the Java ecosystem may be a puzzle. Spark/pyspark loads {{core-default}} from {{hadoop-common}}. AFAIU, in the pyspark context the auto service doesn't actually register the {{gs}} scheme, so Spark users are forced to add the config manually.

One might argue that adding the config to {{core-default}} would still result in a missing class error, but at least it would look the same as S3, and it would save on extra config. What do you think?

> Add GCS FS impl reference to core-default.xml
> -
>
> Key: HADOOP-17402
> URL: https://issues.apache.org/jira/browse/HADOOP-17402
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Rafal Wojdyla
>Priority: Major
>
> Akin to current S3 default configuration add GCS configuration, specifically 
> to declare the GCS implementation. [GCS 
> connector|https://cloud.google.com/dataproc/docs/concepts/connectors/cloud-storage].
>  Has this not been done since the GCS connector is not part of the hadoop/ASF 
> codebase, or is there any other blocker?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-17402) Add GCS FS impl reference to core-default.xml

2020-12-01 Thread Rafal Wojdyla (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17241935#comment-17241935
 ] 

Rafal Wojdyla edited comment on HADOOP-17402 at 12/1/20, 11:22 PM:
---

[~ste...@apache.org] thanks for the links. I'm with you on the long-term vision. In the meantime, though, is there something we can do to bring the GCS connector on par with S3 (specifically the {{core-default}} config)? I'm mostly thinking of pyspark users, for whom the Java ecosystem may be a puzzle. Spark/pyspark loads {{core-default}} from {{hadoop-common}}. AFAIU, in the pyspark context the auto service doesn't actually register the {{gs}} scheme (at least in the context where the connector jar is loaded via {{spark.jars}}, which is likely), so Spark users are forced to add the config manually.

One might argue that adding the config to {{core-default}} would still result in a missing class error, but at least it would look the same as S3, and it would save on extra config. What do you think?


was (Author: ravwojdyla):
[~ste...@apache.org] thanks for the links. I'm with you on the long-term vision. In the meantime, though, is there something we can do to bring the GCS connector on par with S3 (specifically the {{core-default}} config)? I'm mostly thinking of pyspark users, for whom the Java ecosystem may be a puzzle. Spark/pyspark loads {{core-default}} from {{hadoop-common}}. AFAIU, in the pyspark context the auto service doesn't actually register the {{gs}} scheme (at least in the context where the connector jar is loaded via `spark.jars`, which is likely), so Spark users are forced to add the config manually.

One might argue that adding the config to {{core-default}} would still result in a missing class error, but at least it would look the same as S3, and it would save on extra config. What do you think?

> Add GCS FS impl reference to core-default.xml
> -
>
> Key: HADOOP-17402
> URL: https://issues.apache.org/jira/browse/HADOOP-17402
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Rafal Wojdyla
>Priority: Major
>
> Akin to current S3 default configuration add GCS configuration, specifically 
> to declare the GCS implementation. [GCS 
> connector|https://cloud.google.com/dataproc/docs/concepts/connectors/cloud-storage].
>  Has this not been done since the GCS connector is not part of the hadoop/ASF 
> codebase, or is there any other blocker?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17402) Add GCS FS impl reference to core-default.xml

2020-12-01 Thread Rafal Wojdyla (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17241935#comment-17241935
 ] 

Rafal Wojdyla commented on HADOOP-17402:


[~ste...@apache.org] thanks for the links. I'm with you on the long-term vision. In the meantime, though, is there something we can do to bring the GCS connector on par with S3 (specifically the {{core-default}} config)? I'm mostly thinking of pyspark users, for whom the Java ecosystem may be a puzzle. Spark/pyspark loads {{core-default}} from {{hadoop-common}}. AFAIU, in the pyspark context the auto service doesn't actually register the {{gs}} scheme, so Spark users are forced to add the config manually.

One might argue that adding the config to {{core-default}} would still result in a missing class error, but at least it would look the same as S3, and it would save on extra config. What do you think?
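
For comparison, the entry being requested would mirror the existing S3A declaration; a sketch of what it might look like in core-default.xml / core-site.xml (class names per the GCS connector documentation; verify against the connector version in use):

{noformat}
<property>
  <name>fs.gs.impl</name>
  <value>com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem</value>
  <description>The FileSystem implementation for gs: (GCS) URIs.</description>
</property>
<property>
  <name>fs.AbstractFileSystem.gs.impl</name>
  <value>com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS</value>
  <description>The AbstractFileSystem implementation for gs: (GCS) URIs.</description>
</property>
{noformat}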

> Add GCS FS impl reference to core-default.xml
> -
>
> Key: HADOOP-17402
> URL: https://issues.apache.org/jira/browse/HADOOP-17402
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Rafal Wojdyla
>Priority: Major
>
> Akin to current S3 default configuration add GCS configuration, specifically 
> to declare the GCS implementation. [GCS 
> connector|https://cloud.google.com/dataproc/docs/concepts/connectors/cloud-storage].
>  Has this not been done since the GCS connector is not part of the hadoop/ASF 
> codebase, or is there any other blocker?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17402) Add GCS FS impl reference to core-default.xml

2020-12-01 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17402?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17241893#comment-17241893
 ] 

Steve Loughran commented on HADOOP-17402:
-

> GCS connector has hadoop.fs.Filesystem service declared like here, is this 
> the right way?

Yes. It does force a load of the class, which may have side effects or even fail to load. It can also be very slow. Hence our interest in having some separate declaration.

The S3A error isn't as useful as you think: if the hadoop-aws JAR is present but the aws-sdk JAR isn't, you end up with different linkage problems. If you look at HADOOP-14132, we've discussed actually including in the service declaration a list of mandatory dependencies, which we could look for and fail with meaningful messages.

Take a look at my cloudstore JAR to see what I envisage. 
https://github.com/steveloughran/cloudstore
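
For context, the service declaration being discussed is the standard java.util.ServiceLoader mechanism: the connector JAR ships a resource file whose name is the interface and whose content lists the implementations, roughly (illustrative for the GCS connector):

{noformat}
# resource: META-INF/services/org.apache.hadoop.fs.FileSystem
com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem
{noformat}

Loading that class eagerly just to learn it handles gs:// URIs is the cost being called out above.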

> Add GCS FS impl reference to core-default.xml
> -
>
> Key: HADOOP-17402
> URL: https://issues.apache.org/jira/browse/HADOOP-17402
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Reporter: Rafal Wojdyla
>Priority: Major
>
> Akin to current S3 default configuration add GCS configuration, specifically 
> to declare the GCS implementation. [GCS 
> connector|https://cloud.google.com/dataproc/docs/concepts/connectors/cloud-storage].
>  Has this not been done since the GCS connector is not part of the hadoop/ASF 
> codebase, or is there any other blocker?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jbrennan333 merged pull request #2487: HDFS-15694. Avoid calling UpdateHeartBeatState inside DataNodeDescriptor

2020-12-01 Thread GitBox


jbrennan333 merged pull request #2487:
URL: https://github.com/apache/hadoop/pull/2487


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17404) ABFS: Piggyback flush on Append calls for short writes

2020-12-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17404?focusedWorklogId=518644&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-518644
 ]

ASF GitHub Bot logged work on HADOOP-17404:
---

Author: ASF GitHub Bot
Created on: 01/Dec/20 20:31
Start Date: 01/Dec/20 20:31
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2509:
URL: https://github.com/apache/hadoop/pull/2509#issuecomment-736801677


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 30s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 42s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 28s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 21s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 58s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 57s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 25s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 18s | 
[/diff-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2509/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 8 new + 7 unchanged - 0 
fixed = 15 total (was 7)  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  14m 56s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | -1 :x: |  findbugs  |   1m  0s | 
[/new-findbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2509/1/artifact/out/new-findbugs-hadoop-tools_hadoop-azure.html)
 |  hadoop-tools/hadoop-azure generated 1 new + 0 unchanged - 0 fixed = 1 total 
(was 0)  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 24s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 32s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  77m  3s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-tools/hadoop-azure |
   |  |  Null pointer dereference of op in 
org.apache.hadoop.fs.azurebfs.services.AbfsClient.appendBlobAppend(String, 
long, byte[], int, int, String) on exception path  Dereferenced at 
AbfsClient.java:in 
org.apache.hadoop.fs.azurebfs.services.AbfsClient.appendBlobAppend(String, 
long, byte[], int, int, String) on exception path  Dereferenced at 
AbfsClient.java:[line 417] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2509/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2509 |
   | Optional Tests | dupname asflicense comp

[GitHub] [hadoop] hadoop-yetus commented on pull request #2509: HADOOP-17404. ABFS: Small write - Merge append and flush

2020-12-01 Thread GitBox


hadoop-yetus commented on pull request #2509:
URL: https://github.com/apache/hadoop/pull/2509#issuecomment-736801677


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 30s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 42s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 28s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 21s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 58s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 57s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 25s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 18s | 
[/diff-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2509/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 8 new + 7 unchanged - 0 
fixed = 15 total (was 7)  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  14m 56s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | -1 :x: |  findbugs  |   1m  0s | 
[/new-findbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2509/1/artifact/out/new-findbugs-hadoop-tools_hadoop-azure.html)
 |  hadoop-tools/hadoop-azure generated 1 new + 0 unchanged - 0 fixed = 1 total 
(was 0)  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 24s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 32s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  77m  3s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-tools/hadoop-azure |
   |  |  Null pointer dereference of op in 
org.apache.hadoop.fs.azurebfs.services.AbfsClient.appendBlobAppend(String, 
long, byte[], int, int, String) on exception path  Dereferenced at 
AbfsClient.java:in 
org.apache.hadoop.fs.azurebfs.services.AbfsClient.appendBlobAppend(String, 
long, byte[], int, int, String) on exception path  Dereferenced at 
AbfsClient.java:[line 417] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2509/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2509 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux 34223cfd7dc4 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6a1d7d9ed25 |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/li

[jira] [Updated] (HADOOP-17404) ABFS: Piggyback flush on Append calls for short writes

2020-12-01 Thread Sneha Vijayarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Vijayarajan updated HADOOP-17404:
---
Status: Patch Available  (was: Open)

> ABFS: Piggyback flush on Append calls for short writes
> --
>
> Key: HADOOP-17404
> URL: https://issues.apache.org/jira/browse/HADOOP-17404
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When Hflush or Hsync APIs are called, a call is made to store backend to 
> commit the data that was appended. 
> If the data size written by Hadoop app is small, i.e. data size :
>  * before any of HFlush/HSync call is made or
>  * between 2 HFlush/Hsync API calls
> is less than write buffer size, 2 separate calls, one for append and another 
> for flush is made,
> Apps that do such small writes eventually end up with almost similar number 
> of calls for flush and append.
> This PR enables Flush to be piggybacked onto append call for such short write 
> scenarios.
>  
> NOTE: The changes is guarded over a config, and is disabled by default until 
> relevant supported changes is made available on all store production clusters.
> New Config added: fs.azure.write.enableappendwithflush



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17404) ABFS: Piggyback flush on Append calls for short writes

2020-12-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17404?focusedWorklogId=518607&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-518607
 ]

ASF GitHub Bot logged work on HADOOP-17404:
---

Author: ASF GitHub Bot
Created on: 01/Dec/20 19:13
Start Date: 01/Dec/20 19:13
Worklog Time Spent: 10m 
  Work Description: snvijaya opened a new pull request #2509:
URL: https://github.com/apache/hadoop/pull/2509


   When the Hflush or Hsync APIs are called, a call is made to the store backend to commit the data that was appended.
   
   If the data size written by the Hadoop app is small, i.e. the data written either before any HFlush/HSync call is made, or between 2 HFlush/Hsync API calls, is less than the write buffer size, 2 separate calls are made: one for append and another for flush.
   
   Apps that do such small writes eventually end up with an almost similar number of calls for flush and append.
   
   This commit enables flush to be piggybacked onto the append call for such short-write scenarios. It is guarded by the config "fs.azure.write.enableappendwithflush", which is off by default as it needs a relevant change in the backend to propagate.
   
   Tests are added asserting the number of requests made, request data sizes, file sizes post append+flush, and file contents for various combinations of append/flush/close, with and without the small write optimization.
   Existing tests in ITestAbfsNetworkStatistics asserting Http stats were rewritten for easier readability.
   
   (Test results are published at the end of the PR conversation tab.)
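
   Conceptually, the optimization collapses the append and flush round trips into one when everything still buffered fits below the buffer size. A minimal sketch of that decision (all names here are illustrative, not the real AbfsOutputStream internals):

{noformat}
import java.io.IOException;

// Sketch of the small-write decision; not the actual AbfsOutputStream code.
class SmallWriteSketch {
  private final boolean appendWithFlushEnabled; // mirrors fs.azure.write.enableappendwithflush
  private final int bufferSize;                 // the write buffer size
  private int bufferedBytes;                    // bytes written since the last commit

  SmallWriteSketch(boolean appendWithFlushEnabled, int bufferSize) {
    this.appendWithFlushEnabled = appendWithFlushEnabled;
    this.bufferSize = bufferSize;
  }

  // Called by hflush()/hsync(): commit whatever is buffered.
  void commit() throws IOException {
    if (bufferedBytes == 0) {
      return;
    }
    if (appendWithFlushEnabled && bufferedBytes < bufferSize) {
      appendToStore(bufferedBytes, true);   // one round trip: append carries flush=true
    } else {
      appendToStore(bufferedBytes, false);  // two round trips: append the data...
      flushStore();                         // ...then commit it explicitly
    }
    bufferedBytes = 0;
  }

  private void appendToStore(int len, boolean flush) throws IOException {
    // stand-in for the REST append call (optionally carrying the flush flag)
  }

  private void flushStore() throws IOException {
    // stand-in for the REST flush call
  }
}
{noformat}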



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 518607)
Remaining Estimate: 0h
Time Spent: 10m

> ABFS: Piggyback flush on Append calls for short writes
> --
>
> Key: HADOOP-17404
> URL: https://issues.apache.org/jira/browse/HADOOP-17404
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
> Fix For: 3.3.1
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When Hflush or Hsync APIs are called, a call is made to store backend to 
> commit the data that was appended. 
> If the data size written by Hadoop app is small, i.e. data size :
>  * before any of HFlush/HSync call is made or
>  * between 2 HFlush/Hsync API calls
> is less than write buffer size, 2 separate calls, one for append and another 
> for flush is made,
> Apps that do such small writes eventually end up with almost similar number 
> of calls for flush and append.
> This PR enables Flush to be piggybacked onto append call for such short write 
> scenarios.
>  
> NOTE: The changes is guarded over a config, and is disabled by default until 
> relevant supported changes is made available on all store production clusters.
> New Config added: fs.azure.write.enableappendwithflush



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17404) ABFS: Piggyback flush on Append calls for short writes

2020-12-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17404?focusedWorklogId=518609&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-518609
 ]

ASF GitHub Bot logged work on HADOOP-17404:
---

Author: ASF GitHub Bot
Created on: 01/Dec/20 19:13
Start Date: 01/Dec/20 19:13
Worklog Time Spent: 10m 
  Work Description: snvijaya commented on pull request #2509:
URL: https://github.com/apache/hadoop/pull/2509#issuecomment-736760506


   HNS-OAuth
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 90, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 398, Failures: 0, Errors: 0, Skipped: 68
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 256, Failures: 0, Errors: 0, Skipped: 165
   
   HNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 90, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 448, Failures: 0, Errors: 0, Skipped: 24
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 254, Failures: 0, Errors: 0, Skipped: 48
   
   NonHNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 90, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 420, Failures: 0, Errors: 0, Skipped: 233
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 256, Failures: 0, Errors: 0, Skipped: 48
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 518609)
Time Spent: 20m  (was: 10m)

> ABFS: Piggyback flush on Append calls for short writes
> --
>
> Key: HADOOP-17404
> URL: https://issues.apache.org/jira/browse/HADOOP-17404
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When Hflush or Hsync APIs are called, a call is made to store backend to 
> commit the data that was appended. 
> If the data size written by Hadoop app is small, i.e. data size :
>  * before any of HFlush/HSync call is made or
>  * between 2 HFlush/Hsync API calls
> is less than write buffer size, 2 separate calls, one for append and another 
> for flush is made,
> Apps that do such small writes eventually end up with almost similar number 
> of calls for flush and append.
> This PR enables Flush to be piggybacked onto append call for such short write 
> scenarios.
>  
> NOTE: The changes is guarded over a config, and is disabled by default until 
> relevant supported changes is made available on all store production clusters.
> New Config added: fs.azure.write.enableappendwithflush



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17404) ABFS: Piggyback flush on Append calls for short writes

2020-12-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-17404:

Labels: pull-request-available  (was: )

> ABFS: Piggyback flush on Append calls for short writes
> --
>
> Key: HADOOP-17404
> URL: https://issues.apache.org/jira/browse/HADOOP-17404
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When Hflush or Hsync APIs are called, a call is made to store backend to 
> commit the data that was appended. 
> If the data size written by Hadoop app is small, i.e. data size :
>  * before any of HFlush/HSync call is made or
>  * between 2 HFlush/Hsync API calls
> is less than write buffer size, 2 separate calls, one for append and another 
> for flush is made,
> Apps that do such small writes eventually end up with almost similar number 
> of calls for flush and append.
> This PR enables Flush to be piggybacked onto append call for such short write 
> scenarios.
>  
> NOTE: The changes is guarded over a config, and is disabled by default until 
> relevant supported changes is made available on all store production clusters.
> New Config added: fs.azure.write.enableappendwithflush



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] snvijaya commented on pull request #2509: HADOOP-17404. ABFS: Small write - Merge append and flush

2020-12-01 Thread GitBox


snvijaya commented on pull request #2509:
URL: https://github.com/apache/hadoop/pull/2509#issuecomment-736760506


   HNS-OAuth
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 90, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 398, Failures: 0, Errors: 0, Skipped: 68
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 256, Failures: 0, Errors: 0, Skipped: 165
   
   HNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 90, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 448, Failures: 0, Errors: 0, Skipped: 24
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 254, Failures: 0, Errors: 0, Skipped: 48
   
   NonHNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 90, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 420, Failures: 0, Errors: 0, Skipped: 233
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 256, Failures: 0, Errors: 0, Skipped: 48
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] snvijaya opened a new pull request #2509: HADOOP-17404. ABFS: Small write - Merge append and flush

2020-12-01 Thread GitBox


snvijaya opened a new pull request #2509:
URL: https://github.com/apache/hadoop/pull/2509


   When the Hflush or Hsync APIs are called, a call is made to the store backend to commit the data that was appended.
   
   If the data size written by the Hadoop app is small, i.e. the data written either before any HFlush/HSync call is made, or between 2 HFlush/Hsync API calls, is less than the write buffer size, 2 separate calls are made: one for append and another for flush.
   
   Apps that do such small writes eventually end up with an almost similar number of calls for flush and append.
   
   This commit enables flush to be piggybacked onto the append call for such short-write scenarios. It is guarded by the config "fs.azure.write.enableappendwithflush", which is off by default as it needs a relevant change in the backend to propagate.
   
   Tests are added asserting the number of requests made, request data sizes, file sizes post append+flush, and file contents for various combinations of append/flush/close, with and without the small write optimization.
   Existing tests in ITestAbfsNetworkStatistics asserting Http stats were rewritten for easier readability.
   
   (Test results are published at the end of the PR conversation tab.)



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17404) ABFS: Piggyback flush on Append calls for short writes

2020-12-01 Thread Sneha Vijayarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Vijayarajan updated HADOOP-17404:
---
Description: 
When the Hflush or Hsync APIs are called, a call is made to the store backend to commit the data that was appended.

If the data size written by the Hadoop app is small, i.e. the data written:
 * before any HFlush/HSync call is made, or
 * between 2 HFlush/Hsync API calls

is less than the write buffer size, then 2 separate calls are made: one for append and another for flush.

Apps that do such small writes eventually end up with an almost similar number of calls for flush and append.

This PR enables flush to be piggybacked onto the append call for such short-write scenarios.

NOTE: The change is guarded by a config, and is disabled by default until the relevant supporting change is made available on all store production clusters.

New Config added: fs.azure.write.enableappendwithflush

  was:
When the Hflush or Hsync APIs are called, a call is made to the store backend to commit the data that was appended.

If the data size written by the Hadoop app is small, i.e. the data written:
 * before any HFlush/HSync call is made, or
 * between 2 HFlush/Hsync API calls

is less than the write buffer size, then 2 separate calls are made: one for append and another for flush.

Apps that do such small writes eventually end up with an almost similar number of calls for flush and append.

This PR enables flush to be piggybacked onto the append call for such short-write scenarios.

NOTE: The change is guarded by a config, and is disabled by default until the relevant supporting change is made available on all store production clusters.


> ABFS: Piggyback flush on Append calls for short writes
> --
>
> Key: HADOOP-17404
> URL: https://issues.apache.org/jira/browse/HADOOP-17404
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
> Fix For: 3.3.1
>
>
> When Hflush or Hsync APIs are called, a call is made to store backend to 
> commit the data that was appended. 
> If the data size written by Hadoop app is small, i.e. data size :
>  * before any of HFlush/HSync call is made or
>  * between 2 HFlush/Hsync API calls
> is less than write buffer size, 2 separate calls, one for append and another 
> for flush is made,
> Apps that do such small writes eventually end up with almost similar number 
> of calls for flush and append.
> This PR enables Flush to be piggybacked onto append call for such short write 
> scenarios.
>  
> NOTE: The changes is guarded over a config, and is disabled by default until 
> relevant supported changes is made available on all store production clusters.
> New Config added: fs.azure.write.enableappendwithflush



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] LeonGao91 commented on a change in pull request #2483: HDFS-14904. Option to let Balancer prefer top used nodes in each iteration.

2020-12-01 Thread GitBox


LeonGao91 commented on a change in pull request #2483:
URL: https://github.com/apache/hadoop/pull/2483#discussion_r533650857



##
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
##
@@ -435,6 +444,22 @@ private long init(List<DatanodeStorageReport> reports) {
     return Math.max(overLoadedBytes, underLoadedBytes);
   }
 
+  private void sortOverUtilizedNodes() {
+    LOG.info("Sorting over-utilized nodes by capacity" +
+        " to bring down top used datanode capacity faster");
+
+    if (overUtilized instanceof List) {
+      List<Source> list = (List<Source>) overUtilized;
+      list.sort(
+          (Source source1, Source source2) ->

Review comment:
   Good idea; we should use the utilization for the storage type instead of the datanode utilization. Will fix.
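
   The sort itself is just a descending comparator over utilization; a sketch of ordering over-utilized sources so the hottest datanodes are drained first (Source and its utilization accessor are stand-ins, per the storage-type refinement mentioned above):

{noformat}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

class SortTopNodesSketch {
  // Minimal stand-in for the Balancer's Source with a utilization percentage.
  record Source(String node, double utilization) { }

  // Sort over-utilized sources so the most utilized are balanced first.
  static void sortOverUtilized(List<Source> overUtilized) {
    overUtilized.sort(Comparator.comparingDouble(Source::utilization).reversed());
  }

  public static void main(String[] args) {
    List<Source> nodes = new ArrayList<>(List.of(
        new Source("dn1", 91.5), new Source("dn2", 97.2), new Source("dn3", 88.0)));
    sortOverUtilized(nodes);
    System.out.println(nodes); // dn2 (97.2) first, then dn1, then dn3
  }
}
{noformat}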





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] LeonGao91 commented on a change in pull request #2483: HDFS-14904. Option to let Balancer prefer top used nodes in each iteration.

2020-12-01 Thread GitBox


LeonGao91 commented on a change in pull request #2483:
URL: https://github.com/apache/hadoop/pull/2483#discussion_r533650482



##
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
##
@@ -435,6 +444,22 @@ private long init(List<DatanodeStorageReport> reports) {
     return Math.max(overLoadedBytes, underLoadedBytes);
   }
 
+  private void sortOverUtilizedNodes() {
+    LOG.info("Sorting over-utilized nodes by capacity" +
+        " to bring down top used datanode capacity faster");
+
+    if (overUtilized instanceof List) {

Review comment:
   This is for findbugs; I will check if I can get around it with a precondition.
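
   If the instanceof branch is only there to satisfy findbugs, a precondition makes the assumption explicit and fails fast; a sketch of that alternative (Guava's Preconditions, with a placeholder element type):

{noformat}
import java.util.Collection;
import java.util.List;

import com.google.common.base.Preconditions;

class CastWithPreconditionSketch {
  // Fail fast instead of silently skipping the sort when the cast is impossible.
  @SuppressWarnings("unchecked")
  static <T> List<T> asList(Collection<T> overUtilized) {
    Preconditions.checkState(overUtilized instanceof List,
        "expected a List but got %s", overUtilized.getClass());
    return (List<T>) overUtilized;
  }
}
{noformat}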





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] amahussein commented on pull request #2508: HDFS-15703. Don't generate edits for set operations that are no-op

2020-12-01 Thread GitBox


amahussein commented on pull request #2508:
URL: https://github.com/apache/hadoop/pull/2508#issuecomment-736753348


   I have checked the failing test cases. They are unrelated to this change.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] LeonGao91 commented on a change in pull request #2483: HDFS-14904. Option to let Balancer prefer top used nodes in each iteration.

2020-12-01 Thread GitBox


LeonGao91 commented on a change in pull request #2483:
URL: https://github.com/apache/hadoop/pull/2483#discussion_r533649887



##
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
##
@@ -199,7 +199,10 @@
       + "\tWhether to run the balancer during an ongoing HDFS upgrade."
       + "This is usually not desired since it will not affect used space "
       + "on over-utilized machines."
-      + "\n\t[-asService]\tRun as a long running service.";
+      + "\n\t[-asService]\tRun as a long running service."
+      + "\n\t[-sortTopNodes]"
+      + "\tSort over-utilized nodes by capacity to"
+      + " bring down top used datanode faster.";

Review comment:
   Sounds good, will update





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17347) ABFS: Read optimizations

2020-12-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17347?focusedWorklogId=518595&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-518595
 ]

ASF GitHub Bot logged work on HADOOP-17347:
---

Author: ASF GitHub Bot
Created on: 01/Dec/20 18:52
Start Date: 01/Dec/20 18:52
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2464:
URL: https://github.com/apache/hadoop/pull/2464#issuecomment-736749031


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  49m 57s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m  9s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 20s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 58s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 56s |  |  trunk passed  |
   | -0 :warning: |  patch  |   1m 15s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 51s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   1m  0s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 28s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 33s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 127m 50s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2464/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2464 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux bc7b1e4ca565 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6a1d7d9ed25 |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2464/9/testReport/ |
   | Max. process+thread count | 579 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2464/9/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.

[GitHub] [hadoop] hadoop-yetus commented on pull request #2508: HDFS-15703. Don't generate edits for set operations that are no-op

2020-12-01 Thread GitBox


hadoop-yetus commented on pull request #2508:
URL: https://github.com/apache/hadoop/pull/2508#issuecomment-736748415


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 10s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  35m 11s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   1m 13s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 48s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 20s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 21s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m 29s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 26s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 24s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 11s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 42s |  |  
hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 2 unchanged - 1 
fixed = 2 total (was 3)  |
   | +1 :green_heart: |  mvnsite  |   1m 21s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  16m 43s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   3m 14s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 115m 21s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2508/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 212m  9s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestPread |
   |   | hadoop.hdfs.TestDFSClientRetries |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2508/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2508 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 035de6b61bbd 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6a1d7d9ed25 |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2508/1/testReport/ |
   | Max. process+thread count | 2758 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2508/1/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.

[jira] [Commented] (HADOOP-16881) PseudoAuthenticator does not disconnect HttpURLConnection leading to CLOSE_WAIT cnxns

2020-12-01 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17241773#comment-17241773
 ] 

Hadoop QA commented on HADOOP-16881:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 35m  
5s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} The patch does not contain any 
@author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red}{color} | {color:red} The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
52s{color} | {color:blue}{color} | {color:blue} Maven dependency ordering for 
branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 
 6s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
21s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m  
8s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
56s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 46s{color} | {color:green}{color} | {color:green} branch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
27s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
0s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
16s{color} | {color:blue}{color} | {color:blue} Used deprecated FindBugs 
config; considering switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
57s{color} | {color:green}{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue}{color} | {color:blue} Maven dependency ordering for 
patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
14s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
41s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 
41s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
50s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 
50s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
56s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace 
issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2507: HDFS-15702. Fix intermittent falilure of TestDecommission#testAllocAndIBRWhileDecommission.

2020-12-01 Thread GitBox


hadoop-yetus commented on pull request #2507:
URL: https://github.com/apache/hadoop/pull/2507#issuecomment-736726301


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 32s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 48s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   1m 11s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 48s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 19s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 43s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m  4s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  2s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 10s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  4s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   1m  4s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 45s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   3m  6s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  97m  9s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2507/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 44s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 186m 27s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestBlockScanner |
   |   | hadoop.hdfs.server.datanode.TestDataNodeUUID |
   |   | hadoop.hdfs.server.balancer.TestBalancer |
   |   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
   |   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2507/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2507 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 8755b25c609d 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6a1d7d9ed25 |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2507/1/testReport/ |
   | Max. process+thread count | 4390 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2507/1/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.

[jira] [Created] (HADOOP-17404) ABFS: Piggyback flush on Append calls for short writes

2020-12-01 Thread Sneha Vijayarajan (Jira)
Sneha Vijayarajan created HADOOP-17404:
--

 Summary: ABFS: Piggyback flush on Append calls for short writes
 Key: HADOOP-17404
 URL: https://issues.apache.org/jira/browse/HADOOP-17404
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Affects Versions: 3.3.0
Reporter: Sneha Vijayarajan
Assignee: Sneha Vijayarajan
 Fix For: 3.3.1


When the Hflush or Hsync APIs are called, a call is made to the store backend 
to commit the data that was appended. 

If the amount of data written by the Hadoop app is small, i.e. the data 
written:
 * before the first HFlush/HSync call, or

 * between two HFlush/Hsync API calls

is less than the write buffer size, two separate calls are made: one for 
append and another for flush.

Apps that do such small writes eventually end up making almost as many flush 
calls as append calls.

This PR enables the flush to be piggybacked onto the append call for such 
short-write scenarios (a minimal sketch of the pattern follows below).

 

NOTE: The change is guarded by a config, and is disabled by default until the 
required server-side changes are available on all store production clusters.
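
A minimal sketch of the small-write pattern being optimized, assuming an 
ABFS-backed FileSystem instance; the path and payload are illustrative only:

{code:java}
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class SmallWriteSketch {
  // "data" is below the write buffer size, so without piggybacking this
  // costs two store calls: one append followed by one flush.
  static void writeAndCommit(FileSystem fs) throws IOException {
    byte[] data = "small record".getBytes(StandardCharsets.UTF_8);
    try (FSDataOutputStream out = fs.create(new Path("/tmp/part-0001"))) {
      out.write(data);   // buffered append, smaller than the write buffer
      out.hflush();      // commit; today this is a separate flush call
    }
  }
}
{code}

With the piggybacked flush enabled, the commit rides on the append request 
itself, roughly halving the number of store calls for this pattern.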



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] amahussein commented on pull request #2489: HDFS-15695. NN should not let the balancer run in safemode

2020-12-01 Thread GitBox


amahussein commented on pull request #2489:
URL: https://github.com/apache/hadoop/pull/2489#issuecomment-736697359


   `TestBlockTokenWithDFSStriped` is not related. I am going to check if a new 
jira should be filed for that test.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2494: YARN-10380: Import logic of multi-node allocation in CapacityScheduler

2020-12-01 Thread GitBox


hadoop-yetus commented on pull request #2494:
URL: https://github.com/apache/hadoop/pull/2494#issuecomment-736696538


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 32s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 22s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 58s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   0m 52s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 43s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 56s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m  4s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 46s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m 44s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 51s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 51s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   0m 51s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 43s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 46s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 47s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   1m 47s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  89m  6s | 
[/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2494/5/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt)
 |  hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 171m 26s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
   |   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerAutoQueueCreation
 |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2494/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2494 |
   | JIRA Issue | YARN-10380 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 7c01b9f3cae0 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6a1d7d9ed25 |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2494/5/testReport/ |
   | Max. process+thread count | 889 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2494/5/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.

[jira] [Commented] (HADOOP-16881) PseudoAuthenticator does not disconnect HttpURLConnection leading to CLOSE_WAIT cnxns

2020-12-01 Thread Attila Magyar (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17241697#comment-17241697
 ] 

Attila Magyar commented on HADOOP-16881:


[~weichiu], I attached it to the Jira. Is there a way to trigger CI?

> PseudoAuthenticator does not disconnect HttpURLConnection leading to 
> CLOSE_WAIT cnxns
> -
>
> Key: HADOOP-16881
> URL: https://issues.apache.org/jira/browse/HADOOP-16881
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth, security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Attila Magyar
>Priority: Major
> Attachments: HADOOP-16881.1.patch
>
>
> PseudoAuthenticator and KerberosAuthenticator do not disconnect 
> HttpURLConnection, leading to a lot of CLOSE_WAIT connections. The YARN-8414 
> issue is observed due to this.
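
For reference, a minimal sketch of the usual fix pattern (illustrative only, 
not the attached patch):

{code:java}
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

class DisconnectSketch {
  static int authenticate(URL url) throws IOException {
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    try {
      return conn.getResponseCode();  // drives the request
    } finally {
      conn.disconnect();  // releases the socket, avoiding lingering CLOSE_WAIT
    }
  }
}
{code}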



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] amahussein commented on pull request #2487: HDFS-15694. Avoid calling UpdateHeartBeatState inside DataNodeDescriptor

2020-12-01 Thread GitBox


amahussein commented on pull request #2487:
URL: https://github.com/apache/hadoop/pull/2487#issuecomment-736689675


   > 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2487/3/testReport/
   
   I have checked the classes TestBalancer and TestRollingUpgrade, which seem 
to be intermittent in the qbt reports.
   I tested them locally and they pass.
   I will check whether a new jira needs to be filed for those two tests.
   
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17098) Reduce Guava dependency in Hadoop source code

2020-12-01 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17241688#comment-17241688
 ] 

Ahmed Hussein commented on HADOOP-17098:


bq. What is the relationship between this JIRA and HADOOP-17288? I think with 
the latter we can avoid most of the Guava issues no?

[~csun], I am sorry, I hadn't seen your comment.
Shading Guava in HADOOP-17288 solves the conflicts of the Guava library in 
Hadoop upstream, downstream, and the other projects (HBase, etc.).
The Guava byte code will still be loaded and be part of the runtime.
This Jira (when complete) gets rid of the Guava byte code, which has the 
following benefits:

* reduced memory footprint (fewer classes to load)
* better code management: Guava will likely have security updates that force 
Hadoop to adopt new releases and deal with compatibility issues.
* no more struggling to analyze Guava performance
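
To make the distinction concrete, a hypothetical before/after (the relocated 
package name is the one used by the hadoop-thirdparty shading):

{code:java}
// With HADOOP-17288 (shading): Guava byte code still loads, only relocated:
//   import org.apache.hadoop.thirdparty.com.google.common.base.Preconditions;
//   Preconditions.checkNotNull(value, "value");

// With this JIRA (removal): the JDK equivalent, no Guava classes at runtime.
import java.util.Objects;

class GuavaRemovalExample {
  static Object requireValue(Object value) {
    return Objects.requireNonNull(value, "value");  // replaces checkNotNull
  }
}
{code}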

> Reduce Guava dependency in Hadoop source code
> -
>
> Key: HADOOP-17098
> URL: https://issues.apache.org/jira/browse/HADOOP-17098
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>
> Relying on Guava implementation in Hadoop has been painful due to 
> compatibility and vulnerability issues.
>  Guava updates tend to break/deprecate APIs. This made it hard to maintain 
> backward compatibility across Hadoop versions and clients/downstreams.
> Since 3.x requires Java 8+, Java 8 features should be preferred to Guava, 
> reducing the footprint and giving stability to the source code.
> This jira should serve as an umbrella toward an incremental effort to reduce 
> the usage of Guava in the source code and to create subtasks to replace Guava 
> classes with Java features.
> Furthermore, it will be good to add a rule in the pre-commit build to warn 
> against introducing a new Guava usage in certain modules.
> Anyone willing to take part in this code refactoring has to:
>  # Focus on one module at a time in order to reduce the conflicts and the 
> size of the patch. This will significantly help the reviewers.
>  # Run all the unit tests related to the module being affected by the change. 
> It is critical to verify that any change will not break the unit tests, or 
> cause a stable test case to become flaky.
>  # Merge should be done to the following branches:  trunk, branch-3.3, 
> branch-3.2, branch-3.1
>  
> A list of sub tasks replacing Guava APIs with java8 features:
> {code:java}
> com.google.common.io.BaseEncoding#base64()      java.util.Base64
> com.google.common.io.BaseEncoding#base64Url()   java.util.Base64
> com.google.common.base.Joiner.on()              java.lang.String#join() or
>                                                 java.util.stream.Collectors#joining()
> com.google.common.base.Optional#of()            java.util.Optional#of()
> com.google.common.base.Optional#absent()        java.util.Optional#empty()
> com.google.common.base.Optional#fromNullable()  java.util.Optional#ofNullable()
> com.google.common.base.Optional                 java.util.Optional
> com.google.common.base.Predicate                java.util.function.Predicate
> com.google.common.base.Function                 java.util.function.Function
> com.google.common.base.Supplier                 java.util.function.Supplier
> {code}
>  
> I also vote for the replacement of {{Preconditions}} with either a wrapper 
> or Apache commons-lang.
> I believe you guys have dealt with Guava compatibilities in the past and 
> probably have better insights. Any thoughts? [~weichiu], [~gabor.bota], 
> [~ste...@apache.org], [~ayushtkn], [~busbey], [~jeagles], [~kihwal]
>  
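
For concreteness, the Joiner row from the table above, written out as a 
minimal runnable example:

{code:java}
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

class JoinerReplacementExample {
  public static void main(String[] args) {
    List<String> parts = Arrays.asList("a", "b", "c");
    // Guava: com.google.common.base.Joiner.on(",").join(parts)
    String joined = String.join(",", parts);            // "a,b,c"
    String mapped = parts.stream()
        .map(String::toUpperCase)
        .collect(Collectors.joining(","));              // "A,B,C"
    System.out.println(joined + " / " + mapped);
  }
}
{code}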



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17347) ABFS: Read optimizations

2020-12-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17347?focusedWorklogId=518536&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-518536
 ]

ASF GitHub Bot logged work on HADOOP-17347:
---

Author: ASF GitHub Bot
Created on: 01/Dec/20 17:00
Start Date: 01/Dec/20 17:00
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2464:
URL: https://github.com/apache/hadoop/pull/2464#issuecomment-736683730


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  8s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  35m 17s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 50s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 59s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 57s |  |  trunk passed  |
   | -0 :warning: |  patch  |   1m 15s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 15s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  16m 57s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   0m 56s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 26s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 29s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  82m 43s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2464/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2464 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux e13a46fa7bd4 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6a1d7d9ed25 |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2464/8/testReport/ |
   | Max. process+thread count | 572 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2464/8/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.

[jira] [Work logged] (HADOOP-17347) ABFS: Read optimizations

2020-12-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17347?focusedWorklogId=518529&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-518529
 ]

ASF GitHub Bot logged work on HADOOP-17347:
---

Author: ASF GitHub Bot
Created on: 01/Dec/20 16:45
Start Date: 01/Dec/20 16:45
Worklog Time Spent: 10m 
  Work Description: bilaharith commented on a change in pull request #2464:
URL: https://github.com/apache/hadoop/pull/2464#discussion_r533561059



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
##
@@ -517,6 +527,14 @@ public int getWriteBufferSize() {
     return this.writeBufferSize;
   }
 
+  public boolean readSmallFilesCompletely() {

Review comment:
   It was finalised that the criterion for considering a file small is that 
its size is smaller than the read buffer size.
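
   A minimal sketch of that criterion (the helper and parameter names are 
assumptions for illustration, not the PR's exact code):

{code:java}
class SmallFileCriterionSketch {
  // A file qualifies as "small" when its whole content fits in one read buffer.
  static boolean shouldReadCompletely(boolean readSmallFilesCompletely,
      long contentLength, int readBufferSize) {
    return readSmallFilesCompletely && contentLength <= readBufferSize;
  }
}
{code}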





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 518529)
Time Spent: 50m  (was: 40m)

> ABFS: Read optimizations
> 
>
> Key: HADOOP-17347
> URL: https://issues.apache.org/jira/browse/HADOOP-17347
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Optimize read performance for the following scenarios
>  # Read small files completely
>  Files that are smaller than the read buffer size can be considered small 
> files. For such files it is better to read the full file into the 
> AbfsInputStream buffer.
>  # Read last block if the read is for footer
>  If the read is for the last 8 bytes, read the full file.
>  This will optimize reads for parquet files. [Parquet file 
> format|https://www.ellicium.com/parquet-file-format-structure/]
> Both these optimizations will be present under configs as follows
>  # fs.azure.read.smallfilescompletely
>  # fs.azure.read.optimizefooterread
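
For concreteness, a sketch of the footer-read pattern this targets (the path 
and helper are illustrative; only the config names above come from the JIRA):

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class FooterReadSketch {
  // Parquet readers begin by fetching the 8-byte tail (footer length + magic).
  // With fs.azure.read.optimizefooterread enabled, that seek+read can pull the
  // whole file into the stream buffer, so follow-up footer reads stay local.
  static byte[] readTail(FileSystem fs, Path file) throws IOException {
    long len = fs.getFileStatus(file).getLen();
    byte[] tail = new byte[8];
    try (FSDataInputStream in = fs.open(file)) {
      in.seek(len - 8);
      in.readFully(tail);
    }
    return tail;
  }
}
{code}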



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org




[jira] [Work logged] (HADOOP-17347) ABFS: Read optimizations

2020-12-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17347?focusedWorklogId=518528&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-518528
 ]

ASF GitHub Bot logged work on HADOOP-17347:
---

Author: ASF GitHub Bot
Created on: 01/Dec/20 16:44
Start Date: 01/Dec/20 16:44
Worklog Time Spent: 10m 
  Work Description: bilaharith commented on a change in pull request #2464:
URL: https://github.com/apache/hadoop/pull/2464#discussion_r533560221



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
##
@@ -1218,7 +1218,7 @@ public boolean failed() {
   }
 
   @VisibleForTesting
-  AzureBlobFileSystemStore getAbfsStore() {
+  public AzureBlobFileSystemStore getAbfsStore() {

Review comment:
   Done

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
##
@@ -1242,7 +1242,7 @@ boolean getIsNamespaceEnabled() throws 
AzureBlobFileSystemException {
   }
 
   @VisibleForTesting
-  Map<String, Long> getInstrumentationMap() {
+  public Map<String, Long> getInstrumentationMap() {

Review comment:
   Done

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java
##
@@ -47,6 +47,10 @@
 StreamCapabilities {
   private static final Logger LOG = 
LoggerFactory.getLogger(AbfsInputStream.class);
 
+  public static final int FOOTER_SIZE = 8;
+

Review comment:
   Done

##
File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsInputStreamReadFooter.java
##
@@ -0,0 +1,199 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.util.Map;
+import java.util.Random;
+
+import org.junit.Test;
+
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.AbstractAbfsIntegrationTest;
+import org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem;
+
+import static org.apache.hadoop.fs.azurebfs.AbfsStatistic.CONNECTIONS_MADE;
+import static 
org.apache.hadoop.fs.azurebfs.constants.FileSystemConfigurations.ONE_KB;
+import static 
org.apache.hadoop.fs.azurebfs.constants.FileSystemConfigurations.ONE_MB;
+
+public class ITestAbfsInputStreamReadFooter
+    extends AbstractAbfsIntegrationTest {
+
+  private static final int TEN = 10;
+  private static final int TWENTY = 20;
+
+  public ITestAbfsInputStreamReadFooter() throws Exception {
+  }
+
+  @Test
+  public void testOnlyOneServerCallIsMadeWhenTheConfIsTrue() throws Exception {
+    testNumBackendCalls(true);
+  }
+
+  @Test
+  public void testMultipleServerCallsAreMadeWhenTheConfIsFalse()
+      throws Exception {
+    testNumBackendCalls(false);
+  }
+
+  private void testNumBackendCalls(boolean optimizeFooterRead)
+      throws Exception {
+    final AzureBlobFileSystem fs = getFileSystem(optimizeFooterRead);
+    for (int i = 1; i <= 4; i++) {
+      String fileName = methodName.getMethodName() + i;
+      int fileSize = i * ONE_MB;
+      byte[] fileContent = getRandomBytesArray(fileSize);
+      Path testFilePath = createFileWithContent(fs, fileName, fileContent);
+      int length = AbfsInputStream.FOOTER_SIZE;
+      try (FSDataInputStream iStream = fs.open(testFilePath)) {
+        byte[] buffer = new byte[length];
+
+        Map<String, Long> metricMap = fs.getInstrumentationMap();
+        long requestsMadeBeforeTest = metricMap
+            .get(CONNECTIONS_MADE.getStatName());
+
+        iStream.seek(fileSize - 8);
+        iStream.read(buffer, 0, length);
+
+        iStream.seek(fileSize - (TEN * ONE_KB));
+        iStream.read(buffer, 0, length);
+
+        iStream.seek(fileSize - (TWENTY * ONE_KB));
+        iStream.read(buffer, 0, length);
+
+        metricMap = fs.getInstrumentationMap();
+        long requestsMadeAfterTest = metricMap
+            .get(CONNECTIONS_MADE.getStatName());

[GitHub] [hadoop] bilaharith commented on a change in pull request #2464: Draft PR: HADOOP-17347. ABFS: Read optimizations

2020-12-01 Thread GitBox


bilaharith commented on a change in pull request #2464:
URL: https://github.com/apache/hadoop/pull/2464#discussion_r533560221



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
##
@@ -1218,7 +1218,7 @@ public boolean failed() {
   }
 
   @VisibleForTesting
-  AzureBlobFileSystemStore getAbfsStore() {
+  public AzureBlobFileSystemStore getAbfsStore() {

Review comment:
   Done

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
##
@@ -1242,7 +1242,7 @@ boolean getIsNamespaceEnabled() throws 
AzureBlobFileSystemException {
   }
 
   @VisibleForTesting
-  Map<String, Long> getInstrumentationMap() {
+  public Map<String, Long> getInstrumentationMap() {

Review comment:
   Done

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java
##
@@ -47,6 +47,10 @@
 StreamCapabilities {
   private static final Logger LOG = 
LoggerFactory.getLogger(AbfsInputStream.class);
 
+  public static final int FOOTER_SIZE = 8;
+

Review comment:
   Done
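
For context on the constant: the last 8 bytes of a parquet file hold the 4-byte footer length followed by the 4-byte "PAR1" magic, which is why FOOTER_SIZE is 8. A hedged sketch of how a footer read can be detected (names are illustrative; the patch's actual logic may differ):

```java
final class FooterReadCheck {
  // Mirrors the FOOTER_SIZE constant added above: 4-byte parquet footer
  // length + 4-byte "PAR1" magic.
  private static final int FOOTER_SIZE = 8;

  // A read that starts within the trailing FOOTER_SIZE bytes is treated as a
  // footer read, so the whole last block can be fetched in one request.
  static boolean isFooterRead(long readPos, long contentLength) {
    return contentLength > 0 && readPos >= contentLength - FOOTER_SIZE;
  }
}
```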

##
File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsInputStreamReadFooter.java
##
@@ -0,0 +1,199 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.util.Map;
+import java.util.Random;
+
+import org.junit.Test;
+
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.AbstractAbfsIntegrationTest;
+import org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem;
+
+import static org.apache.hadoop.fs.azurebfs.AbfsStatistic.CONNECTIONS_MADE;
+import static 
org.apache.hadoop.fs.azurebfs.constants.FileSystemConfigurations.ONE_KB;
+import static 
org.apache.hadoop.fs.azurebfs.constants.FileSystemConfigurations.ONE_MB;
+
+public class ITestAbfsInputStreamReadFooter
+extends AbstractAbfsIntegrationTest {
+
+  private static final int TEN = 10;
+  private static final int TWENTY = 20;
+
+  public ITestAbfsInputStreamReadFooter() throws Exception {
+  }
+
+  @Test
+  public void testOnlyOneServerCallIsMadeWhenTheConfIsTrue() throws Exception {
+testNumBackendCalls(true);
+  }
+
+  @Test
+  public void testMultipleServerCallsAreMadeWhenTheConfIsFalse()
+  throws Exception {
+testNumBackendCalls(false);
+  }
+
+  private void testNumBackendCalls(boolean optimizeFooterRead)
+  throws Exception {
+final AzureBlobFileSystem fs = getFileSystem(optimizeFooterRead);
+for (int i = 1; i <= 4; i++) {
+  String fileName = methodName.getMethodName() + i;
+  int fileSize = i * ONE_MB;
+  byte[] fileContent = getRandomBytesArray(fileSize);
+  Path testFilePath = createFileWithContent(fs, fileName, fileContent);
+  int length = AbfsInputStream.FOOTER_SIZE;
+  try (FSDataInputStream iStream = fs.open(testFilePath)) {
+byte[] buffer = new byte[length];
+
+Map<String, Long> metricMap = fs.getInstrumentationMap();
+long requestsMadeBeforeTest = metricMap
+.get(CONNECTIONS_MADE.getStatName());
+
+iStream.seek(fileSize - 8);
+iStream.read(buffer, 0, length);
+
+iStream.seek(fileSize - (TEN * ONE_KB));
+iStream.read(buffer, 0, length);
+
+iStream.seek(fileSize - (TWENTY * ONE_KB));
+iStream.read(buffer, 0, length);
+
+metricMap = fs.getInstrumentationMap();
+long requestsMadeAfterTest = metricMap
+.get(CONNECTIONS_MADE.getStatName());
+
+if (optimizeFooterRead) {
+  assertEquals(1, requestsMadeAfterTest - requestsMadeBeforeTest);
+} else {
+  assertEquals(3, requestsMadeAfterTest - requestsMadeBeforeTest);
+}
+  }
+}
+  }
+
+  @Test
+  public void testSeekToEndAndReadWithConfTrue() throws Exception {
+testSeekToEndAndReadWithConf(true);
+  }
+
+  @Test
+  public void testSeekToEndAndReadWithConfFalse() throws Exception {
+testSeekToEndAndReadWithConf(false);
+  }

[jira] [Work logged] (HADOOP-17347) ABFS: Read optimizations

2020-12-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17347?focusedWorklogId=518522&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-518522
 ]

ASF GitHub Bot logged work on HADOOP-17347:
---

Author: ASF GitHub Bot
Created on: 01/Dec/20 16:33
Start Date: 01/Dec/20 16:33
Worklog Time Spent: 10m 
  Work Description: bilaharith commented on a change in pull request #2464:
URL: https://github.com/apache/hadoop/pull/2464#discussion_r533552825



##
File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsInputStreamReadFooter.java
##
@@ -0,0 +1,199 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.util.Map;
+import java.util.Random;
+
+import org.junit.Test;
+
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.AbstractAbfsIntegrationTest;
+import org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem;
+
+import static org.apache.hadoop.fs.azurebfs.AbfsStatistic.CONNECTIONS_MADE;
+import static 
org.apache.hadoop.fs.azurebfs.constants.FileSystemConfigurations.ONE_KB;
+import static 
org.apache.hadoop.fs.azurebfs.constants.FileSystemConfigurations.ONE_MB;
+
+public class ITestAbfsInputStreamReadFooter
+extends AbstractAbfsIntegrationTest {
+
+  private static final int TEN = 10;
+  private static final int TWENTY = 20;
+
+  public ITestAbfsInputStreamReadFooter() throws Exception {
+  }
+
+  @Test
+  public void testOnlyOneServerCallIsMadeWhenTheConfIsTrue() throws Exception {
+testNumBackendCalls(true);
+  }
+
+  @Test
+  public void testMultipleServerCallsAreMadeWhenTheConfIsFalse()
+  throws Exception {
+testNumBackendCalls(false);
+  }
+
+  private void testNumBackendCalls(boolean optimizeFooterRead)
+  throws Exception {
+final AzureBlobFileSystem fs = getFileSystem(optimizeFooterRead);
+for (int i = 1; i <= 4; i++) {
+  String fileName = methodName.getMethodName() + i;
+  int fileSize = i * ONE_MB;

Review comment:
   File sizes are now 2 MB to 6 MB.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 518522)
Time Spent: 0.5h  (was: 20m)

> ABFS: Read optimizations
> 
>
> Key: HADOOP-17347
> URL: https://issues.apache.org/jira/browse/HADOOP-17347
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Optimize read performance for the following scenarios
>  # Read small files completely
>  Files that are of size smaller than the read buffer size can be considered 
> as small files. In case of such files it would be better to read the full 
> file into the AbfsInputStream buffer.
>  # Read last block if the read is for footer
>  If the read is for the last 8 bytes, read the full file.
>  This will optimize reads for parquet files. [Parquet file 
> format|https://www.ellicium.com/parquet-file-format-structure/]
> Both these optimizations will be present under configs as follows
>  # fs.azure.read.smallfilescompletely
>  # fs.azure.read.optimizefooterread
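
As a minimal sketch of how a client could opt in to both optimizations (the config keys come from the description above; the class name and account URI are placeholders, not part of the patch):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AbfsReadOptimizationExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Read files smaller than the read buffer fully into the stream buffer.
    conf.setBoolean("fs.azure.read.smallfilescompletely", true);
    // When a read targets the 8-byte footer, fetch the whole last block.
    conf.setBoolean("fs.azure.read.optimizefooterread", true);
    // Placeholder account URI -- substitute a real container/account.
    Path root = new Path("abfs://container@account.dfs.core.windows.net/");
    FileSystem fs = root.getFileSystem(conf);
    System.out.println("Connected to " + fs.getUri());
  }
}
```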



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[GitHub] [hadoop] bilaharith commented on a change in pull request #2464: Draft PR: HADOOP-17347. ABFS: Read optimizations

2020-12-01 Thread GitBox


bilaharith commented on a change in pull request #2464:
URL: https://github.com/apache/hadoop/pull/2464#discussion_r533552825



##
File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsInputStreamReadFooter.java
##
@@ -0,0 +1,199 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.util.Map;
+import java.util.Random;
+
+import org.junit.Test;
+
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.AbstractAbfsIntegrationTest;
+import org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem;
+
+import static org.apache.hadoop.fs.azurebfs.AbfsStatistic.CONNECTIONS_MADE;
+import static 
org.apache.hadoop.fs.azurebfs.constants.FileSystemConfigurations.ONE_KB;
+import static 
org.apache.hadoop.fs.azurebfs.constants.FileSystemConfigurations.ONE_MB;
+
+public class ITestAbfsInputStreamReadFooter
+extends AbstractAbfsIntegrationTest {
+
+  private static final int TEN = 10;
+  private static final int TWENTY = 20;
+
+  public ITestAbfsInputStreamReadFooter() throws Exception {
+  }
+
+  @Test
+  public void testOnlyOneServerCallIsMadeWhenTheConfIsTrue() throws Exception {
+testNumBackendCalls(true);
+  }
+
+  @Test
+  public void testMultipleServerCallsAreMadeWhenTheConfIsFalse()
+  throws Exception {
+testNumBackendCalls(false);
+  }
+
+  private void testNumBackendCalls(boolean optimizeFooterRead)
+  throws Exception {
+final AzureBlobFileSystem fs = getFileSystem(optimizeFooterRead);
+for (int i = 1; i <= 4; i++) {
+  String fileName = methodName.getMethodName() + i;
+  int fileSize = i * ONE_MB;

Review comment:
   File sizes are now 2 MB to 6 MB.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17347) ABFS: Read optimizations

2020-12-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17347?focusedWorklogId=518520&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-518520
 ]

ASF GitHub Bot logged work on HADOOP-17347:
---

Author: ASF GitHub Bot
Created on: 01/Dec/20 16:32
Start Date: 01/Dec/20 16:32
Worklog Time Spent: 10m 
  Work Description: bilaharith commented on a change in pull request #2464:
URL: https://github.com/apache/hadoop/pull/2464#discussion_r533551769



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java
##
@@ -161,6 +174,14 @@ private int readOneBlock(final byte[] b, final int off, 
final int len) throws IO
 if (off < 0 || len < 0 || len > b.length - off) {
   throw new IndexOutOfBoundsException();
 }
+return 1; // 1 indicates success
+  }
+
+  private int readOneBlock(final byte[] b, final int off, final int len) 
throws IOException {
+int validation = validate(b, off, len);
+if (validation < 1) {
+  return validation;

Review comment:
   That would alter the existing flow. In certain cases it has to return 0, in 
other cases -1, and it has to throw exceptions when validation of the len and 
off values fails.
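
To make that contract concrete, a hedged sketch of a validate helper with the described semantics; it mirrors java.io.InputStream.read(byte[], int, int) and is illustrative, not the Hadoop method:

```java
import java.io.IOException;

final class ReadValidation {
  // Returns 1 when the caller may proceed with the read, 0 when nothing was
  // requested, -1 at end of stream; throws for bad arguments or closed stream.
  static int validate(boolean closed, byte[] b, int off, int len,
      long bytesRemaining) throws IOException {
    if (closed) {
      throw new IOException("Stream is closed!");
    }
    if (b == null) {
      throw new NullPointerException("buffer must not be null");
    }
    if (off < 0 || len < 0 || len > b.length - off) {
      throw new IndexOutOfBoundsException();
    }
    if (len == 0) {
      return 0;
    }
    if (bytesRemaining <= 0) {
      return -1;
    }
    return 1; // 1 indicates success
  }
}
```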





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 518520)
Time Spent: 20m  (was: 10m)

> ABFS: Read optimizations
> 
>
> Key: HADOOP-17347
> URL: https://issues.apache.org/jira/browse/HADOOP-17347
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Optimize read performance for the following scenarios
>  # Read small files completely
>  Files that are of size smaller than the read buffer size can be considered 
> as small files. In case of such files it would be better to read the full 
> file into the AbfsInputStream buffer.
>  # Read last block if the read is for footer
>  If the read is for the last 8 bytes, read the full file.
>  This will optimize reads for parquet files. [Parquet file 
> format|https://www.ellicium.com/parquet-file-format-structure/]
> Both these optimizations will be present under configs as follows
>  # fs.azure.read.smallfilescompletely
>  # fs.azure.read.optimizefooterread



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17347) ABFS: Read optimizations

2020-12-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17347?focusedWorklogId=518519&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-518519
 ]

ASF GitHub Bot logged work on HADOOP-17347:
---

Author: ASF GitHub Bot
Created on: 01/Dec/20 16:32
Start Date: 01/Dec/20 16:32
Worklog Time Spent: 10m 
  Work Description: bilaharith commented on a change in pull request #2464:
URL: https://github.com/apache/hadoop/pull/2464#discussion_r533551588



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java
##
@@ -141,7 +154,7 @@ public synchronized int read(final byte[] b, final int off, 
final int len) throw
 return totalReadBytes > 0 ? totalReadBytes : lastReadBytes;
   }
 
-  private int readOneBlock(final byte[] b, final int off, final int len) 
throws IOException {
+  private int validate(byte[] b, int off, int len) throws IOException {

Review comment:
   That would alter the existing flow. In certain cases it has to return 0, in 
other cases -1, and it has to throw exceptions when validation of the len and 
off values fails.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 518519)
Remaining Estimate: 0h
Time Spent: 10m

> ABFS: Read optimizations
> 
>
> Key: HADOOP-17347
> URL: https://issues.apache.org/jira/browse/HADOOP-17347
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Optimize read performance for the following scenarios
>  # Read small files completely
>  Files that are of size smaller than the read buffer size can be considered 
> as small files. In case of such files it would be better to read the full 
> file into the AbfsInputStream buffer.
>  # Read last block if the read is for footer
>  If the read is for the last 8 bytes, read the full file.
>  This will optimize reads for parquet files. [Parquet file 
> format|https://www.ellicium.com/parquet-file-format-structure/]
> Both these optimizations will be present under configs as follows
>  # fs.azure.read.smallfilescompletely
>  # fs.azure.read.optimizefooterread



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17347) ABFS: Read optimizations

2020-12-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-17347:

Labels: pull-request-available  (was: )

> ABFS: Read optimizations
> 
>
> Key: HADOOP-17347
> URL: https://issues.apache.org/jira/browse/HADOOP-17347
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Optimize read performance for the following scenarios
>  # Read small files completely
>  Files that are of size smaller than the read buffer size can be considered 
> as small files. In case of such files it would be better to read the full 
> file into the AbfsInputStream buffer.
>  # Read last block if the read is for footer
>  If the read is for the last 8 bytes, read the full file.
>  This will optimize reads for parquet files. [Parquet file 
> format|https://www.ellicium.com/parquet-file-format-structure/]
> Both these optimizations will be present under configs as follows
>  # fs.azure.read.smallfilescompletely
>  # fs.azure.read.optimizefooterread



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bilaharith commented on a change in pull request #2464: Draft PR: HADOOP-17347. ABFS: Read optimizations

2020-12-01 Thread GitBox


bilaharith commented on a change in pull request #2464:
URL: https://github.com/apache/hadoop/pull/2464#discussion_r533551769



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java
##
@@ -161,6 +174,14 @@ private int readOneBlock(final byte[] b, final int off, 
final int len) throws IO
 if (off < 0 || len < 0 || len > b.length - off) {
   throw new IndexOutOfBoundsException();
 }
+return 1; // 1 indicates success
+  }
+
+  private int readOneBlock(final byte[] b, final int off, final int len) 
throws IOException {
+int validation = validate(b, off, len);
+if (validation < 1) {
+  return validation;

Review comment:
   That would alter the existing flow. In certain cases it has to return 0, in 
other cases -1, and it has to throw exceptions when validation of the len and 
off values fails.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bilaharith commented on a change in pull request #2464: Draft PR: HADOOP-17347. ABFS: Read optimizations

2020-12-01 Thread GitBox


bilaharith commented on a change in pull request #2464:
URL: https://github.com/apache/hadoop/pull/2464#discussion_r533551588



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java
##
@@ -141,7 +154,7 @@ public synchronized int read(final byte[] b, final int off, 
final int len) throw
 return totalReadBytes > 0 ? totalReadBytes : lastReadBytes;
   }
 
-  private int readOneBlock(final byte[] b, final int off, final int len) 
throws IOException {
+  private int validate(byte[] b, int off, int len) throws IOException {

Review comment:
   That would alter the existing flow. In certain cases it has to return 0, in 
other cases -1, and it has to throw exceptions when validation of the len and 
off values fails.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17347) ABFS: Read optimizations

2020-12-01 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H updated HADOOP-17347:
--
Description: 
Optimize read performance for the following scenarios
 # Read small files completely
 Files that are of size smaller than the read buffer size can be considered as 
small files. In case of such files it would be better to read the full file 
into the AbfsInputStream buffer.
 # Read last block if the read is for footer
 If the read is for the last 8 bytes, read the full file.
 This will optimize reads for parquet files. [Parquet file 
format|https://www.ellicium.com/parquet-file-format-structure/]

Both these optimizations will be present under configs as follows
 # fs.azure.read.smallfilescompletely
 # fs.azure.read.optimizefooterread

  was:
Optimize read performance for the following scenarios
 # Read small files completely
 Files that are of size smaller than the read buffer size can be considered as 
small files. In case of such files it would be better to read the full file 
into the AbfsInputStream buffer.
 # Read last block if the read is for footer
 If the read is for the last 8 bytes, read the full file completely.
 This will optimize reads for parquet files. [Parquet file 
format|https://www.ellicium.com/parquet-file-format-structure/]

Both these optimizations will be present under configs as follows
 # fs.azure.read.smallfilescompletely
 # fs.azure.read.optimizefooterread


> ABFS: Read optimizations
> 
>
> Key: HADOOP-17347
> URL: https://issues.apache.org/jira/browse/HADOOP-17347
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Major
>
> Optimize read performance for the following scenarios
>  # Read small files completely
>  Files that are of size smaller than the read buffer size can be considered 
> as small files. In case of such files it would be better to read the full 
> file into the AbfsInputStream buffer.
>  # Read last block if the read is for footer
>  If the read is for the last 8 bytes, read the full file.
>  This will optimize reads for parquet files. [Parquet file 
> format|https://www.ellicium.com/parquet-file-format-structure/]
> Both these optimizations will be present under configs as follows
>  # fs.azure.read.smallfilescompletely
>  # fs.azure.read.optimizefooterread



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17347) ABFS: Read optimizations

2020-12-01 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H updated HADOOP-17347:
--
Description: 
Optimize read performance for the following scenarios
 # Read small files completely
 Files that are of size smaller than the read buffer size can be considered as 
small files. In case of such files it would be better to read the full file 
into the AbfsInputStream buffer.
 # Read last block if the read is for footer
 If the read is for the last 8 bytes, read the full file completely.
 This will optimize reads for parquet files. [Parquet file 
format|https://www.ellicium.com/parquet-file-format-structure/]

Both these optimizations will be present under configs as follows
 # fs.azure.read.smallfilescompletely
 # fs.azure.read.optimizefooterread

  was:
Optimize read performance for the following scenarios
 # Read small files completely
 Files that are of size smaller than the read buffer size can be considered as 
small files. In case of such files it would be better to read the full file 
into the AbfsInputStream buffer.
 # Read last blok if the read is for footer
 If the read is for the last 8 bytes, read the full file completely.
 This will optimize reads for parquet files. [Parquet file 
format|https://www.ellicium.com/parquet-file-format-structure/]

Both these optimizations will be present under configs as follows
 # fs.azure.read.smallfilescompletely
 # fs.azure.read.optimizefooterread


> ABFS: Read optimizations
> 
>
> Key: HADOOP-17347
> URL: https://issues.apache.org/jira/browse/HADOOP-17347
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Major
>
> Optimize read performance for the following scenarios
>  # Read small files completely
>  Files that are of size smaller than the read buffer size can be considered 
> as small files. In case of such files it would be better to read the full 
> file into the AbfsInputStream buffer.
>  # Read last block if the read is for footer
>  If the read is for the last 8 bytes, read the full file completely.
>  This will optimize reads for parquet files. [Parquet file 
> format|https://www.ellicium.com/parquet-file-format-structure/]
> Both these optimizations will be present under configs as follows
>  # fs.azure.read.smallfilescompletely
>  # fs.azure.read.optimizefooterread



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17347) ABFS: Read optimizations

2020-12-01 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H updated HADOOP-17347:
--
Description: 
Optimize read performance for the following scenarios
 # Read small files completely
 Files that are of size smaller than the read buffer size can be considered as 
small files. In case of such files it would be better to read the full file 
into the AbfsInputStream buffer.
 # Read last blok if the read is for footer
 If the read is for the last 8 bytes, read the full file completely.
 This will optimize reads for parquet files. [Parquet file 
format|https://www.ellicium.com/parquet-file-format-structure/]

Both these optimizations will be present under configs as follows
 # fs.azure.read.smallfilescompletely
 # fs.azure.read.optimizefooterread

  was:
Optimize read performance for the following scenarios
 # Read small files completely
 Files that are of size smaller than the read buffer size can be considered as 
small files. In case of such files it would be better to read the full file 
into the AbfsInputStream buffer.
 # Read last blok if the read is for footer
 If the read is for the last 8 bytes, read the full file completely.
 This will optimize reads for parquet files. [Parquet file 
format|https://www.ellicium.com/parquet-file-format-structure/]


> ABFS: Read optimizations
> 
>
> Key: HADOOP-17347
> URL: https://issues.apache.org/jira/browse/HADOOP-17347
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Major
>
> Optimize read performance for the following scenarios
>  # Read small files completely
>  Files that are of size smaller than the read buffer size can be considered 
> as small files. In case of such files it would be better to read the full 
> file into the AbfsInputStream buffer.
>  # Read last blok if the read is for footer
>  If the read is for the last 8 bytes, read the full file completely.
>  This will optimize reads for parquet files. [Parquet file 
> format|https://www.ellicium.com/parquet-file-format-structure/]
> Both these optimizations will be present under configs as follows
>  # fs.azure.read.smallfilescompletely
>  # fs.azure.read.optimizefooterread



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17347) ABFS: Read optimizations

2020-12-01 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H updated HADOOP-17347:
--
Description: 
Optimize read performance for the following scenarios
 # Read small files completely
 Files that are of size smaller than the read buffer size can be considered as 
small files. In case of such files it would be better to read the full file 
into the AbfsInputStream buffer.
 # Read last blok if the read is for footer
 If the read is for the last 8 bytes, read the full file completely.
 This will optimize reads for parquet files. [Parquet file 
format|https://www.ellicium.com/parquet-file-format-structure/]

  was:Files that are of size smaller than the read buffer size can be 
considered as small files. In case of such files it would be better to read the 
full file into the AbfsInputStream buffer.


> ABFS: Read optimizations
> 
>
> Key: HADOOP-17347
> URL: https://issues.apache.org/jira/browse/HADOOP-17347
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Major
>
> Optimize read performance for the following scenarios
>  # Read small files completely
>  Files that are of size smaller than the read buffer size can be considered 
> as small files. In case of such files it would be better to read the full 
> file into the AbfsInputStream buffer.
>  # Read last blok if the read is for footer
>  If the read is for the last 8 bytes, read the full file completely.
>  This will optimize reads for parquet files. [Parquet file 
> format|https://www.ellicium.com/parquet-file-format-structure/]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17347) ABFS: Read optimizations

2020-12-01 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H updated HADOOP-17347:
--
Summary: ABFS: Read optimizations  (was: ABFS: Read small files completely)

> ABFS: Read optimizations
> 
>
> Key: HADOOP-17347
> URL: https://issues.apache.org/jira/browse/HADOOP-17347
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Major
>
> Files that are of size smaller than the read buffer size can be considered as 
> small files. In case of such files it would be better to read the full file 
> into the AbfsInputStream buffer.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jiwq commented on a change in pull request #2494: YARN-10380: Import logic of multi-node allocation in CapacityScheduler

2020-12-01 Thread GitBox


jiwq commented on a change in pull request #2494:
URL: https://github.com/apache/hadoop/pull/2494#discussion_r533502841



##
File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
##
@@ -544,44 +544,73 @@ static void schedule(CapacityScheduler cs) throws 
InterruptedException{
 if(nodeSize == 0) {
   return;
 }
-int start = random.nextInt(nodeSize);
+if (!cs.multiNodePlacementEnabled) {
+  // First randomize the start point
+  int current = 0;
+  int start = random.nextInt(nodeSize);
 
-// To avoid too verbose DEBUG logging, only print debug log once for
-// every 10 secs.
-boolean printSkipedNodeLogging = false;
-if (Time.monotonicNow() / 1000 % 10 == 0) {
-  printSkipedNodeLogging = (!printedVerboseLoggingForAsyncScheduling);
-} else {
-  printedVerboseLoggingForAsyncScheduling = false;
-}
+  // To avoid too verbose DEBUG logging, only print debug log once for
+  // every 10 secs.
+  boolean printSkipedNodeLogging = false;
+  if (Time.monotonicNow() / 1000 % 10 == 0) {
+printSkipedNodeLogging = (!printedVerboseLoggingForAsyncScheduling);
+  } else {
+printedVerboseLoggingForAsyncScheduling = false;
+  }
+
+  // Allocate containers of node [start, end)
+  for (FiCaSchedulerNode node : nodes) {
+if (current++ >= start) {
+  if (shouldSkipNodeSchedule(node, cs, printSkipedNodeLogging)) {
+continue;
+  }
+  cs.allocateContainersToNode(node.getNodeID(), false);
+}
+  }
 
-// Allocate containers of node [start, end)
-for (FiCaSchedulerNode node : nodes) {
-  if (current++ >= start) {
+  current = 0;
+
+  // Allocate containers of node [0, start)
+  for (FiCaSchedulerNode node : nodes) {
+if (current++ > start) {
+  break;
+}
 if (shouldSkipNodeSchedule(node, cs, printSkipedNodeLogging)) {
   continue;
 }
 cs.allocateContainersToNode(node.getNodeID(), false);
   }
-}
-
-current = 0;
 
-// Allocate containers of node [0, start)
-for (FiCaSchedulerNode node : nodes) {
-  if (current++ > start) {
-break;
+  if (printSkipedNodeLogging) {
+printedVerboseLoggingForAsyncScheduling = true;
   }
-  if (shouldSkipNodeSchedule(node, cs, printSkipedNodeLogging)) {
-continue;
+} else {
+  //Get all partitions

Review comment:
   ```suggestion
 // Get all partitions
   ```
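
The hunk keeps the pre-existing single-node behaviour: pick a random start index, then sweep the nodes in two passes, [start, end) followed by [0, start). A generic, hedged sketch of that wrap-around traversal (FiCaSchedulerNode is replaced by a type parameter and the skip logic is omitted):

```java
import java.util.List;
import java.util.Random;
import java.util.function.Consumer;

final class CircularSweep {
  private static final Random RANDOM = new Random();

  // Visit every element once, starting at a random offset so that scheduling
  // does not always favour the nodes at the head of the collection.
  static <N> void sweep(List<N> nodes, Consumer<N> visit) {
    int size = nodes.size();
    if (size == 0) {
      return;
    }
    int start = RANDOM.nextInt(size);
    for (int i = start; i < size; i++) {  // [start, end)
      visit.accept(nodes.get(i));
    }
    for (int i = 0; i < start; i++) {     // [0, start)
      visit.accept(nodes.get(i));
    }
  }
}
```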

##
File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/CapacityScheduler.java
##
@@ -544,44 +544,73 @@ static void schedule(CapacityScheduler cs) throws 
InterruptedException{
 if(nodeSize == 0) {
   return;
 }
-int start = random.nextInt(nodeSize);
+if (!cs.multiNodePlacementEnabled) {
+  // First randomize the start point
+  int current = 0;
+  int start = random.nextInt(nodeSize);
 
-// To avoid too verbose DEBUG logging, only print debug log once for
-// every 10 secs.
-boolean printSkipedNodeLogging = false;
-if (Time.monotonicNow() / 1000 % 10 == 0) {
-  printSkipedNodeLogging = (!printedVerboseLoggingForAsyncScheduling);
-} else {
-  printedVerboseLoggingForAsyncScheduling = false;
-}
+  // To avoid too verbose DEBUG logging, only print debug log once for
+  // every 10 secs.
+  boolean printSkipedNodeLogging = false;
+  if (Time.monotonicNow() / 1000 % 10 == 0) {
+printSkipedNodeLogging = (!printedVerboseLoggingForAsyncScheduling);
+  } else {
+printedVerboseLoggingForAsyncScheduling = false;
+  }
+
+  // Allocate containers of node [start, end)
+  for (FiCaSchedulerNode node : nodes) {
+if (current++ >= start) {
+  if (shouldSkipNodeSchedule(node, cs, printSkipedNodeLogging)) {
+continue;
+  }
+  cs.allocateContainersToNode(node.getNodeID(), false);
+}
+  }
 
-// Allocate containers of node [start, end)
-for (FiCaSchedulerNode node : nodes) {
-  if (current++ >= start) {
+  current = 0;
+
+  // Allocate containers of node [0, start)
+  for (FiCaSchedulerNode node : nodes) {
+if (current++ > start) {
+  break;
+}
 if (shouldSkipNodeSchedule(node, cs, printSkipedNodeLogging)) {
   continue;
 }
 cs.allocateContainersToNode(node.getNodeID(), false);
   }
-}
-
-current = 0;
 
-// Allocate containers of node [0, start)
-for (FiCaSchedulerNode node : nodes) {
-  if (current++ > start) {
-break;
+  if (printSkipedNodeLogging) {
+printedVerboseLoggingForAsyncScheduling = true;

[GitHub] [hadoop] amahussein commented on a change in pull request #2501: HDFS-15648. TestFileChecksum should be parameterized with a boolean flag.

2020-12-01 Thread GitBox


amahussein commented on a change in pull request #2501:
URL: https://github.com/apache/hadoop/pull/2501#discussion_r533502898



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileChecksum.java
##
@@ -220,7 +217,7 @@ public void testStripedAndReplicatedFileChecksum() throws 
Exception {
 FileChecksum replicatedFileChecksum = getFileChecksum(replicatedFile,
 10, false);
 
-if (expectComparableStripedAndReplicatedFiles()) {
+if (expectComparableStripedAndReplicatedFiles) {

Review comment:
   We could keep the methods that return the `isCompositeCRC` flag. This 
would reduce the diff and keep the implementation flexible and maintainable for 
future changes.

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileChecksum.java
##
@@ -77,6 +81,31 @@
   private String stripedFile2 = ecDir + "/stripedFileChecksum2";
   private String replicatedFile = "/replicatedFileChecksum";
 
+  private String checksumCombineMode;
+  private boolean expectComparableStripedAndReplicatedFiles;
+  private boolean expectComparableDifferentBlockSizeReplicatedFiles;

Review comment:
   Do we need three boolean fields? It seems that the three booleans always have 
the same value.
   Maybe we should simplify the implementation by replacing the three fields 
with a single field (i.e., `isCompositeCRC`).
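
For reference, a hedged sketch of the JUnit 4 parameterization pattern under discussion, collapsed to the single flag suggested above (the class name and expectation bodies are illustrative):

```java
import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;

@RunWith(Parameterized.class)
public class FileChecksumFlagExample {

  // Run the whole suite twice: once per checksum combine mode.
  @Parameterized.Parameters(name = "isCompositeCrc={0}")
  public static Collection<Object[]> data() {
    return Arrays.asList(new Object[][] {{false}, {true}});
  }

  private final boolean isCompositeCrc;

  public FileChecksumFlagExample(boolean isCompositeCrc) {
    this.isCompositeCrc = isCompositeCrc;
  }

  @Test
  public void testChecksumComparability() {
    // One flag drives every expectation instead of three boolean fields.
    if (isCompositeCrc) {
      // assertions for COMPOSITE_CRC mode would go here
    } else {
      // assertions for the default MD5 mode would go here
    }
  }
}
```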





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] amahussein opened a new pull request #2508: HDFS-15703. Don't generate edits for set operations that are no-op

2020-12-01 Thread GitBox


amahussein opened a new pull request #2508:
URL: https://github.com/apache/hadoop/pull/2508


   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16881) PseudoAuthenticator does not disconnect HttpURLConnection leading to CLOSE_WAIT cnxns

2020-12-01 Thread Attila Magyar (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Magyar updated HADOOP-16881:
---
Status: Patch Available  (was: Open)

> PseudoAuthenticator does not disconnect HttpURLConnection leading to 
> CLOSE_WAIT cnxns
> -
>
> Key: HADOOP-16881
> URL: https://issues.apache.org/jira/browse/HADOOP-16881
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth, security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Attila Magyar
>Priority: Major
> Attachments: HADOOP-16881.1.patch
>
>
> PseudoAuthenticator and KerberosAuthenticator do not disconnect 
> HttpURLConnection, leading to a lot of CLOSE_WAIT connections. The YARN-8414 
> issue is observed due to this.
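
A hedged sketch of the kind of fix implied here, not the attached patch: disconnect the connection in a finally block so the underlying socket is released instead of lingering in CLOSE_WAIT (the URL handling and error paths are simplified for illustration):

```java
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public final class DisconnectExample {
  static int fetchStatus(String url) throws Exception {
    HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
    try {
      conn.setRequestMethod("GET");
      int status = conn.getResponseCode();
      // Drain the body so the connection can be reused or cleanly closed.
      try (InputStream in = conn.getInputStream()) {
        while (in.read() != -1) {
          // discard
        }
      }
      return status;
    } finally {
      conn.disconnect(); // releases the socket; avoids CLOSE_WAIT buildup
    }
  }
}
```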



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16881) PseudoAuthenticator does not disconnect HttpURLConnection leading to CLOSE_WAIT cnxns

2020-12-01 Thread Attila Magyar (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Magyar updated HADOOP-16881:
---
Attachment: HADOOP-16881.1.patch

> PseudoAuthenticator does not disconnect HttpURLConnection leading to 
> CLOSE_WAIT cnxns
> -
>
> Key: HADOOP-16881
> URL: https://issues.apache.org/jira/browse/HADOOP-16881
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth, security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Attila Magyar
>Priority: Major
> Attachments: HADOOP-16881.1.patch
>
>
> PseudoAuthenticator and KerberosAuthenticator do not disconnect 
> HttpURLConnection, leading to a lot of CLOSE_WAIT connections. The YARN-8414 
> issue is observed due to this.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] iwasakims commented on pull request #2501: HDFS-15648. TestFileChecksum should be parameterized with a boolean flag.

2020-12-01 Thread GitBox


iwasakims commented on pull request #2501:
URL: https://github.com/apache/hadoop/pull/2501#issuecomment-736611599


   The failure of TestDecommissionWithBackoffMonitor is clearly unrelated. 
[HDFS-15702](https://issues.apache.org/jira/browse/HDFS-15702) should fix it.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] iwasakims opened a new pull request #2507: HDFS-15702. Fix intermittent falilure of TestDecommission#testAllocAndIBRWhileDecommission.

2020-12-01 Thread GitBox


iwasakims opened a new pull request #2507:
URL: https://github.com/apache/hadoop/pull/2507


   https://issues.apache.org/jira/browse/HDFS-15702
   
   dfs.blockreport.intervalMsec is set to 1000 in TestDecommission. While 
testAllocAndIBRWhileDecommission tries to stop IBRs via 
DataNodeTestUtils#pauseIBR, an IBR can still be sent in the code path of an FBR 
(BPServiceActor#blockReport). Avoiding FBRs (except for the first BR) by 
setting dfs.blockreport.intervalMsec to a sufficiently long value should fix 
the issue; testAllocAndIBRWhileDecommission does not depend on FBRs.
   
   The test failure cannot be reproduced in 100 runs after applying the patch 
on my local machine.
   
   ```
   for i in `seq 100` ; do echo $i && mvn test -DignoreTestFailure=false 
-Dtest=TestDecommission#testAllocAndIBRWhileDecommission || break ; done
   ```
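
A hedged sketch of the configuration change described above (DFS_BLOCKREPORT_INTERVAL_MSEC_KEY is the DFSConfigKeys constant for dfs.blockreport.intervalMsec; the exact value chosen by the patch may differ):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;

final class BlockReportTestConf {
  // Make the full block report (FBR) interval effectively infinite so only
  // incremental block reports (IBRs) reach the NameNode during the test.
  static Configuration create() {
    Configuration conf = new Configuration();
    conf.setLong(DFSConfigKeys.DFS_BLOCKREPORT_INTERVAL_MSEC_KEY,
        Long.MAX_VALUE); // the test previously used 1000 ms
    return conf;
  }
}
```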
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16881) PseudoAuthenticator does not disconnect HttpURLConnection leading to CLOSE_WAIT cnxns

2020-12-01 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17241596#comment-17241596
 ] 

Wei-Chiu Chuang commented on HADOOP-16881:
--

[~amagyar] since you have a patch, I added you to the contributor list and you 
should be able to attach a patch file now.

> PseudoAuthenticator does not disconnect HttpURLConnection leading to 
> CLOSE_WAIT cnxns
> -
>
> Key: HADOOP-16881
> URL: https://issues.apache.org/jira/browse/HADOOP-16881
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth, security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Attila Magyar
>Priority: Major
>
> PseudoAuthenticator and KerberosAuthenticator do not disconnect 
> HttpURLConnection, leading to a lot of CLOSE_WAIT connections. The YARN-8414 
> issue is observed due to this.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16881) PseudoAuthenticator does not disconnect HttpURLConnection leading to CLOSE_WAIT cnxns

2020-12-01 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HADOOP-16881:


Assignee: Attila Magyar  (was: Prabhu Joseph)

> PseudoAuthenticator does not disconnect HttpURLConnection leading to 
> CLOSE_WAIT cnxns
> -
>
> Key: HADOOP-16881
> URL: https://issues.apache.org/jira/browse/HADOOP-16881
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth, security
>Affects Versions: 3.3.0
>Reporter: Prabhu Joseph
>Assignee: Attila Magyar
>Priority: Major
>
> PseudoAuthenticator and KerberosAuthenticator do not disconnect 
> HttpURLConnection, leading to a lot of CLOSE_WAIT connections. The YARN-8414 
> issue is observed due to this.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2494: YARN-10380: Import logic of multi-node allocation in CapacityScheduler

2020-12-01 Thread GitBox


hadoop-yetus commented on pull request #2494:
URL: https://github.com/apache/hadoop/pull/2494#issuecomment-736575193


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 31s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 43s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 59s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   0m 52s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 43s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 59s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m  0s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 47s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m 46s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 51s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   0m 51s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 44s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 33s | 
[/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2494/4/artifact/out/diff-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt)
 |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 107 unchanged - 0 fixed = 108 total (was 107)  |
   | +1 :green_heart: |  mvnsite  |   0m 46s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 53s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   1m 48s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  89m 15s | 
[/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2494/4/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt)
 |  hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 32s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 170m  0s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2494/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2494 |
   | JIRA Issue | YARN-10380 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 04738fc9bb84 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6a1d7d9ed25 |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |

[jira] [Commented] (HADOOP-14115) SimpleDateFormatter's are constructed w/default Locale, causing malformed dates on some platforms

2020-12-01 Thread Kevin Risden (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-14115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17241559#comment-17241559
 ] 

Kevin Risden commented on HADOOP-14115:
---

HADOOP-15681 might be the same as this ticket.

> SimpleDateFormatter's are constructed w/default Locale, causing malformed 
> dates on some platforms
> 
>
> Key: HADOOP-14115
> URL: https://issues.apache.org/jira/browse/HADOOP-14115
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chris M. Hostetter
>Priority: Major
>
> In at least one place I know of in Hadoop, {{SimpleDateFormatter}} is used to 
> serialize a {{Date}} object in a format intended for machine consumption -- and 
> should be following strict formatting rules -- but the 
> {{SimpleDateFormatter}}  instance is not constructed with an explicit 
> {{Locale}} so the platform default is used instead.  This causes things like 
> "Day name in week" ({{E}}) to generate unexpected results depending on the 
> Locale of the machine where the code is running, resulting in date-time 
> strings that violate the formatting rules.
> A specific example of this is {{AuthenticationFilter.createAuthCookie}} which 
> has code that looks like this...
> {code}
>   Date date = new Date(expires);
>   SimpleDateFormat df = new SimpleDateFormat("EEE, " +
>   "dd-MMM- HH:mm:ss zzz");
>   df.setTimeZone(TimeZone.getTimeZone("GMT"));
>   sb.append("; Expires=").append(df.format(date));
> {code}
> ...which can cause invalid expiration attributes in the {{Set-Cookies}} 
> header like this (as noted by http-commons's {{ResponseProcessCookies}} 
> class...
> {noformat}
> WARN: Invalid cookie header: "Set-Cookie: hadoop.auth=; Path=/; 
> Domain=127.0.0.1; Expires=Ara, 01-Sa-1970 00:00:00 GMT; HttpOnly". Invalid 
> 'expires' attribute: Ara, 01-Sa-1970 00:00:00 GMT
> {noformat}
> There are very likely many other places in the hadoop code base where the 
> default {{Locale}} is being unintentionally used when formatting 
> Dates/Numbers.
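
A hedged sketch of the standard remedy: pin the Locale (and keep the GMT time zone) so the generated attribute stays valid regardless of the platform default. The helper below is illustrative, not the actual AuthenticationFilter change:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

final class CookieExpiresFormat {
  // Locale.US guarantees English day/month names ("Thu", "Jan"), as the
  // cookie 'Expires' attribute requires on every platform.
  static String format(long expires) {
    SimpleDateFormat df =
        new SimpleDateFormat("EEE, dd-MMM-yyyy HH:mm:ss zzz", Locale.US);
    df.setTimeZone(TimeZone.getTimeZone("GMT"));
    return df.format(new Date(expires));
  }
}
```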



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15681) AuthenticationFilter should generate valid date format for Set-Cookie header regardless of default Locale

2020-12-01 Thread Kevin Risden (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17241558#comment-17241558
 ] 

Kevin Risden commented on HADOOP-15681:
---

HADOOP-14115 might be related to, a duplicate of, or resolved by this ticket.

> AuthenticationFilter should generate valid date format for Set-Cookie header 
> regardless of default Locale
> -
>
> Key: HADOOP-15681
> URL: https://issues.apache.org/jira/browse/HADOOP-15681
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.2.0
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Minor
> Fix For: 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HADOOP-15681.patch
>
>
> Hi guys,
> When I tried to set up Hadoop Kerberos authentication for Solr (HTTP2), I 
> encountered this exception:
> {code}
> java.lang.IllegalArgumentException: null
>   at org.eclipse.jetty.http2.hpack.Huffman.octetsNeeded(Huffman.java:435) 
> ~[http2-hpack-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.http2.hpack.Huffman.octetsNeeded(Huffman.java:409) 
> ~[http2-hpack-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.hpack.HpackEncoder.encodeValue(HpackEncoder.java:368) 
> ~[http2-hpack-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.hpack.HpackEncoder.encode(HpackEncoder.java:302) 
> ~[http2-hpack-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.hpack.HpackEncoder.encode(HpackEncoder.java:179) 
> ~[http2-hpack-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.generator.HeadersGenerator.generateHeaders(HeadersGenerator.java:72)
>  ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.generator.HeadersGenerator.generate(HeadersGenerator.java:56)
>  ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.generator.Generator.control(Generator.java:80) 
> ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.HTTP2Session$ControlEntry.generate(HTTP2Session.java:1163)
>  ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.http2.HTTP2Flusher.process(HTTP2Flusher.java:184) 
> ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.util.IteratingCallback.processing(IteratingCallback.java:241)
>  ~[jetty-util-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.util.IteratingCallback.iterate(IteratingCallback.java:224) 
> ~[jetty-util-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.http2.HTTP2Session.frame(HTTP2Session.java:685) 
> ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.http2.HTTP2Session.frames(HTTP2Session.java:657) 
> ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.http2.HTTP2Stream.headers(HTTP2Stream.java:107) 
> ~[http2-common-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.server.HttpTransportOverHTTP2.sendHeadersFrame(HttpTransportOverHTTP2.java:235)
>  ~[http2-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.http2.server.HttpTransportOverHTTP2.send(HttpTransportOverHTTP2.java:134)
>  ~[http2-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.server.HttpChannel.sendResponse(HttpChannel.java:790) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.server.HttpChannel.write(HttpChannel.java:846) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:240) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:216) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.server.HttpOutput.close(HttpOutput.java:298) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.server.HttpWriter.close(HttpWriter.java:49) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.server.ResponseWriter.close(ResponseWriter.java:163) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at org.eclipse.jetty.server.Response.closeOutput(Response.java:1038) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.server.handler.ErrorHandler.generateAcceptableResponse(ErrorHandler.java:178)
>  ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
>   at 
> org.eclipse.jetty.server.handler.ErrorHandler.doError(ErrorHandler.java:142) 
> ~[jetty-server-9.4.11.v20180605.jar:9.4.11.v20180605]
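
The root cause is visible at the top of the trace: HPACK Huffman-codes header 
values octet by octet, so a header value containing characters above 0xFF -- 
which a locale-formatted {{Expires}} date can produce, e.g. Turkish month 
abbreviations -- cannot be encoded. A standalone illustration of that 
constraint ({{isEncodable}} is a made-up helper, not Jetty API):

{code}
public class HeaderValueCheck {
  // HPACK (RFC 7541) operates on octets; a char above 0xFF in a header
  // value has no single-octet representation, so encoders reject it.
  static boolean isEncodable(String headerValue) {
    for (int i = 0; i < headerValue.length(); i++) {
      if (headerValue.charAt(i) > 0xFF) {
        return false;
      }
    }
    return true;
  }

  public static void main(String[] args) {
    System.out.println(isEncodable("Thu, 01-Jan-1970 00:00:00 GMT")); // true
    System.out.println(isEncodable("Per, 01-Şub-1970 00:00:00 GMT")); // false
  }
}
{code}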

[jira] [Work logged] (HADOOP-12760) sun.misc.Cleaner has moved to a new location in OpenJDK 9

2020-12-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12760?focusedWorklogId=518435&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-518435
 ]

ASF GitHub Bot logged work on HADOOP-12760:
---

Author: ASF GitHub Bot
Created on: 01/Dec/20 14:00
Start Date: 01/Dec/20 14:00
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2506:
URL: https://github.com/apache/hadoop/pull/2506#issuecomment-736569838


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  12m 59s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ branch-2.10 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  16m 53s |  branch-2.10 passed  |
   | +1 :green_heart: |  compile  |  14m 19s |  branch-2.10 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 42s |  branch-2.10 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 26s |  branch-2.10 passed  |
   | +1 :green_heart: |  javadoc  |   1m 13s |  branch-2.10 passed  |
   | +0 :ok: |  spotbugs  |   2m 36s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 33s |  branch-2.10 passed  |
   ||| _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 25s |  hadoop-common in the patch failed.  |
   | -1 :x: |  compile  |   0m 48s |  root in the patch failed.  |
   | -1 :x: |  javac  |   0m 48s |  root in the patch failed.  |
   | -0 :warning: |  checkstyle  |   0m 32s |  
hadoop-common-project/hadoop-common: The patch generated 7 new + 131 unchanged 
- 0 fixed = 138 total (was 131)  |
   | -1 :x: |  mvnsite  |   0m 23s |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | -1 :x: |  javadoc  |   0m 19s |  hadoop-common in the patch failed.  |
   | -1 :x: |  findbugs  |   0m 25s |  hadoop-common in the patch failed.  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 21s |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 31s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  56m 49s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2506/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2506 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 2dbd2e5a7b21 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-2.10 / 2fe36b0 |
   | Default Java | Oracle Corporation-1.7.0_95-b00 |
   | mvninstall | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2506/2/artifact/out/patch-mvninstall-hadoop-common-project_hadoop-common.txt
 |
   | compile | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2506/2/artifact/out/patch-compile-root.txt
 |
   | javac | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2506/2/artifact/out/patch-compile-root.txt
 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2506/2/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
   | mvnsite | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2506/2/artifact/out/patch-mvnsite-hadoop-common-project_hadoop-common.txt
 |
   | javadoc | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2506/2/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common.txt
 |
   | findbugs | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2506/2/artifact/out/patch-findbugs-hadoop-common-project_hadoop-common.txt
 |
   | unit | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2506/2/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2506/2/testReport/ |
   | Max. process+thread count | 99 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2506/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.0.1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

[GitHub] [hadoop] hadoop-yetus commented on pull request #2506: [WIP] Backport HADOOP-12760 to support JDK 11

2020-12-01 Thread GitBox


hadoop-yetus commented on pull request #2506:
URL: https://github.com/apache/hadoop/pull/2506#issuecomment-736569838


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  12m 59s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ branch-2.10 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  16m 53s |  branch-2.10 passed  |
   | +1 :green_heart: |  compile  |  14m 19s |  branch-2.10 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 42s |  branch-2.10 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 26s |  branch-2.10 passed  |
   | +1 :green_heart: |  javadoc  |   1m 13s |  branch-2.10 passed  |
   | +0 :ok: |  spotbugs  |   2m 36s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 33s |  branch-2.10 passed  |
   ||| _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 25s |  hadoop-common in the patch failed.  |
   | -1 :x: |  compile  |   0m 48s |  root in the patch failed.  |
   | -1 :x: |  javac  |   0m 48s |  root in the patch failed.  |
   | -0 :warning: |  checkstyle  |   0m 32s |  
hadoop-common-project/hadoop-common: The patch generated 7 new + 131 unchanged 
- 0 fixed = 138 total (was 131)  |
   | -1 :x: |  mvnsite  |   0m 23s |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | -1 :x: |  javadoc  |   0m 19s |  hadoop-common in the patch failed.  |
   | -1 :x: |  findbugs  |   0m 25s |  hadoop-common in the patch failed.  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 21s |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 31s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  56m 49s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2506/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2506 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 2dbd2e5a7b21 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-2.10 / 2fe36b0 |
   | Default Java | Oracle Corporation-1.7.0_95-b00 |
   | mvninstall | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2506/2/artifact/out/patch-mvninstall-hadoop-common-project_hadoop-common.txt
 |
   | compile | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2506/2/artifact/out/patch-compile-root.txt
 |
   | javac | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2506/2/artifact/out/patch-compile-root.txt
 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2506/2/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
   | mvnsite | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2506/2/artifact/out/patch-mvnsite-hadoop-common-project_hadoop-common.txt
 |
   | javadoc | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2506/2/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common.txt
 |
   | findbugs | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2506/2/artifact/out/patch-findbugs-hadoop-common-project_hadoop-common.txt
 |
   | unit | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2506/2/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2506/2/testReport/ |
   | Max. process+thread count | 99 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2506/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.0.1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

[jira] [Work logged] (HADOOP-12760) sun.misc.Cleaner has moved to a new location in OpenJDK 9

2020-12-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12760?focusedWorklogId=518426&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-518426
 ]

ASF GitHub Bot logged work on HADOOP-12760:
---

Author: ASF GitHub Bot
Created on: 01/Dec/20 13:45
Start Date: 01/Dec/20 13:45
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2506:
URL: https://github.com/apache/hadoop/pull/2506#issuecomment-736560680


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  17m 25s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ branch-2.10 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  16m 58s |  branch-2.10 passed  |
   | +1 :green_heart: |  compile  |  14m 14s |  branch-2.10 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 41s |  branch-2.10 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 28s |  branch-2.10 passed  |
   | +1 :green_heart: |  javadoc  |   1m 10s |  branch-2.10 passed  |
   | +0 :ok: |  spotbugs  |   2m 35s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 32s |  branch-2.10 passed  |
   ||| _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 20s |  hadoop-common in the patch failed.  |
   | -1 :x: |  compile  |   0m 46s |  root in the patch failed.  |
   | -1 :x: |  javac  |   0m 46s |  root in the patch failed.  |
   | -0 :warning: |  checkstyle  |   0m 30s |  
hadoop-common-project/hadoop-common: The patch generated 7 new + 131 unchanged 
- 0 fixed = 138 total (was 131)  |
   | -1 :x: |  mvnsite  |   0m 24s |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | -1 :x: |  javadoc  |   0m 18s |  hadoop-common in the patch failed.  |
   | -1 :x: |  findbugs  |   0m 22s |  hadoop-common in the patch failed.  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 24s |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 30s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  61m  8s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2506/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2506 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 1d288f3daaf9 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-2.10 / 2fe36b0 |
   | Default Java | Oracle Corporation-1.7.0_95-b00 |
   | mvninstall | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2506/1/artifact/out/patch-mvninstall-hadoop-common-project_hadoop-common.txt
 |
   | compile | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2506/1/artifact/out/patch-compile-root.txt
 |
   | javac | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2506/1/artifact/out/patch-compile-root.txt
 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2506/1/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
   | mvnsite | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2506/1/artifact/out/patch-mvnsite-hadoop-common-project_hadoop-common.txt
 |
   | javadoc | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2506/1/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common.txt
 |
   | findbugs | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2506/1/artifact/out/patch-findbugs-hadoop-common-project_hadoop-common.txt
 |
   | unit | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2506/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2506/1/testReport/ |
   | Max. process+thread count | 66 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2506/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.0.1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

[GitHub] [hadoop] hadoop-yetus commented on pull request #2506: [WIP] Backport HADOOP-12760 to support JDK 11

2020-12-01 Thread GitBox


hadoop-yetus commented on pull request #2506:
URL: https://github.com/apache/hadoop/pull/2506#issuecomment-736560680


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  17m 25s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ branch-2.10 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  16m 58s |  branch-2.10 passed  |
   | +1 :green_heart: |  compile  |  14m 14s |  branch-2.10 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 41s |  branch-2.10 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 28s |  branch-2.10 passed  |
   | +1 :green_heart: |  javadoc  |   1m 10s |  branch-2.10 passed  |
   | +0 :ok: |  spotbugs  |   2m 35s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 32s |  branch-2.10 passed  |
   ||| _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 20s |  hadoop-common in the patch failed.  |
   | -1 :x: |  compile  |   0m 46s |  root in the patch failed.  |
   | -1 :x: |  javac  |   0m 46s |  root in the patch failed.  |
   | -0 :warning: |  checkstyle  |   0m 30s |  
hadoop-common-project/hadoop-common: The patch generated 7 new + 131 unchanged 
- 0 fixed = 138 total (was 131)  |
   | -1 :x: |  mvnsite  |   0m 24s |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | -1 :x: |  javadoc  |   0m 18s |  hadoop-common in the patch failed.  |
   | -1 :x: |  findbugs  |   0m 22s |  hadoop-common in the patch failed.  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  |   0m 24s |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 30s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  61m  8s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2506/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2506 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 1d288f3daaf9 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-2.10 / 2fe36b0 |
   | Default Java | Oracle Corporation-1.7.0_95-b00 |
   | mvninstall | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2506/1/artifact/out/patch-mvninstall-hadoop-common-project_hadoop-common.txt
 |
   | compile | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2506/1/artifact/out/patch-compile-root.txt
 |
   | javac | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2506/1/artifact/out/patch-compile-root.txt
 |
   | checkstyle | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2506/1/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
   | mvnsite | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2506/1/artifact/out/patch-mvnsite-hadoop-common-project_hadoop-common.txt
 |
   | javadoc | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2506/1/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common.txt
 |
   | findbugs | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2506/1/artifact/out/patch-findbugs-hadoop-common-project_hadoop-common.txt
 |
   | unit | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2506/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2506/1/testReport/ |
   | Max. process+thread count | 66 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2506/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.0.1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

[jira] [Work logged] (HADOOP-12760) sun.misc.Cleaner has moved to a new location in OpenJDK 9

2020-12-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12760?focusedWorklogId=518407&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-518407
 ]

ASF GitHub Bot logged work on HADOOP-12760:
---

Author: ASF GitHub Bot
Created on: 01/Dec/20 12:36
Start Date: 01/Dec/20 12:36
Worklog Time Spent: 10m 
  Work Description: wangyum removed a comment on pull request #2506:
URL: https://github.com/apache/hadoop/pull/2506#issuecomment-736522152


   Is `branch-2.10` an active cluster?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 518407)
Time Spent: 40m  (was: 0.5h)

> sun.misc.Cleaner has moved to a new location in OpenJDK 9
> -
>
> Key: HADOOP-12760
> URL: https://issues.apache.org/jira/browse/HADOOP-12760
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chris Hegarty
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.2.0
>
> Attachments: HADOOP-12760.00.patch, HADOOP-12760.01.patch, 
> HADOOP-12760.02.patch, HADOOP-12760.03.patch, HADOOP-12760.04.patch, 
> HADOOP-12760.05.patch, HADOOP-12760.06.patch, HADOOP-12760.07.patch, 
> HADOOP-12760.08.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> This is a heads-up: there are upcoming changes in JDK 9 that will require, at 
> least, a small update to org.apache.hadoop.crypto.CryptoStreamUtils & 
> org.apache.hadoop.io.nativeio.NativeIO.
> OpenJDK issue no. 8148117: "Move sun.misc.Cleaner to jdk.internal.ref" [1], 
> will move the Cleaner class from sun.misc to jdk.internal.ref. There is 
> ongoing discussion about the possibility of providing a public supported API, 
> maybe in the JDK 9 timeframe, for releasing NIO direct buffer native memory, 
> see the core-libs-dev mail thread [2]. At the very least CryptoStreamUtils & 
> NativeIO [3] should be updated to have knowledge of the new location of the 
> JDK Cleaner.
> [1] https://bugs.openjdk.java.net/browse/JDK-8148117
> [2] 
> http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-January/038243.html
> [3] https://github.com/apache/hadoop/search?utf8=✓&q=sun.misc.Cleaner
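
For reference, a minimal sketch of the version-tolerant cleanup the report asks 
for: prefer the JDK 9+ {{Unsafe.invokeCleaner(ByteBuffer)}} entry point and 
fall back to the JDK 8 {{DirectBuffer.cleaner()}} path, both via reflection so 
the same class loads on either runtime. {{BufferCleaner}} is an illustrative 
name, not the actual Hadoop patch.

{code}
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.nio.ByteBuffer;

public final class BufferCleaner {
  // Best-effort release of a direct buffer's native memory on JDK 8 and 9+.
  public static void free(ByteBuffer buffer) {
    if (buffer == null || !buffer.isDirect()) {
      return;
    }
    try {
      // JDK 9+: Unsafe.invokeCleaner(ByteBuffer) hides the relocated
      // jdk.internal.ref.Cleaner behind a stable entry point.
      Class<?> unsafeClass = Class.forName("sun.misc.Unsafe");
      Field f = unsafeClass.getDeclaredField("theUnsafe");
      f.setAccessible(true);
      Object unsafe = f.get(null);
      Method invokeCleaner = unsafeClass.getMethod("invokeCleaner", ByteBuffer.class);
      invokeCleaner.invoke(unsafe, buffer);
    } catch (ReflectiveOperationException jdk9PathUnavailable) {
      try {
        // JDK 8: ((sun.nio.ch.DirectBuffer) buffer).cleaner().clean().
        Method cleanerMethod = buffer.getClass().getMethod("cleaner");
        cleanerMethod.setAccessible(true);
        Object cleaner = cleanerMethod.invoke(buffer);
        if (cleaner != null) {
          cleaner.getClass().getMethod("clean").invoke(cleaner);
        }
      } catch (ReflectiveOperationException ignored) {
        // Give up; the memory is reclaimed when the buffer is GC'd.
      }
    }
  }
}
{code}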



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] wangyum removed a comment on pull request #2506: [WIP] Backport HADOOP-12760 to support JDK 11

2020-12-01 Thread GitBox


wangyum removed a comment on pull request #2506:
URL: https://github.com/apache/hadoop/pull/2506#issuecomment-736522152


   Is `branch-2.10` an active cluster?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-12760) sun.misc.Cleaner has moved to a new location in OpenJDK 9

2020-12-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12760?focusedWorklogId=518406&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-518406
 ]

ASF GitHub Bot logged work on HADOOP-12760:
---

Author: ASF GitHub Bot
Created on: 01/Dec/20 12:30
Start Date: 01/Dec/20 12:30
Worklog Time Spent: 10m 
  Work Description: wangyum commented on pull request #2506:
URL: https://github.com/apache/hadoop/pull/2506#issuecomment-736522152


   Is `branch-2.10` an active cluster?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 518406)
Time Spent: 0.5h  (was: 20m)

> sun.misc.Cleaner has moved to a new location in OpenJDK 9
> -
>
> Key: HADOOP-12760
> URL: https://issues.apache.org/jira/browse/HADOOP-12760
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chris Hegarty
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.2.0
>
> Attachments: HADOOP-12760.00.patch, HADOOP-12760.01.patch, 
> HADOOP-12760.02.patch, HADOOP-12760.03.patch, HADOOP-12760.04.patch, 
> HADOOP-12760.05.patch, HADOOP-12760.06.patch, HADOOP-12760.07.patch, 
> HADOOP-12760.08.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> This is a heads-up: there are upcoming changes in JDK 9 that will require, at 
> least, a small update to org.apache.hadoop.crypto.CryptoStreamUtils & 
> org.apache.hadoop.io.nativeio.NativeIO.
> OpenJDK issue no. 8148117: "Move sun.misc.Cleaner to jdk.internal.ref" [1], 
> will move the Cleaner class from sun.misc to jdk.internal.ref. There is 
> ongoing discussion about the possibility of providing a public supported API, 
> maybe in the JDK 9 timeframe, for releasing NIO direct buffer native memory, 
> see the core-libs-dev mail thread [2]. At the very least CryptoStreamUtils & 
> NativeIO [3] should be updated to have knowledge of the new location of the 
> JDK Cleaner.
> [1] https://bugs.openjdk.java.net/browse/JDK-8148117
> [2] 
> http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-January/038243.html
> [3] https://github.com/apache/hadoop/search?utf8=✓&q=sun.misc.Cleaner



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] wangyum commented on pull request #2506: [WIP] Backport HADOOP-12760 to support JDK 11

2020-12-01 Thread GitBox


wangyum commented on pull request #2506:
URL: https://github.com/apache/hadoop/pull/2506#issuecomment-736522152


   Is `branch-2.10` an active cluster?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-12760) sun.misc.Cleaner has moved to a new location in OpenJDK 9

2020-12-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12760?focusedWorklogId=518402&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-518402
 ]

ASF GitHub Bot logged work on HADOOP-12760:
---

Author: ASF GitHub Bot
Created on: 01/Dec/20 12:17
Start Date: 01/Dec/20 12:17
Worklog Time Spent: 10m 
  Work Description: aajisaka commented on pull request #2506:
URL: https://github.com/apache/hadoop/pull/2506#issuecomment-736515895


   Sorry, branch-2.7 is EoL.
   
https://cwiki.apache.org/confluence/display/HADOOP/EOL+%28End-of-life%29+Release+Branches
   
   If you create a PR for one of the active branches, I can review it.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 518402)
Time Spent: 20m  (was: 10m)

> sun.misc.Cleaner has moved to a new location in OpenJDK 9
> -
>
> Key: HADOOP-12760
> URL: https://issues.apache.org/jira/browse/HADOOP-12760
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chris Hegarty
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.2.0
>
> Attachments: HADOOP-12760.00.patch, HADOOP-12760.01.patch, 
> HADOOP-12760.02.patch, HADOOP-12760.03.patch, HADOOP-12760.04.patch, 
> HADOOP-12760.05.patch, HADOOP-12760.06.patch, HADOOP-12760.07.patch, 
> HADOOP-12760.08.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> This is a heads-up: there are upcoming changes in JDK 9 that will require, at 
> least, a small update to org.apache.hadoop.crypto.CryptoStreamUtils & 
> org.apache.hadoop.io.nativeio.NativeIO.
> OpenJDK issue no. 8148117: "Move sun.misc.Cleaner to jdk.internal.ref" [1], 
> will move the Cleaner class from sun.misc to jdk.internal.ref. There is 
> ongoing discussion about the possibility of providing a public supported API, 
> maybe in the JDK 9 timeframe, for releasing NIO direct buffer native memory, 
> see the core-libs-dev mail thread [2]. At the very least CryptoStreamUtils & 
> NativeIO [3] should be updated to have knowledge of the new location of the 
> JDK Cleaner.
> [1] https://bugs.openjdk.java.net/browse/JDK-8148117
> [2] 
> http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-January/038243.html
> [3] https://github.com/apache/hadoop/search?utf8=✓&q=sun.misc.Cleaner



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] aajisaka commented on pull request #2506: [WIP] Backport HADOOP-12760 to support JDK 11

2020-12-01 Thread GitBox


aajisaka commented on pull request #2506:
URL: https://github.com/apache/hadoop/pull/2506#issuecomment-736515895


   Sorry, branch-2.7 is EoL.
   
https://cwiki.apache.org/confluence/display/HADOOP/EOL+%28End-of-life%29+Release+Branches
   
   If you create a PR for one of the active branches, I can review it.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12760) sun.misc.Cleaner has moved to a new location in OpenJDK 9

2020-12-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-12760:

Labels: pull-request-available  (was: )

> sun.misc.Cleaner has moved to a new location in OpenJDK 9
> -
>
> Key: HADOOP-12760
> URL: https://issues.apache.org/jira/browse/HADOOP-12760
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chris Hegarty
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.2.0
>
> Attachments: HADOOP-12760.00.patch, HADOOP-12760.01.patch, 
> HADOOP-12760.02.patch, HADOOP-12760.03.patch, HADOOP-12760.04.patch, 
> HADOOP-12760.05.patch, HADOOP-12760.06.patch, HADOOP-12760.07.patch, 
> HADOOP-12760.08.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This is a heads-up: there are upcoming changes in JDK 9 that will require, at 
> least, a small update to org.apache.hadoop.crypto.CryptoStreamUtils & 
> org.apache.hadoop.io.nativeio.NativeIO.
> OpenJDK issue no. 8148117: "Move sun.misc.Cleaner to jdk.internal.ref" [1], 
> will move the Cleaner class from sun.misc to jdk.internal.ref. There is 
> ongoing discussion about the possibility of providing a public supported API, 
> maybe in the JDK 9 timeframe, for releasing NIO direct buffer native memory, 
> see the core-libs-dev mail thread [2]. At the very least CryptoStreamUtils & 
> NativeIO [3] should be updated to have knowledge of the new location of the 
> JDK Cleaner.
> [1] https://bugs.openjdk.java.net/browse/JDK-8148117
> [2] 
> http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-January/038243.html
> [3] https://github.com/apache/hadoop/search?utf8=✓&q=sun.misc.Cleaner



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-12760) sun.misc.Cleaner has moved to a new location in OpenJDK 9

2020-12-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12760?focusedWorklogId=518398&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-518398
 ]

ASF GitHub Bot logged work on HADOOP-12760:
---

Author: ASF GitHub Bot
Created on: 01/Dec/20 12:04
Start Date: 01/Dec/20 12:04
Worklog Time Spent: 10m 
  Work Description: wangyum opened a new pull request #2506:
URL: https://github.com/apache/hadoop/pull/2506


   This PR backports 
[HADOOP-12760](https://github.com/apache/hadoop/commit/5d084d7eca32cfa647a78ff6ed3c378659f5b186)
 to support JDK 11. Without it, the following exceptions are thrown:
   ```
   Exception in thread "DataStreamer for file 
/tmp/hive-b_carmel/b_carmel/47ca70a2-7790-4acb-aea2-617efafaa54d/inuse.info" 
java.lang.NoSuchMethodError: 'sun.misc.Cleaner 
sun.nio.ch.DirectBuffer.cleaner()'
at 
org.apache.hadoop.crypto.CryptoStreamUtils.freeDB(CryptoStreamUtils.java:41)
at 
org.apache.hadoop.crypto.CryptoInputStream.freeBuffers(CryptoInputStream.java:683)
at 
org.apache.hadoop.crypto.CryptoInputStream.close(CryptoInputStream.java:317)
at java.base/java.io.FilterInputStream.close(FilterInputStream.java:179)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeStream(DFSOutputStream.java:746)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeInternal(DFSOutputStream.java:694)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:689)
   Exception in thread "DataStreamer for file 
/user/b_carmel/.sparkStaging/application_1605512824298_0222/__spark_libs__13869030055338205442.zip"
 java.lang.NoSuchMethodError: 'sun.misc.Cleaner 
sun.nio.ch.DirectBuffer.cleaner()'
at 
org.apache.hadoop.crypto.CryptoStreamUtils.freeDB(CryptoStreamUtils.java:41)
at 
org.apache.hadoop.crypto.CryptoInputStream.freeBuffers(CryptoInputStream.java:683)
at 
org.apache.hadoop.crypto.CryptoInputStream.close(CryptoInputStream.java:317)
at java.base/java.io.FilterInputStream.close(FilterInputStream.java:179)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeStream(DFSOutputStream.java:746)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeInternal(DFSOutputStream.java:694)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:689)
   Exception in thread "DataStreamer for file 
/user/b_carmel/.sparkStaging/application_1605512824298_0222/__spark_conf__.zip" 
java.lang.NoSuchMethodError: 'sun.misc.Cleaner 
sun.nio.ch.DirectBuffer.cleaner()'
at 
org.apache.hadoop.crypto.CryptoStreamUtils.freeDB(CryptoStreamUtils.java:41)
at 
org.apache.hadoop.crypto.CryptoInputStream.freeBuffers(CryptoInputStream.java:683)
at 
org.apache.hadoop.crypto.CryptoInputStream.close(CryptoInputStream.java:317)
at java.base/java.io.FilterInputStream.close(FilterInputStream.java:179)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeStream(DFSOutputStream.java:746)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeInternal(DFSOutputStream.java:694)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:689)
   ```
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 518398)
Remaining Estimate: 0h
Time Spent: 10m

> sun.misc.Cleaner has moved to a new location in OpenJDK 9
> -
>
> Key: HADOOP-12760
> URL: https://issues.apache.org/jira/browse/HADOOP-12760
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chris Hegarty
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.2.0
>
> Attachments: HADOOP-12760.00.patch, HADOOP-12760.01.patch, 
> HADOOP-12760.02.patch, HADOOP-12760.03.patch, HADOOP-12760.04.patch, 
> HADOOP-12760.05.patch, HADOOP-12760.06.patch, HADOOP-12760.07.patch, 
> HADOOP-12760.08.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This is a heads-up: there are upcoming changes in JDK 9 that will require, at 
> least, a small update to org.apache.hadoop.crypto.CryptoStreamUtils & 
> org.apache.hadoop.io.nativeio.NativeIO.
> OpenJDK issue no. 8148117: "Move sun.misc.Cleaner to jdk.internal.ref" [1], 
> will move the Cleaner class from sun.misc to jdk.internal.ref. There is 
> ongoing discussion about the possibility of providing a public supported API, 
> maybe in the JDK 9 timeframe, for releasing NIO direct buffer native memory, 
> see the core-libs-dev mail thread [2]. At the very least CryptoStreamUtils & 
> NativeIO [3] should be updated to have knowledge of the new location of the 
> JDK Cleaner.
> [1] https://bugs.openjdk.java.net/browse/JDK-8148117
> [2] 
> http://mail.openjdk.java.net/pipermail/core-libs-dev/2016-January/038243.html
> [3] https://github.com/apache/hadoop/search?utf8=✓&q=sun.misc.Cleaner

[GitHub] [hadoop] wangyum opened a new pull request #2506: [WIP] Backport HADOOP-12760 to support JDK 11

2020-12-01 Thread GitBox


wangyum opened a new pull request #2506:
URL: https://github.com/apache/hadoop/pull/2506


   This PR backports 
[HADOOP-12760](https://github.com/apache/hadoop/commit/5d084d7eca32cfa647a78ff6ed3c378659f5b186)
 to support JDK 11. Without it, the following exceptions are thrown:
   ```
   Exception in thread "DataStreamer for file 
/tmp/hive-b_carmel/b_carmel/47ca70a2-7790-4acb-aea2-617efafaa54d/inuse.info" 
java.lang.NoSuchMethodError: 'sun.misc.Cleaner 
sun.nio.ch.DirectBuffer.cleaner()'
at 
org.apache.hadoop.crypto.CryptoStreamUtils.freeDB(CryptoStreamUtils.java:41)
at 
org.apache.hadoop.crypto.CryptoInputStream.freeBuffers(CryptoInputStream.java:683)
at 
org.apache.hadoop.crypto.CryptoInputStream.close(CryptoInputStream.java:317)
at java.base/java.io.FilterInputStream.close(FilterInputStream.java:179)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeStream(DFSOutputStream.java:746)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeInternal(DFSOutputStream.java:694)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:689)
   Exception in thread "DataStreamer for file 
/user/b_carmel/.sparkStaging/application_1605512824298_0222/__spark_libs__13869030055338205442.zip"
 java.lang.NoSuchMethodError: 'sun.misc.Cleaner 
sun.nio.ch.DirectBuffer.cleaner()'
at 
org.apache.hadoop.crypto.CryptoStreamUtils.freeDB(CryptoStreamUtils.java:41)
at 
org.apache.hadoop.crypto.CryptoInputStream.freeBuffers(CryptoInputStream.java:683)
at 
org.apache.hadoop.crypto.CryptoInputStream.close(CryptoInputStream.java:317)
at java.base/java.io.FilterInputStream.close(FilterInputStream.java:179)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeStream(DFSOutputStream.java:746)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeInternal(DFSOutputStream.java:694)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:689)
   Exception in thread "DataStreamer for file 
/user/b_carmel/.sparkStaging/application_1605512824298_0222/__spark_conf__.zip" 
java.lang.NoSuchMethodError: 'sun.misc.Cleaner 
sun.nio.ch.DirectBuffer.cleaner()'
at 
org.apache.hadoop.crypto.CryptoStreamUtils.freeDB(CryptoStreamUtils.java:41)
at 
org.apache.hadoop.crypto.CryptoInputStream.freeBuffers(CryptoInputStream.java:683)
at 
org.apache.hadoop.crypto.CryptoInputStream.close(CryptoInputStream.java:317)
at java.base/java.io.FilterInputStream.close(FilterInputStream.java:179)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeStream(DFSOutputStream.java:746)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeInternal(DFSOutputStream.java:694)
at 
org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:689)
   ```
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2490: YARN-10499. TestRouterWebServiceREST fails

2020-12-01 Thread GitBox


hadoop-yetus commented on pull request #2490:
URL: https://github.com/apache/hadoop/pull/2490#issuecomment-736506371


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 34s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch 
appears to include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m 37s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m 49s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   3m 47s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   2m 56s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 59s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 32s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 18s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 44s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 34s |  |  trunk passed  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   3m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 45s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   2m 45s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 51s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  15m  3s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   2m 39s |  |  the patch passed  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  89m 30s |  |  
hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  unit  |   2m 40s |  |  hadoop-yarn-server-router in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 33s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 191m 36s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2490/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2490 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux ac8de38a9019 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6a1d7d9ed25 |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2490/2/testReport/ |
   | Max. process+thread count | 930 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router 
U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2490/2/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |

[GitHub] [hadoop] aajisaka commented on a change in pull request #2490: YARN-10499. TestRouterWebServiceREST fails

2020-12-01 Thread GitBox


aajisaka commented on a change in pull request #2490:
URL: https://github.com/apache/hadoop/pull/2490#discussion_r533161628



##
File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/placement/CSMappingPlacementRule.java
##
@@ -181,6 +181,10 @@ private void setupGroupsForVariableContext(VariableContext vctx, String user)
      return;
    }
    Set<String> groupsSet = groups.getGroupsSet(user);
+    if (groupsSet.isEmpty()) {
+      LOG.warn("There are no groups for user {}", user);
+      return;
+    }

Review comment:
   Added it. Thank you @shuzirra





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2494: YARN-10380: Import logic of multi-node allocation in CapacityScheduler

2020-12-01 Thread GitBox


hadoop-yetus commented on pull request #2494:
URL: https://github.com/apache/hadoop/pull/2494#issuecomment-736312910


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 32s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 38s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 59s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   0m 57s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 45s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  0s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 13s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 48s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m 46s |  |  trunk passed  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 52s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 51s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   0m 51s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 45s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 49s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  15m  6s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   1m 46s |  |  the patch passed  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  |  89m  3s | 
[/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2494/3/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt)
 |  hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 171m 25s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2494/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2494 |
   | JIRA Issue | YARN-10380 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 4b7670ffb262 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6a1d7d9ed25 |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2494/3/testReport/ |
   | Max. process+thread count | 891 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/ha
