Apache Hadoop qbt Report: trunk+JDK11 on Linux/x86_64

2020-12-02 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/77/

[Dec 2, 2020 7:59:00 PM] (noreply) HDFS-15695. NN should not let the balancer 
run in safemode (#2489). Contributed by Daryn Sharp and Ahmed Hussein
[Dec 2, 2020 9:38:20 PM] (noreply) HDFS-15703. Don't generate edits for set 
operations that are no-op (#2508). Contributed by Daryn Sharp and Ahmed Hussein
[Dec 2, 2020 11:53:09 PM] (noreply) HDFS-14904. Add Option to let Balancer 
prefer highly utilized nodes in each iteration (#2483). Contributed by Leon Gao.


ERROR: File 'out/email-report.txt' does not exist

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org

Re: [VOTE] Release Apache Hadoop 3.2.2 - RC3

2020-12-02 Thread Xiaoqiao He
Thanks Chao Sun for your quick response.

If there are no more comments, I will wait for HADOOP-16080 to be resolved and
prepare another RC.
BTW, if any other issues should be included in 3.2.2, please let me know
ASAP. Thanks again.

Regards.
- He Xiaoqiao

On Wed, Dec 2, 2020 at 2:55 PM Chao Sun wrote:

> Thanks Xiaoqiao for the work! Unfortunately we just discovered an issue
> (HADOOP-16080) which prevents Spark from consuming this release. I'm
> working on a PR for this and wondering if we can include the fix in the
> release too.
>
> Thanks again and apologies for the late notice.
>
> Best,
> Chao
>
> On Mon, Nov 30, 2020 at 7:37 AM Xiaoqiao He wrote:
>
>> Hi folks,
>>
>> The release candidate (RC3) for Hadoop-3.2.2 is available now.
>> There are 22 commits[1] of difference between RC3 and RC2[2].
>>
>> The RC3 is available at:
>> http://people.apache.org/~hexiaoqiao/hadoop-3.2.2-RC3
>> The RC3 tag in github is here:
>> https://github.com/apache/hadoop/tree/release-3.2.2-RC3
>> The maven artifacts are staged at:
>> https://repository.apache.org/content/repositories/orgapachehadoop-1291
>>
>> You can find my public key at:
>> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS or
>> https://people.apache.org/keys/committer/hexiaoqiao.asc directly.
>>
>> Please try the release and vote.
>>
>> Thanks,
>> He Xiaoqiao
>>
>> [1]
>>
>> https://github.com/apache/hadoop/compare/release-3.2.2-RC2...release-3.2.2-RC3
>> [2]
>>
>> https://lists.apache.org/thread.html/r606fff445847bdb85bd60c5a73b2fb1f0433ee31b18c456a2231fcec%40%3Chdfs-dev.hadoop.apache.org%3E
>> [3]
>> https://issues.apache.org/jira/secure/Dashboard.jspa?selectPageId=12335948
>>
>
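
For anyone trying the RC: besides checking the GPG signature against the KEYS
file linked above, a quick sanity check is to recompute the SHA-512 digest of
the downloaded tarball and compare it with the published checksum. Below is a
minimal plain-JDK sketch, assuming the RC directory also carries the usual
.sha512 files; the class name and file paths are illustrative only.

  import java.io.InputStream;
  import java.nio.file.Files;
  import java.nio.file.Paths;
  import java.security.MessageDigest;

  public class VerifyRcChecksum {
      public static void main(String[] args) throws Exception {
          // args[0]: path to the downloaded tarball (illustrative usage).
          MessageDigest sha512 = MessageDigest.getInstance("SHA-512");
          try (InputStream in = Files.newInputStream(Paths.get(args[0]))) {
              byte[] buf = new byte[8192];
              int n;
              while ((n = in.read(buf)) != -1) {
                  sha512.update(buf, 0, n);
              }
          }
          StringBuilder hex = new StringBuilder();
          for (byte b : sha512.digest()) {
              hex.append(String.format("%02x", b));
          }
          // Compare this value against the published .sha512 file for the artifact.
          System.out.println(hex);
      }
  }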


[jira] [Resolved] (HDFS-14904) Add Option to let Balancer prefer highly utilized nodes in each iteration

2020-12-02 Thread Jing Zhao (Jira)


 [ https://issues.apache.org/jira/browse/HDFS-14904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jing Zhao resolved HDFS-14904.
--
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Add Option to let Balancer prefer highly utilized nodes in each iteration
> -
>
> Key: HDFS-14904
> URL: https://issues.apache.org/jira/browse/HDFS-14904
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Reporter: Leon Gao
>Assignee: Leon Gao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Normally the most important purpose of the HDFS balancer is to bring down the
> top used nodes and prevent datanode usage from getting too high.
> Currently, the balancer picks source nodes almost at random, regardless of
> usage, which makes it slow to bring down the top used datanodes when the
> cluster has few underutilized nodes (consider a cluster expansion).
> We can add an option to prefer the top used nodes first in each iteration, as
> suggested in HDFS-14894.
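
(Not the actual patch: just a minimal sketch of the idea in the description
above, assuming the balancer orders its candidate source DataNodes by
utilization, highest first, at the start of each iteration. NodeUsage and
pickSources are illustrative names, not the Balancer's real API.)

  import java.util.Comparator;
  import java.util.List;
  import java.util.stream.Collectors;

  // Illustrative helper: one DataNode and its current utilization.
  final class NodeUsage {
      final String datanode;
      final double utilizationPercent;  // used space / capacity * 100

      NodeUsage(String datanode, double utilizationPercent) {
          this.datanode = datanode;
          this.utilizationPercent = utilizationPercent;
      }
  }

  final class PreferTopUsedSources {
      // With the proposed option enabled, take the most utilized candidates
      // first as sources for this iteration, instead of an effectively
      // random order, so the hottest nodes are drained soonest.
      static List<NodeUsage> pickSources(List<NodeUsage> overUtilized, int limit) {
          return overUtilized.stream()
              .sorted(Comparator.comparingDouble((NodeUsage n) -> n.utilizationPercent).reversed())
              .limit(limit)
              .collect(Collectors.toList());
      }
  }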



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-15695) NN should not let the balancer run in safemode

2020-12-02 Thread Jim Brennan (Jira)


 [ https://issues.apache.org/jira/browse/HDFS-15695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jim Brennan resolved HDFS-15695.

Fix Version/s: 3.2.3
   3.1.5
   3.4.0
   3.3.1
   Resolution: Fixed

Thanks [~daryn] and [~ahussein]!

I have committed this to trunk, branch-3.3, branch-3.2, and branch-3.1.

 

> NN should not let the balancer run in safemode
> --
>
> Key: HDFS-15695
> URL: https://issues.apache.org/jira/browse/HDFS-15695
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0, 3.1.5, 3.2.3
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> [~daryn] reported that when the balancer moves a block, the target DN
> block-reports the new location with a hint to invalidate the replica on the
> source DN. The NN will not issue invalidations in safemode, so every moved
> block appears to be in excess. The data structures bloat and greatly increase
> the chance of a full GC.
> The NN should refuse to provide block locations to the balancer while in
> safemode.
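
(Not the committed change itself: a minimal sketch of the guard the
description asks for, where the NameNode-side handler that serves the
balancer's block requests simply refuses while in safe mode. The class and
method names are illustrative, not the real NamenodeProtocol code.)

  import java.io.IOException;

  // Illustrative stand-in for the NameNode-side handler that serves the
  // balancer's getBlocks() requests.
  class BalancerBlockService {
      private final boolean inSafeMode;

      BalancerBlockService(boolean inSafeMode) {
          this.inSafeMode = inSafeMode;
      }

      // Refuse to hand out block lists while the NN is in safe mode, so the
      // balancer cannot move blocks whose source replicas the NN would not
      // invalidate (which is what bloats the excess-replica structures).
      void checkCanGetBlocks() throws IOException {
          if (inSafeMode) {
              throw new IOException("Cannot run the balancer: the NameNode is in safe mode.");
          }
      }
  }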



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2020-12-02 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/343/

[Dec 1, 2020 9:02:53 PM] (noreply) HDFS-15694. Avoid calling 
UpdateHeartBeatState inside DataNodeDescriptor. (#2487) Contributed by Kuhu 
Shukla and Ahmed Hussein
[Dec 1, 2020 10:06:47 PM] (Eric Payne) YARN-10278: CapacityScheduler test 
framework ProportionalCapacityPreemptionPolicyMockFramework. Contributed by 
Szilard Nemeth (snemeth)




-1 overall


The following subsystems voted -1:
asflicense compile findbugs golang mvninstall mvnsite pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

Failed junit tests :

   hadoop.metrics2.source.TestJvmMetrics 
   hadoop.hdfs.TestDFSStorageStateRecovery 
   hadoop.hdfs.TestBlockStoragePolicy 
   hadoop.hdfs.TestPread 
   hadoop.hdfs.TestReadStripedFileWithMissingBlocks 
   hadoop.hdfs.TestDFSStartupVersions 
   hadoop.hdfs.TestGetBlocks 
   hadoop.hdfs.TestDFSInotifyEventInputStream 
   hadoop.hdfs.TestDistributedFileSystemWithECFileWithRandomECPolicy 
   hadoop.hdfs.TestReadStripedFileWithDNFailure 
   hadoop.hdfs.client.impl.TestBlockReaderLocal 
   hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy 
   hadoop.hdfs.TestDecommission 
   hadoop.hdfs.TestErasureCodingPolicies 
   hadoop.hdfs.web.TestWebHdfsUrl 
   hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy 
   hadoop.hdfs.TestHFlush 
   hadoop.hdfs.server.federation.router.TestRouterRpcSingleNS 
   
hadoop.hdfs.server.federation.router.TestRouterRPCMultipleDestinationMountTableResolver
 
   hadoop.hdfs.server.federation.router.TestRouterFsck 
   hadoop.hdfs.server.federation.router.TestRouterMultiRack 
   hadoop.hdfs.server.federation.router.TestRouter 
   hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup 
   hadoop.hdfs.server.federation.router.TestRouterFaultTolerant 
   hadoop.hdfs.server.federation.router.TestRouterNamenodeMonitoring 
   hadoop.hdfs.server.federation.router.TestRouterAllResolver 
   
hadoop.hdfs.server.federation.router.TestRouterMountTableCacheRefreshSecure 
   hadoop.hdfs.server.federation.router.TestRouterRPCClientRetries 
   hadoop.hdfs.server.federation.router.TestDisableNameservices 
   hadoop.hdfs.server.federation.router.TestRouterRpc 
   hadoop.hdfs.server.federation.router.TestRouterMountTableCacheRefresh 
   hadoop.hdfs.server.federation.router.TestSafeMode 
   hadoop.hdfs.server.federation.router.TestRouterClientRejectOverload 
   hadoop.yarn.server.nodemanager.TestDefaultContainerExecutor 
   hadoop.yarn.server.nodemanager.TestNodeStatusUpdaterForLabels 
   hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer 
   hadoop.yarn.server.router.webapp.TestRouterWebServicesREST 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapreduce.v2.hs.TestJobHistoryParsing 
   hadoop.yarn.service.TestYarnNativeServices 
   hadoop.tools.dynamometer.workloadgenerator.TestWorkloadGenerator 
   hadoop.tools.dynamometer.TestDynamometerInfra 
   hadoop.tools.dynamometer.TestDynamometerInfra 
   hadoop.tools.dynamometer.workloadgenerator.TestWorkloadGenerator 
   hadoop.yarn.sls.appmaster.TestAMSimulator 
  

   mvninstall:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/343/artifact/out/patch-mvninstall-root.txt
  [272K]

   compile:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/343/artifact/out/patch-compile-root.txt
  [0]

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/343/artifact/out/patch-compile-root.txt
  [0]

   golang:

   

[jira] [Resolved] (HDFS-15670) Testcase TestBalancer#testBalancerWithPinnedBlocks always fails

2020-12-02 Thread Masatake Iwasaki (Jira)


 [ https://issues.apache.org/jira/browse/HDFS-15670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Masatake Iwasaki resolved HDFS-15670.
-
Resolution: Cannot Reproduce

I'm closing this as not reproducible. I suspect it is an environmental
issue. Feel free to reopen if you have updates.

> Testcase TestBalancer#testBalancerWithPinnedBlocks always fails
> ---
>
> Key: HDFS-15670
> URL: https://issues.apache.org/jira/browse/HDFS-15670
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-beta1
>Reporter: Jianfei Jiang
>Priority: Major
> Attachments: HADOOP-15108.000.patch
>
>
> When running the test cases without any code changes,
> testBalancerWithPinnedBlocks in TestBalancer.java never succeeds. I tried
> Ubuntu 16.04 and Red Hat 7, so the failure does not seem to be tied to a
> particular Linux environment. I am not sure whether there is a bug in this
> test or whether my environment and settings are wrong. Could anyone give some advice?
> ---
> Test set: org.apache.hadoop.hdfs.server.balancer.TestBalancer
> ---
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 100.389 sec 
> <<< FAILURE! - in org.apache.hadoop.hdfs.server.balancer.TestBalancer
> testBalancerWithPinnedBlocks(org.apache.hadoop.hdfs.server.balancer.TestBalancer)
>   Time elapsed: 100.134 sec  <<< ERROR!
> java.lang.Exception: test timed out after 10 milliseconds
>   at java.lang.Object.wait(Native Method)
>   at 
> org.apache.hadoop.hdfs.DataStreamer.waitForAckedSeqno(DataStreamer.java:903)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.flushInternal(DFSOutputStream.java:773)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:870)
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:842)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>   at 
> org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
>   at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:441)
>   at 
> org.apache.hadoop.hdfs.server.balancer.TestBalancer.testBalancerWithPinnedBlocks(TestBalancer.java:515)
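
(For anyone unfamiliar with the failure signature above: the test relies on a
JUnit 4 per-test timeout, so a hang anywhere inside the test body, here while
waiting for write-pipeline ACKs in DFSTestUtil.createFile, surfaces as "test
timed out". A self-contained illustration of that mechanism, not the real
TestBalancer code:)

  import java.util.concurrent.CountDownLatch;
  import org.junit.Test;

  public class TimeoutIllustrationTest {
      // JUnit 4 fails a test with "test timed out after N milliseconds" once
      // the @Test timeout elapses, which is the signature in the report above.
      @Test(timeout = 1000)
      public void hangsLikeAStuckWritePipeline() throws Exception {
          // Never counted down: stands in for waiting forever on pipeline ACKs.
          new CountDownLatch(1).await();
      }
  }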



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64

2020-12-02 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/134/

No changes




-1 overall


The following subsystems voted -1:
asflicense hadolint jshint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml 
   hadoop-build-tools/src/main/resources/checkstyle/suppressions.xml 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-tools/hadoop-azure/src/config/checkstyle.xml 
   hadoop-tools/hadoop-resourceestimator/src/config/checkstyle.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

Failed junit tests :

   hadoop.util.TestDiskCheckerWithDiskIo 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.server.balancer.TestBalancer 
   
hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat 
   hadoop.hdfs.server.federation.resolver.order.TestLocalResolver 
   hadoop.hdfs.server.federation.router.TestRouterQuota 
   hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver 
   hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer 
   hadoop.yarn.server.resourcemanager.TestClientRMService 
   hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter 
   hadoop.resourceestimator.service.TestResourceEstimatorService 
   hadoop.resourceestimator.solver.impl.TestLpSolver 
  

   jshint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/134/artifact/out/diff-patch-jshint.txt
  [208K]

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/134/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/134/artifact/out/diff-compile-javac-root.txt
  [456K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/134/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/134/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/134/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/134/artifact/out/diff-patch-pylint.txt
  [60K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/134/artifact/out/diff-patch-shellcheck.txt
  [56K]

   shelldocs:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/134/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/134/artifact/out/whitespace-eol.txt
  [12M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/134/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/134/artifact/out/xml.txt
  [4.0K]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/134/artifact/out/diff-javadoc-javadoc-root.txt
  [20K]

   unit:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/134/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [216K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/134/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [276K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/134/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [12K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/134/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
  [36K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/134/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [116K]