[jira] [Resolved] (HDFS-16429) Add DataSetLockManager to manage fine-grain locks for FsDataSetImpl

2022-01-27 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He resolved HDFS-16429.

Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

Committed to trunk. Thanks [~Aiphag0] for your contributions!

> Add DataSetLockManager to manage fine-grain locks for FsDataSetImpl
> ---
>
> Key: HDFS-16429
> URL: https://issues.apache.org/jira/browse/HDFS-16429
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: hdfs
>Reporter: Mingxiang Li
>Assignee: Mingxiang Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> 1. Use a lock manager to maintain a two-level lock for FsDataSetImpl.
> The lock model is simple; parts of it are implemented as follows (a minimal
> sketch of the acquisition pattern appears after this description):
>  * finalizeReplica(), append(), and createRbw() first acquire the
> BlockPoolLock read lock, then the BlockPoolLock-volume-lock write lock.
>  * getStoredBlock() and getMetaDataInputStream() first acquire the
> BlockPoolLock read lock, then the BlockPoolLock-volume-lock read lock.
>  * deepCopyReplica() and getBlockReports() acquire only the BlockPoolLock read lock.
>  * delete holds the BlockPoolLock write lock.
> 2. Make LightWeightResizableGSet thread safe. Making it thread safe does not
> turn it into a performance bottleneck, and it lets us reduce the lock
> granularity for ReplicaMap.
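For illustration, here is a minimal Java sketch of the two-level acquisition
pattern described above, built on plain ReentrantReadWriteLocks. The class and
method names below are hypothetical and do not mirror the actual
DataSetLockManager API introduced by this change.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical two-level lock: one block-pool-level lock plus one lock per volume.
public class TwoLevelLockSketch {
  private final ReadWriteLock blockPoolLock = new ReentrantReadWriteLock();
  private final Map<String, ReadWriteLock> volumeLocks = new ConcurrentHashMap<>();

  private ReadWriteLock volumeLock(String volume) {
    return volumeLocks.computeIfAbsent(volume, v -> new ReentrantReadWriteLock());
  }

  // Pattern for finalizeReplica()/append()/createRbw():
  // block-pool read lock first, then the per-volume write lock.
  public void runWithVolumeWriteLock(String volume, Runnable action) {
    blockPoolLock.readLock().lock();
    try {
      ReadWriteLock vl = volumeLock(volume);
      vl.writeLock().lock();
      try {
        action.run();
      } finally {
        vl.writeLock().unlock();
      }
    } finally {
      blockPoolLock.readLock().unlock();
    }
  }

  // Pattern for delete: the block-pool write lock alone excludes all of the
  // readers above, so no per-volume lock is needed.
  public void runWithPoolWriteLock(Runnable action) {
    blockPoolLock.writeLock().lock();
    try {
      action.run();
    } finally {
      blockPoolLock.writeLock().unlock();
    }
  }
}
```

Under such a scheme, operations on different volumes can proceed concurrently
because they only share the block-pool read lock, while a delete takes the
block-pool write lock and waits for all of them to drain.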



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64

2022-01-27 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/555/

[Jan 26, 2022 4:24:42 PM] (noreply) HADOOP-18093. Better exception handling for 
testFileStatusOnMountLink() in ViewFsBaseTest.java (#3918). Contributed by Xing 
Lin. (#3933)




-1 overall


The following subsystems voted -1:
asflicense hadolint mvnsite pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.io.compress.snappy.TestSnappyCompressorDecompressor 
   hadoop.fs.TestFileUtil 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   
hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.contrib.bkjournal.TestBookKeeperJournalManager 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.contrib.bkjournal.TestBookKeeperJournalManager 
   hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat 
   hadoop.hdfs.server.federation.router.TestRouterQuota 
   hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver 
   hadoop.hdfs.server.federation.resolver.order.TestLocalResolver 
   hadoop.yarn.server.resourcemanager.TestClientRMService 
   
hadoop.yarn.server.resourcemanager.monitor.invariants.TestMetricsInvariantChecker
 
   hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter 
   hadoop.mapreduce.lib.input.TestLineRecordReader 
   hadoop.mapred.TestLineRecordReader 
   hadoop.yarn.sls.TestSLSRunner 
   hadoop.resourceestimator.solver.impl.TestLpSolver 
   hadoop.resourceestimator.service.TestResourceEstimatorService 
  

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/555/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/555/artifact/out/diff-compile-javac-root.txt
  [476K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/555/artifact/out/diff-checkstyle-root.txt
  [14M]

   hadolint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/555/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   mvnsite:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/555/artifact/out/patch-mvnsite-root.txt
  [560K]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/555/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/555/artifact/out/diff-patch-pylint.txt
  [20K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/555/artifact/out/diff-patch-shellcheck.txt
  [72K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/555/artifact/out/whitespace-eol.txt
  [12M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/555/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/555/artifact/out/patch-javadoc-root.txt
  [40K]

   unit:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/555/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [224K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/555/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [424K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/555/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [16K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/555/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
  [36K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/555/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
  [20K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/555/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [112K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/555/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt
  [104K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/555/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt
  [20K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/555/a

Re: [VOTE] Release Apache Hadoop 3.3.2 - RC3

2022-01-27 Thread Mukund Madhav Thakur
* +1 binding *

Checked out the RC tag and compiled it successfully (mvn clean package
-Pdist -DskipTests -Dtar -Dmaven.javadoc.skip=true).

Ran abfs and azure integration tests. All good.

Compiled gcs against Hadoop 3.3.2 from staging successfully.

Tested list, get, put, and delete operations on s3, gcs, and abfs buckets.



On Thu, Jan 27, 2022 at 12:46 AM Chao Sun  wrote:

> Hi all,
>
> I've put together Hadoop 3.3.2 RC3 below:
>
> The RC is available at:
> http://people.apache.org/~sunchao/hadoop-3.3.2-RC3/
> The RC tag is at:
> https://github.com/apache/hadoop/releases/tag/release-3.3.2-RC3
> The Maven artifacts are staged at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1333
>
> You can find my public key at:
> https://downloads.apache.org/hadoop/common/KEYS
>
> The only delta between this and RC2 is the addition of the following fix:
>   - HADOOP-18094. Disable S3A auditing by default.
>
> I've done the same tests as in RC2 and they look good:
> - Ran all the unit tests
> - Started a single node HDFS cluster and tested a few simple commands
> - Ran all the tests in Spark using the RC2 artifacts
>
> Please evaluate the RC and vote, thanks!
>
> Best,
> Chao
>


Re: [VOTE] Release Apache Hadoop 3.3.2 - RC3

2022-01-27 Thread Viraj Jasani
+1 (non-binding)

* Signature: ok
* Checksum: ok
* Build from source: ok
* Ran some large tests from hbase 2.5 branch against RC2: looks good (carry
forward from previous RC)
* HDFS functional tests: ok
* Ran a couple of MapReduce Jobs: ok
* ATSv2 functional tests: ok


On Thu, Jan 27, 2022 at 12:47 AM Chao Sun  wrote:

> Hi all,
>
> I've put together Hadoop 3.3.2 RC3 below:
>
> The RC is available at:
> http://people.apache.org/~sunchao/hadoop-3.3.2-RC3/
> The RC tag is at:
> https://github.com/apache/hadoop/releases/tag/release-3.3.2-RC3
> The Maven artifacts are staged at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1333
>
> You can find my public key at:
> https://downloads.apache.org/hadoop/common/KEYS
>
> The only delta between this and RC2 is the addition of the following fix:
>   - HADOOP-18094. Disable S3A auditing by default.
>
> I've done the same tests as in RC2 and they look good:
> - Ran all the unit tests
> - Started a single node HDFS cluster and tested a few simple commands
> - Ran all the tests in Spark using the RC2 artifacts
>
> Please evaluate the RC and vote, thanks!
>
> Best,
> Chao
>


Re: [VOTE] Release Apache Hadoop 3.3.2 - RC3

2022-01-27 Thread Viraj Jasani
> * ATSv2 functional tests: ok

Ran against hbase 1.7 cluster

On Thu, Jan 27, 2022 at 5:03 PM Viraj Jasani  wrote:

> +1 (non-binding)
>
> * Signature: ok
> * Checksum: ok
> * Build from source: ok
> * Ran some large tests from hbase 2.5 branch against RC2: looks good
> (carry forward from previous RC)
> * HDFS functional tests: ok
> * Ran a couple of MapReduce Jobs: ok
> * ATSv2 functional tests: ok
>
>
> On Thu, Jan 27, 2022 at 12:47 AM Chao Sun  wrote:
>
>> Hi all,
>>
>> I've put together Hadoop 3.3.2 RC3 below:
>>
>> The RC is available at:
>> http://people.apache.org/~sunchao/hadoop-3.3.2-RC3/
>> The RC tag is at:
>> https://github.com/apache/hadoop/releases/tag/release-3.3.2-RC3
>> The Maven artifacts are staged at:
>> https://repository.apache.org/content/repositories/orgapachehadoop-1333
>>
>> You can find my public key at:
>> https://downloads.apache.org/hadoop/common/KEYS
>>
>> The only delta between this and RC2 is the addition of the following fix:
>>   - HADOOP-18094. Disable S3A auditing by default.
>>
>> I've done the same tests as in RC2 and they look good:
>> - Ran all the unit tests
>> - Started a single node HDFS cluster and tested a few simple commands
>> - Ran all the tests in Spark using the RC2 artifacts
>>
>> Please evaluate the RC and vote, thanks!
>>
>> Best,
>> Chao
>>
>


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2022-01-27 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/763/

[Jan 26, 2022 3:24:16 AM] (noreply) HADOOP-18089. Test coverage for Async 
profiler servlets (#3913)
[Jan 26, 2022 8:24:09 AM] (noreply) HDFS-16398. Reconfig block report 
parameters for datanode (#3831)




-1 overall


The following subsystems voted -1:
blanks pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

Failed junit tests :

   hadoop.yarn.server.router.clientrm.TestFederationClientInterceptor 
   hadoop.yarn.csi.client.TestCsiClient 
  

   cc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/763/artifact/out/results-compile-cc-root.txt
 [96K]

   javac:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/763/artifact/out/results-compile-javac-root.txt
 [340K]

   blanks:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/763/artifact/out/blanks-eol.txt
 [13M]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/763/artifact/out/blanks-tabs.txt
 [2.0M]

   checkstyle:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/763/artifact/out/results-checkstyle-root.txt
 [14M]

   pathlen:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/763/artifact/out/results-pathlen.txt
 [16K]

   pylint:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/763/artifact/out/results-pylint.txt
 [20K]

   shellcheck:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/763/artifact/out/results-shellcheck.txt
 [28K]

   xml:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/763/artifact/out/xml.txt
 [24K]

   javadoc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/763/artifact/out/results-javadoc-javadoc-root.txt
 [404K]

   unit:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/763/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt
 [28K]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/763/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-csi.txt
 [20K]

Powered by Apache Yetus 0.14.0-SNAPSHOT   https://yetus.apache.org

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org

[jira] [Created] (HDFS-16442) TestBlockTokenWithShortCircuitRead.testShortCircuitReadWithInvalidToken fails

2022-01-27 Thread Kevin Wikant (Jira)
Kevin Wikant created HDFS-16442:
---

 Summary: 
TestBlockTokenWithShortCircuitRead.testShortCircuitReadWithInvalidToken fails
 Key: HDFS-16442
 URL: https://issues.apache.org/jira/browse/HDFS-16442
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs
Reporter: Kevin Wikant


[https://ci-hadoop.apache.org/blue/organizations/jenkins/hadoop-multibranch/detail/PR-3920/2/pipeline]

 

[https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3920/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt]

 

```
[ERROR] Failures: 
[ERROR]   
TestBlockTokenWithShortCircuitRead.testShortCircuitReadWithInvalidToken:153->checkSlotsAfterSSRWithTokenExpiration:178->checkShmAndSlots:184
 expected:<1> but was:<2>
[ERROR]   TestDirectoryScanner.testThrottling:727 Throttle is too permissive
[INFO] 
[ERROR] Tests run: 6208, Failures: 2, Errors: 0, Skipped: 22
```



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 3.3.2 - RC3

2022-01-27 Thread Masatake Iwasaki

Thanks for putting this up, Chao Sun.

I got the following error when building the RC3 source tarball.
It is reproducible even in the container launched by `./start-build-env.sh`.
There seems to be no relevant diff between release-3.3.2-RC0 and 
release-3.3.2-RC3 (and trunk)
under hadoop-yarn-applications-catalog-webapp.

I guess developers who have caches of the related artifacts under ~/.m2 did not
see this?

```
$ mvn clean install -DskipTests -Pnative -Pdist
...
[INFO] Installing node version v8.11.3
[INFO] Downloading 
https://nodejs.org/dist/v8.11.3/node-v8.11.3-linux-x64.tar.gz to 
/home/centos/.m2/repository/com/github/eirslett/node/8.11.3/node-8.11.3-linux-x64.tar.gz
[INFO] No proxies configured
[INFO] No proxy was configured, downloading directly
[INFO] Unpacking 
/home/centos/.m2/repository/com/github/eirslett/node/8.11.3/node-8.11.3-linux-x64.tar.gz
 into 
/home/centos/srcs/hadoop-3.3.2-src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/target/node/tmp
[INFO] Copying node binary from 
/home/centos/srcs/hadoop-3.3.2-src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/target/node/tmp/node-v8.11.3-linux-x64/bin/node
 to 
/home/centos/srcs/hadoop-3.3.2-src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/target/node/node
[INFO] Installed node locally.
[INFO] Installing Yarn version v1.7.0
[INFO] Downloading 
https://github.com/yarnpkg/yarn/releases/download/v1.7.0/yarn-v1.7.0.tar.gz to 
/home/centos/.m2/repository/com/github/eirslett/yarn/1.7.0/yarn-1.7.0.tar.gz
[INFO] No proxies configured
[INFO] No proxy was configured, downloading directly
[INFO] Unpacking 
/home/centos/.m2/repository/com/github/eirslett/yarn/1.7.0/yarn-1.7.0.tar.gz 
into 
/home/centos/srcs/hadoop-3.3.2-src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/target/node/yarn
[INFO] Installed Yarn locally.
[INFO]
[INFO] --- frontend-maven-plugin:1.11.2:yarn (yarn install) @ 
hadoop-yarn-applications-catalog-webapp ---
[INFO] testFailureIgnore property is ignored in non test phases
[INFO] Running 'yarn ' in 
/home/centos/srcs/hadoop-3.3.2-src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/target
[INFO] yarn install v1.7.0
[INFO] info No lockfile found.
[INFO] [1/4] Resolving packages...
[INFO] [2/4] Fetching packages...
[INFO] error safe-stable-stringify@2.3.1: The engine "node" is incompatible with this 
module. Expected version ">=10".
[INFO] error safe-stable-stringify@2.3.1: The engine "node" is incompatible with this 
module. Expected version ">=10".info Visit https://yarnpkg.com/en/docs/cli/install for 
documentation about this command.
[INFO] error Found incompatible module
[INFO] 
```

Masatake Iwasaki


On 2022/01/27 4:16, Chao Sun wrote:

Hi all,

I've put together Hadoop 3.3.2 RC3 below:

The RC is available at: http://people.apache.org/~sunchao/hadoop-3.3.2-RC3/
The RC tag is at:
https://github.com/apache/hadoop/releases/tag/release-3.3.2-RC3
The Maven artifacts are staged at:
https://repository.apache.org/content/repositories/orgapachehadoop-1333

You can find my public key at:
https://downloads.apache.org/hadoop/common/KEYS

The only delta between this and RC2 is the addition of the following fix:
   - HADOOP-18094. Disable S3A auditing by default.

I've done the same tests as in RC2 and they look good:
- Ran all the unit tests
- Started a single node HDFS cluster and tested a few simple commands
- Ran all the tests in Spark using the RC2 artifacts

Please evaluate the RC and vote, thanks!

Best,
Chao



-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 3.3.2 - RC3

2022-01-27 Thread Akira Ajisaka
Hi Masatake,

I faced the same error in a clean environment and
https://issues.apache.org/jira/browse/YARN-10561 should fix this issue.
I'll rebase the patch shortly.

By the way, I'm afraid there is no active maintainer of the
hadoop-yarn-applications-catalog module. The module provides a sample
application catalog, so I think we can move it to a separate repository.
Of course, that should be discussed separately.

Thanks and regards,
Akira

On Fri, Jan 28, 2022 at 1:39 PM Masatake Iwasaki <
iwasak...@oss.nttdata.co.jp> wrote:

> Thanks for putting this up, Chao Sun.
>
> I got the following error when building the RC3 source tarball.
> It is reproducible even in the container launched by
> `./start-build-env.sh`.
> There seems to be no relevant diff between release-3.3.2-RC0 and
> release-3.3.2-RC3 (and trunk)
> under hadoop-yarn-applications-catalog-webapp.
>
> I guess developers who have caches of the related artifacts under ~/.m2 did
> not see this?
>
> ```
> $ mvn clean install -DskipTests -Pnative -Pdist
> ...
> [INFO] Installing node version v8.11.3
> [INFO] Downloading
> https://nodejs.org/dist/v8.11.3/node-v8.11.3-linux-x64.tar.gz to
> /home/centos/.m2/repository/com/github/eirslett/node/8.11.3/node-8.11.3-linux-x64.tar.gz
> [INFO] No proxies configured
> [INFO] No proxy was configured, downloading directly
> [INFO] Unpacking
> /home/centos/.m2/repository/com/github/eirslett/node/8.11.3/node-8.11.3-linux-x64.tar.gz
> into
> /home/centos/srcs/hadoop-3.3.2-src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/target/node/tmp
> [INFO] Copying node binary from
> /home/centos/srcs/hadoop-3.3.2-src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/target/node/tmp/node-v8.11.3-linux-x64/bin/node
> to
> /home/centos/srcs/hadoop-3.3.2-src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/target/node/node
> [INFO] Installed node locally.
> [INFO] Installing Yarn version v1.7.0
> [INFO] Downloading
> https://github.com/yarnpkg/yarn/releases/download/v1.7.0/yarn-v1.7.0.tar.gz
> to
> /home/centos/.m2/repository/com/github/eirslett/yarn/1.7.0/yarn-1.7.0.tar.gz
> [INFO] No proxies configured
> [INFO] No proxy was configured, downloading directly
> [INFO] Unpacking
> /home/centos/.m2/repository/com/github/eirslett/yarn/1.7.0/yarn-1.7.0.tar.gz
> into
> /home/centos/srcs/hadoop-3.3.2-src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/target/node/yarn
> [INFO] Installed Yarn locally.
> [INFO]
> [INFO] --- frontend-maven-plugin:1.11.2:yarn (yarn install) @
> hadoop-yarn-applications-catalog-webapp ---
> [INFO] testFailureIgnore property is ignored in non test phases
> [INFO] Running 'yarn ' in
> /home/centos/srcs/hadoop-3.3.2-src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/target
> [INFO] yarn install v1.7.0
> [INFO] info No lockfile found.
> [INFO] [1/4] Resolving packages...
> [INFO] [2/4] Fetching packages...
> [INFO] error safe-stable-stringify@2.3.1: The engine "node" is
> incompatible with this module. Expected version ">=10".
> [INFO] error safe-stable-stringify@2.3.1: The engine "node" is
> incompatible with this module. Expected version ">=10".info Visit
> https://yarnpkg.com/en/docs/cli/install for documentation about this
> command.
> [INFO] error Found incompatible module
> [INFO]
> 
> ```
>
> Masatake Iwasaki
>
>
> On 2022/01/27 4:16, Chao Sun wrote:
> > Hi all,
> >
> > I've put together Hadoop 3.3.2 RC3 below:
> >
> > The RC is available at:
> http://people.apache.org/~sunchao/hadoop-3.3.2-RC3/
> > The RC tag is at:
> > https://github.com/apache/hadoop/releases/tag/release-3.3.2-RC3
> > The Maven artifacts are staged at:
> > https://repository.apache.org/content/repositories/orgapachehadoop-1333
> >
> > You can find my public key at:
> > https://downloads.apache.org/hadoop/common/KEYS
> >
> > The only delta between this and RC2 is the addition of the following fix:
> >- HADOOP-18094. Disable S3A auditing by default.
> >
> > I've done the same tests as in RC2 and they look good:
> > - Ran all the unit tests
> > - Started a single node HDFS cluster and tested a few simple commands
> > - Ran all the tests in Spark using the RC2 artifacts
> >
> > Please evaluate the RC and vote, thanks!
> >
> > Best,
> > Chao
> >
>
> -
> To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
>
>


Apache Hadoop qbt Report: trunk+JDK11 on Linux/x86_64

2022-01-27 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/249/

[Jan 25, 2022 7:26:18 AM] (noreply) HDFS-16428. Source path with storagePolicy 
cause wrong typeConsumed while rename (#3898). Contributed by lei w.
[Jan 25, 2022 11:42:35 AM] (noreply) HDFS-16262. Async refresh of cached 
locations in DFSInputStream (#3527)
[Jan 25, 2022 1:51:17 PM] (noreply) HDFS-16401.Remove the worthless 
DatasetVolumeChecker#numAsyncDatasetChecks. (#3838)
[Jan 25, 2022 2:10:18 PM] (noreply) HADOOP-18093. Better exception handling for 
testFileStatusOnMountLink() in ViewFsBaseTest.java (#3918). Contributed by Xing 
Lin.
[Jan 25, 2022 5:25:18 PM] (noreply) YARN-11034. Add enhanced headroom in 
AllocateResponse (#3766)
[Jan 26, 2022 3:24:16 AM] (noreply) HADOOP-18089. Test coverage for Async 
profiler servlets (#3913)
[Jan 26, 2022 8:24:09 AM] (noreply) HDFS-16398. Reconfig block report 
parameters for datanode (#3831)
[Jan 27, 2022 4:42:44 AM] (noreply) HDFS-16427. Add debug log for 
BlockManager#chooseExcessRedundancyStriped (#3888)




-1 overall


The following subsystems voted -1:
blanks mvnsite pathlen spotbugs unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

spotbugs :

   module:hadoop-hdfs-project/hadoop-hdfs 
   Redundant nullcheck of oldLock, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory)
 Redundant null check at DataStorage.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory)
 Redundant null check at DataStorage.java:[line 695] 
   Redundant nullcheck of metaChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MappableBlockLoader.verifyChecksum(long,
 FileInputStream, FileChannel, String) Redundant null check at 
MappableBlockLoader.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MappableBlockLoader.verifyChecksum(long,
 FileInputStream, FileChannel, String) Redundant null check at 
MappableBlockLoader.java:[line 138] 
   Redundant nullcheck of blockChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MemoryMappableBlockLoader.load(long,
 FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null 
check at MemoryMappableBlockLoader.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MemoryMappableBlockLoader.load(long,
 FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null 
check at MemoryMappableBlockLoader.java:[line 75] 
   Redundant nullcheck of blockChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.load(long,
 FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null 
check at NativePmemMappableBlockLoader.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.load(long,
 FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null 
check at NativePmemMappableBlockLoader.java:[line 85] 
   Redundant nullcheck of metaChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.verifyChecksumAndMapBlock(NativeIO$POSIX$PmemMappedRegion,
 long, FileInputStream, FileChannel, String) Redundant null check at 
NativePmemMappableBlockLoader.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.verifyChecksumAndMapBlock(NativeIO$POSIX$PmemMappedRegion,
 long, FileInputStream, FileChannel, String) Redundant null check at 
NativePmemMappableBlockLoader.java:[line 130] 
   
org.apache.hadoop.hdfs.server.