[jira] [Resolved] (HDFS-16423) balancer should not get blocks on stale storages

2022-01-25 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HDFS-16423.

Fix Version/s: 3.3.3
   Resolution: Fixed

> balancer should not get blocks on stale storages
> 
>
> Key: HDFS-16423
> URL: https://issues.apache.org/jira/browse/HDFS-16423
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: balancer & mover
>Reporter: qinyuren
>Assignee: qinyuren
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.3
>
> Attachments: image-2022-01-13-17-18-32-409.png
>
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> We met a problem as described in HDFS-16420.
> We found that the balancer copied a block multiple times without deleting the 
> source block when the block was placed on a stale storage. This resulted in a 
> block with many copies, and these redundant copies are not deleted until the 
> storage is no longer stale.
>  
> !image-2022-01-13-17-18-32-409.png|width=657,height=275!
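The fix direction implied by the summary ("balancer should not get blocks on stale storages") can be sketched as a simple filter. This is a hedged illustration, not the actual Balancer or NameNode code; the `Storage` class and `freshStoragesOnly` method below are invented names:

```java
import java.util.ArrayList;
import java.util.List;

class StaleStorageFilter {
    /** Illustrative stand-in for a DataNode storage report (not an HDFS API). */
    static class Storage {
        final String id;
        // "stale" here means no full block report confirmed since a
        // restart/failover, so the NameNode's view of its blocks may be outdated.
        final boolean stale;
        Storage(String id, boolean stale) { this.id = id; this.stale = stale; }
    }

    /** Keep only storages whose block lists are trustworthy as move sources. */
    static List<Storage> freshStoragesOnly(List<Storage> storages) {
        List<Storage> fresh = new ArrayList<>();
        for (Storage s : storages) {
            if (!s.stale) {        // skip stale storages: moving their blocks
                fresh.add(s);      // can leave behind redundant copies
            }
        }
        return fresh;
    }
}
```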



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK11 on Linux/x86_64

2022-01-25 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/248/

[Jan 24, 2022 6:26:30 AM] (noreply) HDFS-16402. Improve HeartbeatManager logic 
to avoid incorrect stats. (#3839). Contributed by tomscut.
[Jan 24, 2022 6:34:26 AM] (noreply) HDFS-16430. Add validation to maximum 
blocks in EC group when adding an EC policy (#3899). Contributed by daimin.
[Jan 24, 2022 12:04:58 PM] (Akira Ajisaka) HADOOP-17593. hadoop-huaweicloud and 
hadoop-cloud-storage to remove log4j as transitive dependency
[Jan 24, 2022 1:37:33 PM] (noreply) HADOOP-18094. Disable S3A auditing by 
default.
[Jan 24, 2022 4:03:36 PM] (noreply) YARN-11015. Decouple queue capacity with 
ability to run OPPORTUNISTIC container (#3779)
[Jan 25, 2022 5:02:37 AM] (noreply) HDFS-16403. Improve FUSE IO performance by 
supporting FUSE parameter max_background (#3842)




-1 overall


The following subsystems voted -1:
blanks mvnsite pathlen spotbugs unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

spotbugs :

   module:hadoop-hdfs-project/hadoop-hdfs 
   Redundant nullcheck of oldLock, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory)
 Redundant null check at DataStorage.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory)
 Redundant null check at DataStorage.java:[line 695] 
   Redundant nullcheck of metaChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MappableBlockLoader.verifyChecksum(long,
 FileInputStream, FileChannel, String) Redundant null check at 
MappableBlockLoader.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MappableBlockLoader.verifyChecksum(long,
 FileInputStream, FileChannel, String) Redundant null check at 
MappableBlockLoader.java:[line 138] 
   Redundant nullcheck of blockChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MemoryMappableBlockLoader.load(long,
 FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null 
check at MemoryMappableBlockLoader.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MemoryMappableBlockLoader.load(long,
 FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null 
check at MemoryMappableBlockLoader.java:[line 75] 
   Redundant nullcheck of blockChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.load(long,
 FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null 
check at NativePmemMappableBlockLoader.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.load(long,
 FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null 
check at NativePmemMappableBlockLoader.java:[line 85] 
   Redundant nullcheck of metaChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.verifyChecksumAndMapBlock(NativeIO$POSIX$PmemMappedRegion,
 long, FileInputStream, FileChannel, String) Redundant null check at 
NativePmemMappableBlockLoader.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.verifyChecksumAndMapBlock(NativeIO$POSIX$PmemMappedRegion,
 long, FileInputStream, FileChannel, String) Redundant null check at 
NativePmemMappableBlockLoader.java:[line 130] 
   
org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager$UserCounts
 doesn't override java.util.ArrayList.equals(Object) At 
RollingWindowManager.java:At RollingWindowManager.java:[line 1] 


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2022-01-25 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/761/

[Jan 24, 2022 6:26:30 AM] (noreply) HDFS-16402. Improve HeartbeatManager logic 
to avoid incorrect stats. (#3839). Contributed by tomscut.
[Jan 24, 2022 6:34:26 AM] (noreply) HDFS-16430. Add validation to maximum 
blocks in EC group when adding an EC policy (#3899). Contributed by daimin.
[Jan 24, 2022 12:04:58 PM] (Akira Ajisaka) HADOOP-17593. hadoop-huaweicloud and 
hadoop-cloud-storage to remove log4j as transitive dependency
[Jan 24, 2022 1:37:33 PM] (noreply) HADOOP-18094. Disable S3A auditing by 
default.
[Jan 24, 2022 4:03:36 PM] (noreply) YARN-11015. Decouple queue capacity with 
ability to run OPPORTUNISTIC container (#3779)




-1 overall


The following subsystems voted -1:
blanks pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

Failed junit tests :

   hadoop.yarn.csi.client.TestCsiClient 
  

   cc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/761/artifact/out/results-compile-cc-root.txt
 [96K]

   javac:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/761/artifact/out/results-compile-javac-root.txt
 [340K]

   blanks:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/761/artifact/out/blanks-eol.txt
 [13M]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/761/artifact/out/blanks-tabs.txt
 [2.0M]

   checkstyle:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/761/artifact/out/results-checkstyle-root.txt
 [14M]

   pathlen:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/761/artifact/out/results-pathlen.txt
 [16K]

   pylint:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/761/artifact/out/results-pylint.txt
 [20K]

   shellcheck:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/761/artifact/out/results-shellcheck.txt
 [28K]

   xml:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/761/artifact/out/xml.txt
 [24K]

   javadoc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/761/artifact/out/results-javadoc-javadoc-root.txt
 [404K]

   unit:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/761/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-csi.txt
 [20K]

Powered by Apache Yetus 0.14.0-SNAPSHOT   https://yetus.apache.org


Re: [VOTE] Release Apache Hadoop 3.3.2 - RC2

2022-01-25 Thread Chao Sun
Thanks all!! I'll prepare RC3 now including HADOOP-18094 and will start a
new vote soon.

Best,
Chao

On Tue, Jan 25, 2022 at 2:23 AM Steve Loughran 
wrote:

> that error
> org.restlet.jee:org.restlet:pom:2.3.0 from/to maven-default-http-blocker (
> http://0.0.0.0/): Blocked mirror for repositories: [maven-restlet
>
> implies maven is not downloading http artifacts, and it had decided that
> the restlet artifacts were coming off an http repo, even though it's in maven
> central
>
> which means look at your global maven settings
>
>
>
> On Tue, 25 Jan 2022 at 07:27, Mukund Madhav Thakur
>  wrote:
>
> > Hi Chao,
> > I was using the command "mvn package -Pdist -DskipTests -Dtar
> > -Dmaven.javadoc.skip=true" on commit id *6da346a358c. *
> > It is working for me today. So maybe it was an intermittent issue in my
> > local last time when I was trying this. So we can ignore this. Thanks
> >
> >
> >
> > On Tue, Jan 25, 2022 at 6:21 AM Stack  wrote:
> >
> > > +1 (binding)
> > >
> > > * Signature: ok
> > > * Checksum : ok
> > > * Rat check (1.8.0_191): ok
> > >  - mvn clean apache-rat:check
> > > * Built from source (1.8.0_191): ok
> > >  - mvn clean install  -DskipTests
> > >
> > > Poking around in the binary, it looks good. Unpacked site. Looks right.
> > > Checked a few links work.
> > >
> > > Deployed over ten node cluster. Ran HBase ITBLL over it for a few hours
> > w/
> > > chaos. Worked like 3.3.1...
> > >
> > > I tried to build with 3.8.1 maven and got the below.
> > >
> > > [ERROR] Failed to execute goal on project
> > > hadoop-yarn-applications-catalog-webapp: Could not resolve dependencies
> > for
> > > project
> > > org.apache.hadoop:hadoop-yarn-applications-catalog-webapp:war:3.3.2:
> > Failed
> > > to collect dependencies at org.apache.solr:solr-core:jar:7.7.0 ->
> > > org.restlet.jee:org.restlet:jar:2.3.0: Failed to read artifact
> descriptor
> > > for org.restlet.
> > > jee:org.restlet:jar:2.3.0: Could not transfer artifact
> > > org.restlet.jee:org.restlet:pom:2.3.0 from/to
> maven-default-http-blocker
> > (
> > > http://0.0.0.0/): Blocked mirror for repositories: [maven-restlet (
> > > http://maven.restlet.org, default, releases+snapshots),
> > apache.snapshots (
> > > http://repository.apache.org/snapshots, default, disabled)] -> [Help
> 1]
> > >
> > > I used 3.6.3 mvn instead (looks like a simple fix).
> > >
> > > Thanks for packaging up this fat point release Chao Sun.
> > >
> > > S
> > >
> > > On Wed, Jan 19, 2022 at 9:50 AM Chao Sun  wrote:
> > >
> > > > Hi all,
> > > >
> > > > I've put together Hadoop 3.3.2 RC2 below:
> > > >
> > > > The RC is available at:
> > > > http://people.apache.org/~sunchao/hadoop-3.3.2-RC2/
> > > > The RC tag is at:
> > > > https://github.com/apache/hadoop/releases/tag/release-3.3.2-RC2
> > > > The Maven artifacts are staged at:
> > > >
> > https://repository.apache.org/content/repositories/orgapachehadoop-1332
> > > >
> > > > You can find my public key at:
> > > > https://downloads.apache.org/hadoop/common/KEYS
> > > >
> > > > I've done the following tests and they look good:
> > > > - Ran all the unit tests
> > > > - Started a single node HDFS cluster and tested a few simple commands
> > > > - Ran all the tests in Spark using the RC2 artifacts
> > > >
> > > > Please evaluate the RC and vote, thanks!
> > > >
> > > > Best,
> > > > Chao
> > > >
> > >
> >
>


[jira] [Created] (HDFS-16438) Avoid holding read locks for a long time when scanDatanodeStorage

2022-01-25 Thread tomscut (Jira)
tomscut created HDFS-16438:
--

 Summary: Avoid holding read locks for a long time when 
scanDatanodeStorage
 Key: HDFS-16438
 URL: https://issues.apache.org/jira/browse/HDFS-16438
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: tomscut
Assignee: tomscut
 Attachments: image-2022-01-25-23-18-30-275.png

During decommissioning, if {*}DatanodeAdminBackoffMonitor{*} is used, a heavy 
operation runs: {*}scanDatanodeStorage{*}. If the number of blocks on a storage 
is large (more than 5 million) and GC performance is also poor, it may hold the 
*read lock* for a long time; we should optimize this.

 

!image-2022-01-25-23-18-30-275.png|width=764,height=193!

 
{code:java}
2021-12-22 07:49:01,279 INFO  namenode.FSNamesystem 
(FSNamesystemLock.java:readUnlock(220)) - FSNamesystem scanDatanodeStorage read 
lock held for 5491 ms via
java.lang.Thread.getStackTrace(Thread.java:1552)
org.apache.hadoop.util.StringUtils.getStackTrace(StringUtils.java:1032)
org.apache.hadoop.hdfs.server.namenode.FSNamesystemLock.readUnlock(FSNamesystemLock.java:222)
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.readUnlock(FSNamesystem.java:1641)
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeAdminBackoffMonitor.scanDatanodeStorage(DatanodeAdminBackoffMonitor.java:646)
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeAdminBackoffMonitor.checkForCompletedNodes(DatanodeAdminBackoffMonitor.java:417)
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeAdminBackoffMonitor.check(DatanodeAdminBackoffMonitor.java:300)
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeAdminBackoffMonitor.run(DatanodeAdminBackoffMonitor.java:201)
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
    Number of suppressed read-lock reports: 0
    Longest read-lock held interval: 5491 {code}
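The log above shows the read lock held for 5491 ms across an entire storage scan. A common remedy for such long holds can be sketched as chunked traversal: release and reacquire the read lock every N blocks so writers can make progress in between. The class name and threshold below are illustrative assumptions, not the actual DatanodeAdminBackoffMonitor code:

```java
import java.util.List;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;
import java.util.function.Consumer;

class ChunkedScanner {
    private final ReadWriteLock fsLock = new ReentrantReadWriteLock();
    // Release the read lock after every chunk of this many blocks
    // (illustrative value; a real implementation would make it configurable).
    private static final int BLOCKS_PER_LOCK = 1000;

    <T> void scan(List<T> blocks, Consumer<T> visit) {
        int sinceAcquire = 0;
        fsLock.readLock().lock();
        try {
            for (T b : blocks) {
                visit.accept(b);
                if (++sinceAcquire >= BLOCKS_PER_LOCK) {
                    // Yield: let writers (e.g. block reports) in between chunks.
                    fsLock.readLock().unlock();
                    fsLock.readLock().lock();
                    sinceAcquire = 0;
                }
            }
        } finally {
            fsLock.readLock().unlock();
        }
    }
}
```

The trade-off is that the view of the storage may change between chunks, so the scan must tolerate blocks appearing or disappearing mid-traversal.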






[jira] [Resolved] (HDFS-16401) Remove the worthless DatasetVolumeChecker#numAsyncDatasetChecks

2022-01-25 Thread Hui Fei (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hui Fei resolved HDFS-16401.

Fix Version/s: 3.4.0
   Resolution: Fixed

> Remove the worthless DatasetVolumeChecker#numAsyncDatasetChecks
> ---
>
> Key: HDFS-16401
> URL: https://issues.apache.org/jira/browse/HDFS-16401
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 3.4.0
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> DataNode#checkDiskErrorAsync() was cleaned up as early as HDFS-11279, but it
> seems DatasetVolumeChecker#numAsyncDatasetChecks was neglected and not cleaned
> up along with it.






[jira] [Resolved] (HDFS-16262) Async refresh of cached locations in DFSInputStream

2022-01-25 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell resolved HDFS-16262.
--
Resolution: Fixed

I have committed this to trunk and branch-3.3. There are conflicts trying to 
take it to branch 3.2. If you want it on branch-3.2, please create another PR 
(we can re-use this Jira) against branch-3.2 so we get the CI checks to run.

> Async refresh of cached locations in DFSInputStream
> ---
>
> Key: HDFS-16262
> URL: https://issues.apache.org/jira/browse/HDFS-16262
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Bryan Beaudreault
>Assignee: Bryan Beaudreault
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.3
>
>  Time Spent: 5h
>  Remaining Estimate: 0h
>
> HDFS-15119 added the ability to invalidate cached block locations in 
> DFSInputStream. As written, the feature will affect all DFSInputStreams 
> regardless of whether they need it or not. The invalidation also only applies 
> on the next request, so the next request will pay the cost of calling 
> openInfo before reading the data.
> I'm working on a feature for HBase which enables efficient healing of 
> locality through Balancer-style low level block moves (HBASE-26250). I'd like 
> to utilize the idea started in HDFS-15119 in order to update DFSInputStreams 
> after blocks have been moved to local hosts.
> I was considering using the feature as is, but some of our clusters are quite 
> large and I'm concerned about the impact on the namenode:
>  * We have some clusters with over 350k StoreFiles, so that'd be 350k 
> DFSInputStreams. With such a large number and very active usage, having the 
> refresh be in-line makes it too hard to ensure we don't DDOS the NameNode.
>  * Currently we need to pay the price of openInfo the next time a 
> DFSInputStream is invoked. Moving that async would minimize the latency hit. 
> Also, some StoreFiles might be far less frequently accessed, so they may live 
> on for a long time before ever refreshing. We'd like to be able to know that 
> all DFSInputStreams are refreshed by a given time.
>  * We may have 350k files, but only a small percentage of them are ever 
> non-local at a given time. Refreshing only if necessary will save a lot of 
> work.
> In order to make this as painless to end users as possible, I'd like to:
>  * Update the implementation to utilize an async thread for managing 
> refreshes. This will give more control over rate limiting across all 
> DFSInputStreams in a DFSClient, and also ensure that all DFSInputStreams are 
> refreshed.
>  * Only refresh files which are lacking a local replica or have known 
> deadNodes to be cleaned up
>  
>  
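The proposed design (a shared async refresher that only touches streams flagged as needing it, rather than every stream refreshing inline) might be sketched roughly as below. All class and method names here are illustrative assumptions, not the actual DFSClient/DFSInputStream API:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class LocationRefresher {
    /** Illustrative stand-in for a DFSInputStream that can refresh itself. */
    interface RefreshableStream {
        boolean needsRefresh();   // e.g. lacks a local replica or has dead nodes
        void refreshLocations();  // re-fetch block locations from the NameNode
    }

    private final Set<RefreshableStream> streams = ConcurrentHashMap.newKeySet();
    private final ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor();

    void register(RefreshableStream s) { streams.add(s); }
    void unregister(RefreshableStream s) { streams.remove(s); }

    /** One refresh pass: only touch streams that actually need it. */
    int runOnce() {
        int refreshed = 0;
        for (RefreshableStream s : streams) {
            if (s.needsRefresh()) {   // skip local, healthy streams
                s.refreshLocations();
                refreshed++;
            }
        }
        return refreshed;
    }

    /** Periodic passes from one shared thread, bounding NameNode load. */
    void start(long periodSeconds) {
        scheduler.scheduleWithFixedDelay(this::runOnce,
            periodSeconds, periodSeconds, TimeUnit.SECONDS);
    }
}
```

A single scheduler per client both rate-limits the NameNode calls and guarantees every registered stream is visited within a bounded interval.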






Re: [VOTE] Release Apache Hadoop 3.3.2 - RC2

2022-01-25 Thread Steve Loughran
that error
org.restlet.jee:org.restlet:pom:2.3.0 from/to maven-default-http-blocker (
http://0.0.0.0/): Blocked mirror for repositories: [maven-restlet

implies maven is not downloading http artifacts, and it had decided that
the restlet artifacts were coming off an http repo, even though it's in maven
central

which means look at your global maven settings
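For reference, Maven 3.8.1's built-in blocker is itself a mirror definition (id `maven-default-http-blocker`, matching `external:http:*`), so one workaround is to override it in `~/.m2/settings.xml` with `blocked` set to `false`. The fragment below is illustrative; note it re-enables plain-HTTP downloads globally, so prefer pointing at an HTTPS repository where one exists:

```xml
<settings>
  <mirrors>
    <!-- Overrides Maven 3.8.1's built-in http-blocking mirror.
         Use with care: this allows insecure http:// downloads again. -->
    <mirror>
      <id>maven-default-http-blocker</id>
      <mirrorOf>external:http:*</mirrorOf>
      <name>Override of the default blocking mirror</name>
      <url>http://0.0.0.0/</url>
      <blocked>false</blocked>
    </mirror>
  </mirrors>
</settings>
```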



On Tue, 25 Jan 2022 at 07:27, Mukund Madhav Thakur
 wrote:

> Hi Chao,
> I was using the command "mvn package -Pdist -DskipTests -Dtar
> -Dmaven.javadoc.skip=true" on commit id *6da346a358c. *
> It is working for me today. So maybe it was an intermittent issue in my
> local last time when I was trying this. So we can ignore this. Thanks
>
>
>
> On Tue, Jan 25, 2022 at 6:21 AM Stack  wrote:
>
> > +1 (binding)
> >
> > * Signature: ok
> > * Checksum : ok
> > * Rat check (1.8.0_191): ok
> >  - mvn clean apache-rat:check
> > * Built from source (1.8.0_191): ok
> >  - mvn clean install  -DskipTests
> >
> > Poking around in the binary, it looks good. Unpacked site. Looks right.
> > Checked a few links work.
> >
> > Deployed over ten node cluster. Ran HBase ITBLL over it for a few hours
> w/
> > chaos. Worked like 3.3.1...
> >
> > I tried to build with 3.8.1 maven and got the below.
> >
> > [ERROR] Failed to execute goal on project
> > hadoop-yarn-applications-catalog-webapp: Could not resolve dependencies
> for
> > project
> > org.apache.hadoop:hadoop-yarn-applications-catalog-webapp:war:3.3.2:
> Failed
> > to collect dependencies at org.apache.solr:solr-core:jar:7.7.0 ->
> > org.restlet.jee:org.restlet:jar:2.3.0: Failed to read artifact descriptor
> > for org.restlet.
> > jee:org.restlet:jar:2.3.0: Could not transfer artifact
> > org.restlet.jee:org.restlet:pom:2.3.0 from/to maven-default-http-blocker
> (
> > http://0.0.0.0/): Blocked mirror for repositories: [maven-restlet (
> > http://maven.restlet.org, default, releases+snapshots),
> apache.snapshots (
> > http://repository.apache.org/snapshots, default, disabled)] -> [Help 1]
> >
> > I used 3.6.3 mvn instead (looks like a simple fix).
> >
> > Thanks for packaging up this fat point release Chao Sun.
> >
> > S
> >
> > On Wed, Jan 19, 2022 at 9:50 AM Chao Sun  wrote:
> >
> > > Hi all,
> > >
> > > I've put together Hadoop 3.3.2 RC2 below:
> > >
> > > The RC is available at:
> > > http://people.apache.org/~sunchao/hadoop-3.3.2-RC2/
> > > The RC tag is at:
> > > https://github.com/apache/hadoop/releases/tag/release-3.3.2-RC2
> > > The Maven artifacts are staged at:
> > >
> https://repository.apache.org/content/repositories/orgapachehadoop-1332
> > >
> > > You can find my public key at:
> > > https://downloads.apache.org/hadoop/common/KEYS
> > >
> > > I've done the following tests and they look good:
> > > - Ran all the unit tests
> > > - Started a single node HDFS cluster and tested a few simple commands
> > > - Ran all the tests in Spark using the RC2 artifacts
> > >
> > > Please evaluate the RC and vote, thanks!
> > >
> > > Best,
> > > Chao
> > >
> >
>


Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64

2022-01-25 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/553/

No changes




-1 overall


The following subsystems voted -1:
asflicense hadolint mvnsite pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.io.compress.snappy.TestSnappyCompressorDecompressor 
   hadoop.fs.TestFileUtil 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   
hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat 
   hadoop.hdfs.server.federation.router.TestRouterQuota 
   hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver 
   hadoop.hdfs.server.federation.resolver.order.TestLocalResolver 
   hadoop.yarn.server.resourcemanager.TestClientRMService 
   
hadoop.yarn.server.resourcemanager.monitor.invariants.TestMetricsInvariantChecker
 
   hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter 
   hadoop.mapreduce.lib.input.TestLineRecordReader 
   hadoop.mapred.TestLineRecordReader 
   hadoop.tools.TestDistCpSystem 
   hadoop.yarn.sls.TestSLSRunner 
   hadoop.resourceestimator.solver.impl.TestLpSolver 
   hadoop.resourceestimator.service.TestResourceEstimatorService 
  

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/553/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/553/artifact/out/diff-compile-javac-root.txt
  [476K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/553/artifact/out/diff-checkstyle-root.txt
  [14M]

   hadolint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/553/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   mvnsite:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/553/artifact/out/patch-mvnsite-root.txt
  [560K]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/553/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/553/artifact/out/diff-patch-pylint.txt
  [20K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/553/artifact/out/diff-patch-shellcheck.txt
  [72K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/553/artifact/out/whitespace-eol.txt
  [12M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/553/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/553/artifact/out/patch-javadoc-root.txt
  [40K]

   unit:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/553/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [224K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/553/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [424K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/553/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [12K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/553/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
  [36K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/553/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
  [20K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/553/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [112K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/553/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt
  [104K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/553/artifact/out/patch-unit-hadoop-tools_hadoop-distcp.txt
  [24K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/553/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt
  [20K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/553/artifact/out/patch-unit-hadoop-tools_hadoop-sls.txt
  [28K]
   
https://ci-hadoop.apache.org/job/