Fwd: Fw: Re: [VOTE] Release Apache Hadoop 3.4.0 RC0

2024-01-11 Thread slfan1989
Thank you very much for your suggestions! I will continue to improve the
RC0 version.

Best Regards,
Shilun Fan.

Original

From:"Masatake Iwasaki"< iwasak...@oss.nttdata.com >;

Date:2024/1/11 13:45

To:"common-dev"< common-dev@hadoop.apache.org >;"hdfs-dev"<
hdfs-...@hadoop.apache.org >;"yarn-dev"< yarn-...@hadoop.apache.org >;
"mapreduce-dev"< mapreduce-...@hadoop.apache.org >;

CC:"private"< priv...@hadoop.apache.org >;

Subject:Re: [VOTE] Release Apache Hadoop 3.4.0 RC0

Thanks for driving this release, Shilun Fan.

The top page of site documentation (in hadoop-3.4.0-RC0-site.tar.gz)
looks the same as 3.3.5.

While index.md.vm has been updated in branch-3.4.0 [1], the change does not
seem to be reflected.
The release-3.4.0-RC0 tag should be pushed to make checking easier.

In addition, the description of the previous release's new features should be
removed from index.md.vm.

[1]
https://github.com/apache/hadoop/blob/branch-3.4.0/hadoop-project/src/site/markdown/index.md.vm

Masatake Iwasaki

On 2024/01/11 14:15, slfan1989 wrote:
> Hello all,
>
> We plan to release Hadoop 3.4.0 based on Hadoop trunk; this is the first
> RC for Hadoop 3.4.0.
>
> The RC is available at:
> https://home.apache.org/~slfan1989/hadoop-3.4.0-RC0-amd64/ (for amd64)
> https://home.apache.org/~slfan1989/hadoop-3.4.0-RC0-arm64/ (for arm64)
>
> Maven artifacts are built on an x86 machine and are staged at
> https://repository.apache.org/content/repositories/orgapachehadoop-1391/
>
> My public key:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> Changelog:
> https://home.apache.org/~slfan1989/hadoop-3.4.0-RC0-amd64/CHANGELOG.md
>
> Release notes:
> https://home.apache.org/~slfan1989/hadoop-3.4.0-RC0-amd64/RELEASENOTES.md
>
> This is a relatively big release (by Hadoop standards), containing about
> 2852 commits.
>
> Please give it a try; this RC vote will run for 7 days.
>
> Feature highlights:
>
> DataNode FsDatasetImpl Fine-Grained Locking via BlockPool
> 
> [HDFS-15180](https://issues.apache.org/jira/browse/HDFS-15180) Split the
> FsDatasetImpl datasetLock by block pool to relieve contention on the heavy
> FsDatasetImpl datasetLock when there are many namespaces in a large cluster.
>
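For readers unfamiliar with the locking change, here is a minimal sketch of the per-block-pool locking idea in plain Java. It is not the actual FsDatasetImpl code; the class and method names (BlockPoolLockManager, withReadLock, withWriteLock) are hypothetical.

```java
// Illustrative only: one lock per block pool, so operations on different
// block pools no longer contend on a single dataset-wide lock.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;
import java.util.function.Supplier;

public class BlockPoolLockManager {
  // One read/write lock per block pool id instead of a global dataset lock.
  private final Map<String, ReadWriteLock> locks = new ConcurrentHashMap<>();

  private ReadWriteLock lockFor(String bpid) {
    return locks.computeIfAbsent(bpid, id -> new ReentrantReadWriteLock());
  }

  /** Run a read-only operation under the lock of a single block pool. */
  public <T> T withReadLock(String bpid, Supplier<T> op) {
    ReadWriteLock lock = lockFor(bpid);
    lock.readLock().lock();
    try {
      return op.get();
    } finally {
      lock.readLock().unlock();
    }
  }

  /** Run a mutating operation under the write lock of a single block pool. */
  public <T> T withWriteLock(String bpid, Supplier<T> op) {
    ReadWriteLock lock = lockFor(bpid);
    lock.writeLock().lock();
    try {
      return op.get();
    } finally {
      lock.writeLock().unlock();
    }
  }
}
```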
> YARN Federation improvements
> 
> [YARN-5597](https://issues.apache.org/jira/browse/YARN-5597) brings many
> improvements, including the following:
>
> 1. YARN Router now provides a full implementation of all relevant interfaces,
> including ApplicationClientProtocol, ResourceManagerAdministrationProtocol,
> and RMWebServiceProtocol.
> 2. The YARN Router now supports application cleanup and automatic offlining
> of SubClusters.
> 3. The Router and AMRMProxy code was optimized, and previously pending
> functionality was completed.
> 4. Router audit logs and metrics were improved.
> 5. Cluster security was strengthened with the addition of Kerberos support.
> 6. The Router web pages were enhanced.
>
> Upgrade AWS SDK to V2
> 
> [HADOOP-18073](https://issues.apache.org/jira/browse/HADOOP-18073)
> The S3A connector now uses the V2 AWS SDK. This is a significant change at
> the source code level.
> Any applications using the internal extension/override points in the
> filesystem connector are likely to break.
> Consult the document aws_sdk_upgrade for the full details.
>
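As a rough illustration of the kind of change HADOOP-18073 implies for downstream code, the sketch below shows a credential provider written against the V2 SDK interface. The class name and environment variables are hypothetical, and the exact binding the S3A connector expects (constructor signatures, configuration keys) should be taken from the aws_sdk_upgrade document rather than from this sketch.

```java
// Illustrative only: a custom credential provider against the V2 SDK interface.
// V1 providers implemented com.amazonaws.auth.AWSCredentialsProvider#getCredentials();
// V2 providers implement resolveCredentials() and return the V2 credential types.
import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.AwsCredentials;
import software.amazon.awssdk.auth.credentials.AwsCredentialsProvider;

public class MyV2CredentialsProvider implements AwsCredentialsProvider {
  @Override
  public AwsCredentials resolveCredentials() {
    // Hypothetical source of the keys; real providers would pull from
    // whatever store they are designed for.
    String accessKey = System.getenv("MY_ACCESS_KEY");
    String secretKey = System.getenv("MY_SECRET_KEY");
    return AwsBasicCredentials.create(accessKey, secretKey);
  }
}
```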
> hadoop-thirdparty will also provide the new RC0 soon.
>
> Best Regards,
> Shilun Fan.
>

-
To unsubscribe, e-mail: private-unsubscr...@hadoop.apache.org
For additional commands, e-mail: private-h...@hadoop.apache.org


Re: [VOTE] Release Apache Hadoop 3.4.0 RC0

2024-01-11 Thread slfan1989
Thank you very much for your help in verifying this version! We will use
version 3.5.0 as the JIRA fix version going forward.

Best Regards,
Shilun Fan.

> wonderful! I'll be testing over the weekend
>
> Meanwhile, new changes I'm putting in to trunk are tagged as fixed in 3.5.0
> - correct?
>
> steve


> On Thu, 11 Jan 2024 at 05:15, slfan1989 wrote:

> Hello all,
>
> We plan to release Hadoop 3.4.0 based on Hadoop trunk; this is the first
> RC for Hadoop 3.4.0.
>
> The RC is available at:
> https://home.apache.org/~slfan1989/hadoop-3.4.0-RC0-amd64/ (for amd64)
> https://home.apache.org/~slfan1989/hadoop-3.4.0-RC0-arm64/ (for arm64)
>
> Maven artifacts are built on an x86 machine and are staged at
> https://repository.apache.org/content/repositories/orgapachehadoop-1391/
>
> My public key:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> Changelog:
> https://home.apache.org/~slfan1989/hadoop-3.4.0-RC0-amd64/CHANGELOG.md
>
> Release notes:
> https://home.apache.org/~slfan1989/hadoop-3.4.0-RC0-amd64/RELEASENOTES.md
>
> This is a relatively big release (by Hadoop standards), containing about
> 2852 commits.
>
> Please give it a try; this RC vote will run for 7 days.
>
> Feature highlights:
>
> DataNode FsDatasetImpl Fine-Grained Locking via BlockPool
> 
> [HDFS-15180](https://issues.apache.org/jira/browse/HDFS-15180) Split the
> FsDatasetImpl datasetLock by block pool to relieve contention on the heavy
> FsDatasetImpl datasetLock when there are many namespaces in a large cluster.
>
> YARN Federation improvements
> 
> [YARN-5597](https://issues.apache.org/jira/browse/YARN-5597) brings many
> improvements, including the following:
>
> 1. YARN Router now provides a full implementation of all relevant interfaces,
> including ApplicationClientProtocol, ResourceManagerAdministrationProtocol,
> and RMWebServiceProtocol.
> 2. The YARN Router now supports application cleanup and automatic offlining
> of SubClusters.
> 3. The Router and AMRMProxy code was optimized, and previously pending
> functionality was completed.
> 4. Router audit logs and metrics were improved.
> 5. Cluster security was strengthened with the addition of Kerberos support.
> 6. The Router web pages were enhanced.
>
> Upgrade AWS SDK to V2
> 
> [HADOOP-18073](https://issues.apache.org/jira/browse/HADOOP-18073)
> The S3A connector now uses the V2 AWS SDK. This is a significant change at
> the source code level.
> Any applications using the internal extension/override points in the
> filesystem connector are likely to break.
> Consult the document aws_sdk_upgrade for the full details.
>
> hadoop-thirdparty will also provide the new RC0 soon.
>
> Best Regards,
> Shilun Fan.
>


Apache Hadoop qbt Report: trunk+JDK11 on Linux/x86_64

2024-01-11 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/612/

No changes




-1 overall


The following subsystems voted -1:
blanks hadolint mvnsite pathlen spotbugs unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

spotbugs :

   module:hadoop-hdfs-project/hadoop-hdfs 
   Redundant nullcheck of oldLock, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory)
 Redundant null check at DataStorage.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory)
 Redundant null check at DataStorage.java:[line 695] 
   Redundant nullcheck of metaChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MappableBlockLoader.verifyChecksum(long,
 FileInputStream, FileChannel, String) Redundant null check at 
MappableBlockLoader.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MappableBlockLoader.verifyChecksum(long,
 FileInputStream, FileChannel, String) Redundant null check at 
MappableBlockLoader.java:[line 138] 
   Redundant nullcheck of blockChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MemoryMappableBlockLoader.load(long,
 FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null 
check at MemoryMappableBlockLoader.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MemoryMappableBlockLoader.load(long,
 FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null 
check at MemoryMappableBlockLoader.java:[line 75] 
   Redundant nullcheck of blockChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.load(long,
 FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null 
check at NativePmemMappableBlockLoader.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.load(long,
 FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null 
check at NativePmemMappableBlockLoader.java:[line 85] 
   Redundant nullcheck of metaChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.verifyChecksumAndMapBlock(NativeIO$POSIX$PmemMappedRegion,
 long, FileInputStream, FileChannel, String) Redundant null check at 
NativePmemMappableBlockLoader.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.verifyChecksumAndMapBlock(NativeIO$POSIX$PmemMappedRegion,
 long, FileInputStream, FileChannel, String) Redundant null check at 
NativePmemMappableBlockLoader.java:[line 130] 
   
org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager$UserCounts
 doesn't override java.util.ArrayList.equals(Object) At 
RollingWindowManager.java:At RollingWindowManager.java:[line 1] 

spotbugs :

   module:hadoop-yarn-project/hadoop-yarn 
   
org.apache.hadoop.yarn.client.api.impl.TimelineConnector.DEFAULT_SOCKET_TIMEOUT 
isn't final but should be At TimelineConnector.java:be At 
TimelineConnector.java:[line 82] 
   Redundant nullcheck of it, which is known to be non-null in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker,
 NMStateStoreService$LocalResourceTrackerState) Redundant null check at 
ResourceLocalizationService.java:is known to be non-null in 

Apache Hadoop qbt Report: branch-3.3+JDK8 on Linux/x86_64

2024-01-11 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.3-java8-linux-x86_64/143/

[Jan 5, 2024, 4:50:35 PM] (Takanobu Asanuma) HDFS-17315. Optimize the namenode 
format code logic. (#6400)




-1 overall


The following subsystems voted -1:
blanks pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

Failed junit tests :

   hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.server.balancer.TestBalancer 
   hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes 
   hadoop.yarn.server.nodemanager.amrmproxy.TestFederationInterceptor 
  

   cc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.3-java8-linux-x86_64/143/artifact/out/results-compile-cc-root.txt
 [48K]

   javac:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.3-java8-linux-x86_64/143/artifact/out/results-compile-javac-root.txt
 [364K]

   blanks:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.3-java8-linux-x86_64/143/artifact/out/blanks-eol.txt
 [15M]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.3-java8-linux-x86_64/143/artifact/out/blanks-tabs.txt
 [2.0M]

   checkstyle:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.3-java8-linux-x86_64/143/artifact/out/results-checkstyle-root.txt
 [14M]

   pathlen:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.3-java8-linux-x86_64/143/artifact/out/results-pathlen.txt
 [16K]

   pylint:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.3-java8-linux-x86_64/143/artifact/out/results-pylint.txt
 [20K]

   shellcheck:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.3-java8-linux-x86_64/143/artifact/out/results-shellcheck.txt
 [20K]

   xml:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.3-java8-linux-x86_64/143/artifact/out/xml.txt
 [28K]

   javadoc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.3-java8-linux-x86_64/143/artifact/out/results-javadoc-javadoc-root.txt
 [972K]

   unit:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.3-java8-linux-x86_64/143/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 [748K]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-3.3-java8-linux-x86_64/143/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
 [96K]

Powered by Apache Yetus 0.14.0-SNAPSHOT   https://yetus.apache.org

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org

[jira] [Created] (HADOOP-19035) CrcUtil/CrcComposer should not throw IOException for non-IO

2024-01-11 Thread Tsz-wo Sze (Jira)
Tsz-wo Sze created HADOOP-19035:
---

 Summary: CrcUtil/CrcComposer should not throw IOException for 
non-IO
 Key: HADOOP-19035
 URL: https://issues.apache.org/jira/browse/HADOOP-19035
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Reporter: Tsz-wo Sze
Assignee: Tsz-wo Sze


CrcUtil and CrcComposer should throw specific exceptions for non-IO cases:
- IllegalArgumentException: invalid arguments
- ArrayIndexOutOfBoundsException: index exceeds array size
- IllegalStateException: unexpected computation state
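A minimal sketch of the intended direction, assuming hypothetical helper names (the real CrcUtil/CrcComposer signatures may differ): argument and state validation raises the specific unchecked exceptions listed above instead of wrapping them in IOException.

```java
// Illustrative only: validate arguments with specific unchecked exceptions
// rather than reporting non-I/O problems as IOException. Names are hypothetical.
public final class CrcValidation {
  private CrcValidation() {}

  static void checkReadArgs(byte[] buffer, int offset, int length, int bytesPerCrc) {
    if (bytesPerCrc <= 0) {
      // invalid argument, not an I/O failure
      throw new IllegalArgumentException("bytesPerCrc must be > 0 but was " + bytesPerCrc);
    }
    if (offset < 0 || length < 0 || offset + length > buffer.length) {
      // index problem, not an I/O failure
      throw new ArrayIndexOutOfBoundsException(
          "range [" + offset + ", " + (offset + length) + ") exceeds buffer size " + buffer.length);
    }
  }

  static void checkState(boolean initialized) {
    if (!initialized) {
      // unexpected computation state, not an I/O failure
      throw new IllegalStateException("composer used before initialization");
    }
  }
}
```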



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2024-01-11 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1468/

No changes




-1 overall


The following subsystems voted -1:
blanks hadolint pathlen spotbugs xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/resources/xml/external-dtd.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

spotbugs :

   module:hadoop-yarn-project/hadoop-yarn 
   
org.apache.hadoop.yarn.client.api.impl.TimelineConnector.DEFAULT_SOCKET_TIMEOUT 
isn't final but should be At TimelineConnector.java:be At 
TimelineConnector.java:[line 82] 

spotbugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common 
   
org.apache.hadoop.yarn.client.api.impl.TimelineConnector.DEFAULT_SOCKET_TIMEOUT 
isn't final but should be At TimelineConnector.java:be At 
TimelineConnector.java:[line 82] 

spotbugs :

   module:hadoop-yarn-project 
   
org.apache.hadoop.yarn.client.api.impl.TimelineConnector.DEFAULT_SOCKET_TIMEOUT 
isn't final but should be At TimelineConnector.java:be At 
TimelineConnector.java:[line 82] 

spotbugs :

   module:root 
   
org.apache.hadoop.yarn.client.api.impl.TimelineConnector.DEFAULT_SOCKET_TIMEOUT 
isn't final but should be At TimelineConnector.java:be At 
TimelineConnector.java:[line 82] 
  

   cc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1468/artifact/out/results-compile-cc-root.txt
 [96K]

   javac:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1468/artifact/out/results-compile-javac-root.txt
 [12K]

   blanks:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1468/artifact/out/blanks-eol.txt
 [15M]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1468/artifact/out/blanks-tabs.txt
 [2.0M]

   checkstyle:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1468/artifact/out/results-checkstyle-root.txt
 [13M]

   hadolint:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1468/artifact/out/results-hadolint.txt
 [24K]

   pathlen:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1468/artifact/out/results-pathlen.txt
 [16K]

   pylint:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1468/artifact/out/results-pylint.txt
 [20K]

   shellcheck:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1468/artifact/out/results-shellcheck.txt
 [24K]

   xml:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1468/artifact/out/xml.txt
 [24K]

   javadoc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1468/artifact/out/results-javadoc-javadoc-root.txt
 [244K]

   spotbugs:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1468/artifact/out/branch-spotbugs-hadoop-yarn-project_hadoop-yarn-warnings.html
 [12K]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1468/artifact/out/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common-warnings.html
 [8.0K]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1468/artifact/out/branch-spotbugs-hadoop-yarn-project-warnings.html
 [12K]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1468/artifact/out/branch-spotbugs-root-warnings.html
 [20K]

Powered by Apache Yetus 0.14.0-SNAPSHOT   https://yetus.apache.org

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org

Re: [VOTE] Release Apache Hadoop 3.4.0 RC0

2024-01-11 Thread Steve Loughran
wonderful! I'll be testing over the weekend

Meanwhile, new changes I'm putting in to trunk are tagged as fixed in 3.5.0
- correct?

steve


On Thu, 11 Jan 2024 at 05:15, slfan1989  wrote:

> Hello all,
>
> We plan to release Hadoop 3.4.0 based on Hadoop trunk; this is the first
> RC for Hadoop 3.4.0.
>
> The RC is available at:
> https://home.apache.org/~slfan1989/hadoop-3.4.0-RC0-amd64/ (for amd64)
> https://home.apache.org/~slfan1989/hadoop-3.4.0-RC0-arm64/ (for arm64)
>
> Maven artifacts are built on an x86 machine and are staged at
> https://repository.apache.org/content/repositories/orgapachehadoop-1391/
>
> My public key:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> Changelog:
> https://home.apache.org/~slfan1989/hadoop-3.4.0-RC0-amd64/CHANGELOG.md
>
> Release notes:
> https://home.apache.org/~slfan1989/hadoop-3.4.0-RC0-amd64/RELEASENOTES.md
>
> This is a relatively big release (by Hadoop standards), containing about 2852
> commits.
>
> Please give it a try; this RC vote will run for 7 days.
>
> Feature highlights:
>
> DataNode FsDatasetImpl Fine-Grained Locking via BlockPool
> 
> [HDFS-15180](https://issues.apache.org/jira/browse/HDFS-15180) Split the
> FsDatasetImpl datasetLock by block pool to relieve contention on the heavy
> FsDatasetImpl datasetLock when there are many namespaces in a large cluster.
>
> YARN Federation improvements
> 
> [YARN-5597](https://issues.apache.org/jira/browse/YARN-5597) brings many
> improvements, including the following:
>
> 1. YARN Router now provides a full implementation of all relevant interfaces,
> including ApplicationClientProtocol, ResourceManagerAdministrationProtocol,
> and RMWebServiceProtocol.
> 2. The YARN Router now supports application cleanup and automatic offlining
> of SubClusters.
> 3. The Router and AMRMProxy code was optimized, and previously pending
> functionality was completed.
> 4. Router audit logs and metrics were improved.
> 5. Cluster security was strengthened with the addition of Kerberos support.
> 6. The Router web pages were enhanced.
>
> Upgrade AWS SDK to V2
> 
> [HADOOP-18073](https://issues.apache.org/jira/browse/HADOOP-18073)
> The S3A connector now uses the V2 AWS SDK.  This is a significant change at
> the source code level.
> Any applications using the internal extension/override points in the
> filesystem connector are likely to break.
> Consult the document aws_sdk_upgrade for the full details.
>
> hadoop-thirdparty will also provide the new RC0 soon.
>
> Best Regards,
> Shilun Fan.
>


[jira] [Resolved] (HADOOP-19004) S3A: Support Authentication through HttpSigner API

2024-01-11 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-19004.
-
Fix Version/s: 3.4.0
   Resolution: Fixed

> S3A: Support Authentication through HttpSigner API 
> ---
>
> Key: HADOOP-19004
> URL: https://issues.apache.org/jira/browse/HADOOP-19004
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Harshit Gupta
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> The latest AWS SDK changes how signing works, and generating S3Express
> signatures requires the new {{software.amazon.awssdk.http.auth}} auth
> mechanism.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18981) Move oncrpc/portmap from hadoop-nfs to hadoop-common

2024-01-11 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18981.
-
Fix Version/s: 3.4.0
   Resolution: Fixed

> Move oncrpc/portmap from hadoop-nfs to hadoop-common
> 
>
> Key: HADOOP-18981
> URL: https://issues.apache.org/jira/browse/HADOOP-18981
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: Xing Lin
>Assignee: Xing Lin
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> We want to use the UDP server/client for use cases beyond NFS. One such use
> case is to export NameNodeHAState for NameNodes via a UDP server.
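As a rough sketch of the kind of UDP export described here (this is not the hadoop-nfs or hadoop-common code; the port number and the "ACTIVE" payload are hypothetical), a minimal responder could look like this:

```java
// Illustrative only: a tiny UDP responder that answers each incoming query
// with a state string, e.g. a NameNode's HA state.
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.nio.charset.StandardCharsets;

public class HAStateUdpServer {
  public static void main(String[] args) throws Exception {
    try (DatagramSocket socket = new DatagramSocket(50505)) { // hypothetical port
      byte[] buf = new byte[512];
      while (true) {
        DatagramPacket request = new DatagramPacket(buf, buf.length);
        socket.receive(request);                                    // wait for a query
        byte[] reply = "ACTIVE".getBytes(StandardCharsets.UTF_8);   // hypothetical state payload
        socket.send(new DatagramPacket(reply, reply.length,
            request.getAddress(), request.getPort()));              // answer the caller
      }
    }
  }
}
```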



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-19034) Fix Download Maven Url Not Found

2024-01-11 Thread Shilun Fan (Jira)
Shilun Fan created HADOOP-19034:
---

 Summary: Fix Download Maven Url Not Found
 Key: HADOOP-19034
 URL: https://issues.apache.org/jira/browse/HADOOP-19034
 Project: Hadoop Common
  Issue Type: Improvement
  Components: common
Affects Versions: 3.4.0
Reporter: Shilun Fan
Assignee: Shilun Fan


While preparing the Hadoop 3.4.0 release, I found that the link to download
Maven returns a Not Found error when opened. We need to fix this issue.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64

2024-01-11 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1268/

No changes




-1 overall


The following subsystems voted -1:
asflicense hadolint mvnsite pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.fs.TestFileUtil 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.TestLeaseRecovery2 
   
hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain 
   hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA 
   hadoop.hdfs.server.namenode.snapshot.TestSnapshotDeletion 
   hadoop.hdfs.TestFileLengthOnClusterRestart 
   hadoop.hdfs.server.namenode.ha.TestPipelinesFailover 
   hadoop.hdfs.server.namenode.snapshot.TestSnapshotBlocksMap 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.fs.viewfs.TestViewFileSystemHdfs 
   hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes 
   hadoop.hdfs.server.federation.router.TestRouterQuota 
   hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat 
   hadoop.hdfs.server.federation.resolver.order.TestLocalResolver 
   hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.mapreduce.lib.input.TestLineRecordReader 
   hadoop.mapred.TestLineRecordReader 
   hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter 
   hadoop.resourceestimator.service.TestResourceEstimatorService 
   hadoop.resourceestimator.solver.impl.TestLpSolver 
   hadoop.yarn.sls.TestSLSRunner 
   
hadoop.yarn.server.nodemanager.containermanager.linux.resources.TestNumaResourceAllocator
 
   
hadoop.yarn.server.nodemanager.containermanager.linux.resources.TestNumaResourceHandlerImpl
 
   hadoop.yarn.server.resourcemanager.TestClientRMService 
   
hadoop.yarn.server.resourcemanager.monitor.invariants.TestMetricsInvariantChecker
 
  

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1268/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1268/artifact/out/diff-compile-javac-root.txt
  [488K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1268/artifact/out/diff-checkstyle-root.txt
  [14M]

   hadolint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1268/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   mvnsite:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1268/artifact/out/patch-mvnsite-root.txt
  [572K]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1268/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1268/artifact/out/diff-patch-pylint.txt
  [20K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1268/artifact/out/diff-patch-shellcheck.txt
  [72K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1268/artifact/out/whitespace-eol.txt
  [12M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1268/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1268/artifact/out/patch-javadoc-root.txt
  [36K]

   unit:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1268/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [220K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1268/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [1.8M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1268/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
  [36K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1268/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [16K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1268/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt
  [104K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1268/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt
  [20K]