Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64

2024-02-23 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1312/

No changes




-1 overall


The following subsystems voted -1:
asflicense hadolint mvnsite pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.fs.TestFileUtil
   hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain
   hadoop.hdfs.TestLeaseRecovery2
   hadoop.hdfs.server.namenode.snapshot.TestSnapshotDeletion
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys
   hadoop.hdfs.server.namenode.snapshot.TestSnapshotBlocksMap
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
   hadoop.hdfs.server.federation.resolver.order.TestLocalResolver
   hadoop.hdfs.server.federation.router.TestRouterQuota
   hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat
   hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
   hadoop.mapreduce.v2.app.TestRuntimeEstimators
   hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter
   hadoop.mapred.TestLineRecordReader
   hadoop.mapreduce.lib.input.TestLineRecordReader
   hadoop.resourceestimator.solver.impl.TestLpSolver
   hadoop.resourceestimator.service.TestResourceEstimatorService
   hadoop.yarn.sls.TestSLSRunner
   hadoop.yarn.client.api.impl.TestAMRMProxy
   hadoop.yarn.server.nodemanager.amrmproxy.TestFederationInterceptor
   hadoop.yarn.server.nodemanager.containermanager.linux.resources.TestNumaResourceAllocator
   hadoop.yarn.server.nodemanager.containermanager.linux.resources.TestNumaResourceHandlerImpl
   hadoop.yarn.server.resourcemanager.monitor.invariants.TestMetricsInvariantChecker
   hadoop.yarn.server.resourcemanager.TestClientRMService
   hadoop.yarn.server.resourcemanager.recovery.TestFSRMStateStore

   cc:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1312/artifact/out/diff-compile-cc-root.txt  [4.0K]

   javac:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1312/artifact/out/diff-compile-javac-root.txt  [508K]

   checkstyle:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1312/artifact/out/diff-checkstyle-root.txt  [14M]

   hadolint:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1312/artifact/out/diff-patch-hadolint.txt  [4.0K]

   mvnsite:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1312/artifact/out/patch-mvnsite-root.txt  [592K]

   pathlen:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1312/artifact/out/pathlen.txt  [12K]

   pylint:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1312/artifact/out/diff-patch-pylint.txt  [20K]

   shellcheck:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1312/artifact/out/diff-patch-shellcheck.txt  [72K]

   whitespace:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1312/artifact/out/whitespace-eol.txt  [12M]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1312/artifact/out/whitespace-tabs.txt  [1.3M]

   javadoc:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1312/artifact/out/patch-javadoc-root.txt  [36K]

   unit:

      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1312/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt  [244K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1312/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt  [448K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1312/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt  [36K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1312/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt  [16K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1312/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt  [44K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1312/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt  [104K]
      https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/1312/artifact/out/patch-unit-hadoop-tools_hadoop-az

Apache Hadoop qbt Report: trunk+JDK11 on Linux/x86_64

2024-02-23 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/634/

[Feb 22, 2024, 5:09:46 PM] (github) HADOOP-19065. Update Protocol Buffers 
installation to 3.21.12 (#6526)
[Feb 22, 2024, 5:49:37 PM] (github) HADOOP-18910: [ABFS] Adding Support for MD5 
Hash based integrity verification of the request content during transport 
(#6069)


Re: [VOTE] Release Apache Hadoop 3.4.0 (RC2)

2024-02-23 Thread slfan1989
Thank you very much, Steve, for the detailed test report and issue description!

I appreciate the time you spent helping with validation. I am currently
trying to use hadoop-release-support to prepare hadoop-3.4.0-RC3.

After the hadoop-3.4.0 release is complete, I will document some of the
issues encountered in the "how to release" document, so that future members
can refer to it during the release process.

Once again, thank you to all members involved in the hadoop-3.4.0 release.

Let's hope for a smooth release process.

Best Regards,
Shilun Fan.

On Sat, Feb 24, 2024 at 2:29 AM Steve Loughran 
wrote:

> I have been testing this all week, and it is a -1 from me until some very
> minor changes go in.
>
>
>    1. build the arm64 binaries with the same jar artifacts as the x86 one
>    2. include ad8b6541117b HADOOP-18088. Replace log4j 1.x with reload4j.
>    3. include 80b4bb68159c HADOOP-19084. Prune hadoop-common transitive
>    dependencies
>
>
> For #1 we have automation there in my client-validator module, which I have
> moved to be a hadoop-managed project and tried to make more
> manageable
> https://github.com/apache/hadoop-release-support
>
> This contains an ant project to perform a lot of the documented build
> stages, including using SCP to copy down an x86 release tarball and make a
> signed copy of this containing (locally built) arm artifacts.
>
> Although that only works with my development environment (macbook m1 laptop
> and remote ec2 server), it should be straightforward to make it more
> flexible.
>
> It also includes and tests a maven project which imports many of the
> hadoop-* pom files and runs some tests with it; this caught some problems
> with exported slf4j and log4j2 artifacts getting into the classpath. That
> is: hadoop-common pulling in log4j 1.2 and 2.x bindings.
>
> HADOOP-19084 fixes this; the build file now includes a target to scan the
> dependencies and fail if "forbidden" artifacts are found. I have not been
> able to stop logback ending up on the transitive dependency list, but at least
> there is only one slf4j there.
>
> HADOOP-18088 (Replace log4j 1.x with reload4j) switches over to reload4j,
> while the move to log4j v2 is still a work in progress.
>
> I have tried some other packaging changes this week:
> - creating a lean distro without the AWS SDK
> - trying to get protobuf-2.5 out of yarn-api
> However, I think it is too late to apply patches this risky.
>
> I believe we should get the 3.4.0 release out for people to start playing
> with, while we rapidly iterate on a 3.4.1 release with
> - updated dependencies (where possible)
> - separate "lean" and "full" installations, where "full" includes all the
> cloud connectors and their dependencies and the default "lean" one doesn't.
> That will cut the default download size in half.
> - critical issues which people who use the 3.4.0 release raise with us.
>
> That is: a packaging and bugs release, with a minimal number of new
> features.
>
> I've created HADOOP-19087 to cover this.
> I'm willing to get my hands dirty here - Shilun Fan and Xiaoqiao He have put
> a lot of work into 3.4.0 and probably need other people to take up the work
> for the next release. Who else is willing to participate? (Yes Mukund, I have
> you in mind too.)
>
> One thing I would like to look at is: which hadoop-tools modules can we cut?
> Are rumen and hadoop-streaming being actively used, or can we consider them
> implicitly EOL and strip them out? Just think of the maintenance effort we
> would save.
>
> ---
>
> Incidentally, I have tested the arm stuff on my raspberry pi5 which is now
> running 64 bit linux. I believe it is the first time we have qualified a
> Hadoop release with the media player under someone's television.
>
> On Thu, 15 Feb 2024 at 20:41, Mukund Madhav Thakur 
> wrote:
>
> > Thanks, Shilun for putting this together.
> >
> > Tried the below things and everything worked for me.
> >
> > validated checksum and gpg signature.
> > compiled from source.
> > Ran AWS integration tests.
> > untarred the binaries and was able to access objects in S3 via hadoop fs
> > commands.
> > compiled gcs-connector successfully using the 3.4.0 version.
> >
> > qq: what is the difference between RC1 and RC2, apart from some extra
> > patches?
> >
> >
> >
> > On Thu, Feb 15, 2024 at 10:58 AM slfan1989  wrote:
> >
> >> Thank you for explaining this part!
> >>
> >> hadoop-3.4.0-RC2 used the validate-hadoop-client-artifacts tool to
> >> generate the ARM tar package, which should meet expectations.
> >>
> >> We also look forward to other members helping to verify.
> >>
> >> Best Regards,
> >> Shilun Fan.
> >>
> >> On Fri, Feb 16, 2024 at 12:22 AM Steve Loughran 
> >> wrote:
> >>
> >> >
> >> >
> >> > On Mon, 12 Feb 2024 at 15:32, slfan1989  wrote:
> >> >
> >> >>
> >> >>
> >> >> Note, because the arm64 binaries are built separately on a different
> >> >> platform and JVM, their jar files may not match those of th

Re: [VOTE] Release Apache Hadoop 3.4.0 (RC2)

2024-02-23 Thread Steve Loughran
I have been testing this all week, and it is a -1 from me until some very
minor changes go in.


   1. build the arm64 binaries with the same jar artifacts as the x86 one
   2. include ad8b6541117b HADOOP-18088. Replace log4j 1.x with reload4j.
   3. include 80b4bb68159c HADOOP-19084. Prune hadoop-common transitive
   dependencies


For #1 we have automation there in my client-validator module, which I have
moved to be a hadoop-managed project and tried to make more
manageable
https://github.com/apache/hadoop-release-support

This contains an ant project to perform a lot of the documented build
stages, including using SCP to copy down an x86 release tarball and make a
signed copy of this containing (locally built) arm artifacts.

Although that only works with my development environment (macbook m1 laptop
and remote ec2 server), it should be straightforward to make it more
flexible.
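
For anyone who wants to see the shape of that repack-and-sign step without
digging into the ant build, here is a minimal Python sketch under stated
assumptions: the build host name, directory layout and the use of scp,
sha512sum and gpg are placeholders of mine, not the project's actual layout.

#!/usr/bin/env python3
# Rough sketch of "copy down the x86 tarball, swap in the locally built
# arm native libraries, retar, checksum and sign". Host, paths and key
# handling are placeholders; the real flow is the ant target in
# https://github.com/apache/hadoop-release-support
import shutil
import subprocess
import tarfile
from pathlib import Path

VERSION = "3.4.0"
X86_TAR = f"hadoop-{VERSION}.tar.gz"
ARM_TAR = f"hadoop-{VERSION}-aarch64.tar.gz"
ARM_NATIVE = Path("arm-build") / f"hadoop-{VERSION}" / "lib" / "native"  # locally built
STAGING = Path("arm-staging")

def main() -> None:
    # 1. Pull the x86 release tarball down from the build host (placeholder host/path).
    subprocess.run(["scp", f"builder@build-host:releases/{X86_TAR}", X86_TAR], check=True)

    # 2. Unpack it into a staging directory.
    STAGING.mkdir(exist_ok=True)
    with tarfile.open(X86_TAR, "r:gz") as tar:
        tar.extractall(STAGING)

    # 3. Replace only the native libraries; the jar files stay identical
    #    to the x86 release, which is the point of item #1 above.
    dest = STAGING / f"hadoop-{VERSION}" / "lib" / "native"
    shutil.rmtree(dest, ignore_errors=True)
    shutil.copytree(ARM_NATIVE, dest)

    # 4. Retar under the arm-specific name.
    with tarfile.open(ARM_TAR, "w:gz") as tar:
        tar.add(STAGING / f"hadoop-{VERSION}", arcname=f"hadoop-{VERSION}")

    # 5. Detached ascii-armoured signature plus a sha512 checksum for the vote.
    subprocess.run(["gpg", "--armor", "--detach-sign", ARM_TAR], check=True)
    with open(f"{ARM_TAR}.sha512", "w") as out:
        subprocess.run(["sha512sum", ARM_TAR], check=True, stdout=out)

if __name__ == "__main__":
    main()

The real automation lives in the hadoop-release-support repository above and
covers more of the documented release stages than this sketch does.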

It also includes and tests a maven project which imports many of the
hadoop-* pom files and runs some tests with it; this caught some problems
with exported slf4j and log4j2 artifacts getting into the classpath. That
is: hadoop-common pulling in log4j 1.2 and 2.x bindings.

HADOOP-19084 fixes this; the build file now includes a target to scan the
dependencies and fail if "forbidden" artifacts are found. I have not been
able to stop logback ending up on the transitive dependency list, but at least
there is only one slf4j there.
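
To make the "forbidden artifacts" idea concrete, here is an illustrative
Python sketch; it is not the target in the build file, it simply parses
`mvn dependency:list` output, and the banned list below is an example I
picked (log4j 1.2 and stray slf4j/log4j2 bindings), not a definitive list.

#!/usr/bin/env python3
# Illustrative "forbidden transitive dependency" check: list the resolved
# dependencies of a project that imports the hadoop-* poms and fail if a
# banned group:artifact appears. Example only - the real check is a target
# in the hadoop-release-support build file.
import subprocess
import sys

FORBIDDEN = {
    "log4j:log4j",                                # log4j 1.2 - replaced by reload4j
    "org.slf4j:slf4j-log4j12",                    # second slf4j binding
    "org.apache.logging.log4j:log4j-slf4j-impl",  # log4j2 binding leaking through
}

def resolved_dependencies() -> list[str]:
    proc = subprocess.run(["mvn", "dependency:list"],
                          capture_output=True, text=True, check=True)
    deps = []
    for raw in proc.stdout.splitlines():
        line = raw.replace("[INFO]", "").strip()
        line = line.split(" -- ")[0].strip()  # newer maven appends "-- module ..."
        # dependency entries look like group:artifact:type:version:scope
        if line.count(":") >= 4 and " " not in line:
            deps.append(line)
    return deps

def main() -> int:
    bad = [d for d in resolved_dependencies()
           if ":".join(d.split(":")[:2]) in FORBIDDEN]
    if bad:
        print("Forbidden artifacts on the transitive dependency list:")
        for d in sorted(set(bad)):
            print(f"  {d}")
        return 1
    print("No forbidden artifacts found.")
    return 0

if __name__ == "__main__":
    sys.exit(main())

The same effect could be had with a maven-enforcer bannedDependencies rule;
either way the point is to fail the build as soon as log4j 1.2 or an extra
slf4j binding creeps back onto the classpath.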

HADOOP-18088 (Replace log4j 1.x with reload4j) switches over to reload4j,
while the move to log4j v2 is still a work in progress.

I have tried some other packaging changes this week:
- creating a lean distro without the AWS SDK
- trying to get protobuf-2.5 out of yarn-api
However, I think it is too late to apply patches this risky.

I believe we should get the 3.4.0 release out for people to start playing
with, while we rapidly iterate on a 3.4.1 release with
- updated dependencies (where possible)
- separate "lean" and "full" installations, where "full" includes all the
cloud connectors and their dependencies and the default "lean" one doesn't.
That will cut the default download size in half.
- critical issues which people who use the 3.4.0 release raise with us.

That is: a packaging and bugs release, with a minimal number of new
features.

I've created HADOOP-19087 to cover this.
I'm willing to get my hands dirty here - Shilun Fan and Xiaoqiao He have put
a lot of work into 3.4.0 and probably need other people to take up the work
for the next release. Who else is willing to participate? (Yes Mukund, I have
you in mind too.)

One thing I would like to look at is: which hadoop-tools modules can we cut?
Are rumen and hadoop-streaming being actively used, or can we consider them
implicitly EOL and strip them out? Just think of the maintenance effort we
would save.

---

Incidentally, I have tested the arm stuff on my raspberry pi5 which is now
running 64 bit linux. I believe it is the first time we have qualified a
Hadoop release with the media player under someone's television.

On Thu, 15 Feb 2024 at 20:41, Mukund Madhav Thakur 
wrote:

> Thanks, Shilun for putting this together.
>
> Tried the below things and everything worked for me.
>
> validated checksum and gpg signature.
> compiled from source.
> Ran AWS integration tests.
> untarred the binaries and was able to access objects in S3 via hadoop fs commands.
> compiled gcs-connector successfully using the 3.4.0 version.
>
> qq: what is the difference between RC1 and RC2, apart from some extra
> patches?
>
>
>
> On Thu, Feb 15, 2024 at 10:58 AM slfan1989  wrote:
>
>> Thank you for explaining this part!
>>
>> hadoop-3.4.0-RC2 used the validate-hadoop-client-artifacts tool to
>> generate the ARM tar package, which should meet expectations.
>>
>> We also look forward to other members helping to verify.
>>
>> Best Regards,
>> Shilun Fan.
>>
>> On Fri, Feb 16, 2024 at 12:22 AM Steve Loughran 
>> wrote:
>>
>> >
>> >
>> > On Mon, 12 Feb 2024 at 15:32, slfan1989  wrote:
>> >
>> >>
>> >>
>> >> Note, because the arm64 binaries are built separately on a different
>> >> platform and JVM, their jar files may not match those of the x86
>> >> release - and therefore the maven artifacts. I don't think this is
>> >> an issue (the ASF actually releases source tarballs, the binaries are
>> >> there for help only, though with the maven repo that's a bit blurred).
>> >>
>> >> The only way to be consistent would be to actually untar the x86.tar.gz,
>> >> overwrite its binaries with the arm stuff, then retar, sign and push it
>> >> out for the vote.
>> >
>> >
>> >
>> > that's exactly what the "arm.release" target in my client-validator
>> > does: it builds an arm tar with the x86 binaries but the arm native
>> > libs, and signs it.
>> >
>> >
>> >
>> >> Even automating that would be risky.
>> >>
>> >>
>> > automating is the *only* way to do it; apache ant has everything needed
>> > for this including the ability to run gpg.
>> >
>> > we did this on the relevan

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2024-02-23 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/1507/

[Feb 20, 2024, 6:58:49 PM] (github) Fixes HDFS-17181 by routing all CREATE 
requests to the BlockManager (#6108)
