Apache Hadoop qbt Report: trunk+JDK11 on Linux/x86_64

2022-05-09 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/300/

[May 7, 2022 10:38:32 PM] (noreply) HADOOP-16515. Update the link to 
compatibility guide (#4226)
[May 7, 2022 11:05:34 PM] (noreply) HDFS-16185. Fix comment in 
LowRedundancyBlocks.java (#4194)
[May 7, 2022 11:09:24 PM] (noreply) HADOOP-17479. Fix the examples of hadoop 
config prefix (#4197)
[May 8, 2022 6:17:13 PM] (noreply) HDFS-16572. Fix typo in readme of 
hadoop-project-dist




-1 overall


The following subsystems voted -1:
blanks pathlen spotbugs unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

spotbugs :

   module:hadoop-hdfs-project/hadoop-hdfs 
   Redundant nullcheck of oldLock, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory)
 Redundant null check at DataStorage.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory)
 Redundant null check at DataStorage.java:[line 695] 
   Redundant nullcheck of metaChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MappableBlockLoader.verifyChecksum(long,
 FileInputStream, FileChannel, String) Redundant null check at 
MappableBlockLoader.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MappableBlockLoader.verifyChecksum(long,
 FileInputStream, FileChannel, String) Redundant null check at 
MappableBlockLoader.java:[line 138] 
   Redundant nullcheck of blockChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MemoryMappableBlockLoader.load(long,
 FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null 
check at MemoryMappableBlockLoader.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.MemoryMappableBlockLoader.load(long,
 FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null 
check at MemoryMappableBlockLoader.java:[line 75] 
   Redundant nullcheck of blockChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.load(long,
 FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null 
check at NativePmemMappableBlockLoader.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.load(long,
 FileInputStream, FileInputStream, String, ExtendedBlockId) Redundant null 
check at NativePmemMappableBlockLoader.java:[line 85] 
   Redundant nullcheck of metaChannel, which is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.verifyChecksumAndMapBlock(NativeIO$POSIX$PmemMappedRegion,
 long, FileInputStream, FileChannel, String) Redundant null check at 
NativePmemMappableBlockLoader.java:is known to be non-null in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.NativePmemMappableBlockLoader.verifyChecksumAndMapBlock(NativeIO$POSIX$PmemMappedRegion,
 long, FileInputStream, FileChannel, String) Redundant null check at 
NativePmemMappableBlockLoader.java:[line 130] 
   
org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager$UserCounts
 doesn't override java.util.ArrayList.equals(Object) At 
RollingWindowManager.java:At RollingWindowManager.java:[line 1] 
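The "Redundant nullcheck" findings above all share one shape: a null guard on a reference that the surrounding code (for example, a try-with-resources block) already proves non-null. A minimal, hypothetical sketch of the pattern SpotBugs flags — illustrative file handling only, not the actual DataStorage/MappableBlockLoader code:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.channels.FileChannel;

public class RedundantNullcheckDemo {
    // Inside try-with-resources, ch is provably non-null (getChannel()
    // never returns null), so SpotBugs reports the guard as redundant.
    static long sizeOf(File f) throws IOException {
        try (FileInputStream in = new FileInputStream(f);
             FileChannel ch = in.getChannel()) {
            if (ch != null) {  // redundant null check: ch cannot be null here
                return ch.size();
            }
            return -1;
        }
    }

    public static void main(String[] args) throws IOException {
        File tmp = File.createTempFile("rcn-demo", ".bin");
        tmp.deleteOnExit();
        try (FileOutputStream out = new FileOutputStream(tmp)) {
            out.write(new byte[16]);
        }
        System.out.println(sizeOf(tmp)); // prints 16
    }
}
```

The fix in each flagged method is simply to drop the dead guard (or restructure so the check happens before the reference is known non-null).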

spotbugs :

   module:hadoop-yarn-project/hadoop-yarn 
   Redundant nullcheck of it, which is known to be non-null in 
org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker,
 NMStateStoreService$LocalResourceTrackerState) Redundant null check at 
ResourceLocalizationService.java:is known to be non-null in 

Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64

2022-05-09 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/657/

No changes




-1 overall


The following subsystems voted -1:
asflicense hadolint mvnsite pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.fs.TestFileUtil 
   
hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain 
   hadoop.hdfs.TestAclsEndToEnd 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.TestRollingUpgrade 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver 
   hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat 
   hadoop.hdfs.server.federation.router.TestRouterQuota 
   hadoop.hdfs.server.federation.resolver.order.TestLocalResolver 
   hadoop.yarn.server.resourcemanager.TestClientRMService 
   
hadoop.yarn.server.resourcemanager.monitor.invariants.TestMetricsInvariantChecker
 
   hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter 
   hadoop.mapreduce.lib.input.TestLineRecordReader 
   hadoop.mapred.TestLineRecordReader 
   hadoop.yarn.sls.TestSLSRunner 
   hadoop.resourceestimator.service.TestResourceEstimatorService 
   hadoop.resourceestimator.solver.impl.TestLpSolver 
  

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/644/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/644/artifact/out/diff-compile-javac-root.txt
  [476K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/644/artifact/out/diff-checkstyle-root.txt
  [14M]

   hadolint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/644/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   mvnsite:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/644/artifact/out/patch-mvnsite-root.txt
  [560K]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/644/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/644/artifact/out/diff-patch-pylint.txt
  [20K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/644/artifact/out/diff-patch-shellcheck.txt
  [72K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/644/artifact/out/whitespace-eol.txt
  [12M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/644/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/644/artifact/out/patch-javadoc-root.txt
  [40K]

   unit:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/644/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [216K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/644/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [436K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/644/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [12K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/644/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
  [36K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/644/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt
  [20K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/644/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [128K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/644/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt
  [104K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/644/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt
  [20K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/644/artifact/out/patch-unit-hadoop-tools_hadoop-sls.txt
  [28K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/644/artifact/out/patch-unit-hadoop-tools_hadoop-resourceestimator.txt
  [16K]

   asflicense:

   

Re: [VOTE] Release Apache Hadoop 3.3.3

2022-05-09 Thread Steve Loughran
I've done another Docker build and the client jars appear to be there. I'll
test tomorrow before putting up another vote. It will be exactly the same
commit as before, just recompiled/republished.

On Mon, 9 May 2022 at 17:45, Chao Sun  wrote:

> Agreed, that step #10 is outdated and should be removed (I skipped
> that when releasing Hadoop 3.3.2 but didn't update it, sorry).
>
> > How about using
> https://repository.apache.org/content/repositories/orgapachehadoop-1348/
>
> Akira, I tried this too but it didn't work. I think we'd need the
> artifacts to be properly pushed to the staging repository.
>
> > Could you please let me know how I can consume the Hadoop 3 jars in
> maven?
>
> Gautham (if you are following this thread), you'll need to add the
> following:
>
> <repository>
>   <id>staged</id>
>   <name>staged-releases</name>
>   <url>https://repository.apache.org/content/repositories/staging/</url>
>   <releases>
>     <enabled>true</enabled>
>   </releases>
>   <snapshots>
>     <enabled>true</enabled>
>   </snapshots>
> </repository>
>
> to the `<repositories>` section in your Maven pom.xml file.
>
> On Mon, May 9, 2022 at 8:52 AM Steve Loughran
>  wrote:
> >
> > I didn't do that as the docker image was doing it itself...I discussed
> this
> > with Akira and Ayush & they agreed. so whatever went wrong. it was
> > something else.
> >
> > I have been building a list of things I'd like to change there; cutting
> > that line was one of them. but I need to work out the correct workflow.
> >
> > trying again, and creating a stub module to verify the client is in
> staging
> >
> > On Mon, 9 May 2022 at 15:19, Masatake Iwasaki <
> iwasak...@oss.nttdata.co.jp>
> > wrote:
> >
> > > It seems to be caused by obsolete instruction in HowToRelease Wiki?
> > >
> > > After HADOOP-15058, `mvn deploy` is kicked by
> > > `dev-support/bin/create-release --asfrelease`.
> > > https://issues.apache.org/jira/browse/HADOOP-15058
> > >
> > > Step #10 in "Creating the release candidate (X.Y.Z-RC)" section
> > > of the Wiki still instructs to run `mvn deploy` with `-DskipShade`.
> > >
> > > 2 sets of artifact are deployed after creating RC based on the
> instruction.
> > > The latest one contains empty shaded jars.
> > >
> > > hadoop-client-api and hadoop-client-runtime of already released 3.2.3
> > > looks having same issue...
> > >
> > > Masatake Iwasaki
> > >
> > > On 2022/05/08 6:45, Akira Ajisaka wrote:
> > > > Hi Chao,
> > > >
> > > > How about using
> > > >
> https://repository.apache.org/content/repositories/orgapachehadoop-1348/
> > > > instead of
> https://repository.apache.org/content/repositories/staging/ ?
> > > >
> > > > Akira
> > > >
> > > > On Sat, May 7, 2022 at 10:52 AM Ayush Saxena 
> wrote:
> > > >
> > > >> Hmm, I see the artifacts ideally should have got overwritten by the
> new
> > > >> RC, but they didn’t. The reason seems like the staging path shared
> > > doesn’t
> > > >> have any jars…
> > > >> That is why it was picking the old jars. I think Steve needs to run
> mvn
> > > >> deploy again…
> > > >>
> > > >> Sent from my iPhone
> > > >>
> > > >>> On 07-May-2022, at 7:12 AM, Chao Sun  wrote:
> > > >>>
> > > >>> 
> > > 
> > >  Chao can you use the one that Steve mentioned in the mail?
> > > >>>
> > > >>> Hmm how do I do that? Typically after closing the RC in nexus the
> > > >>> release bits will show up in
> > > >>>
> > > >>
> > >
> https://repository.apache.org/content/repositories/staging/org/apache/hadoop
> > > >>> and Spark build will be able to pick them up for testing. However
> in
> > > >>> this case I don't see any 3.3.3 jars in the URL.
> > > >>>
> > >  On Fri, May 6, 2022 at 6:24 PM Ayush Saxena 
> > > wrote:
> > > 
> > >  There were two 3.3.3 staged. The earlier one was with skipShade,
> the
> > > >> date was also april 22, I archived that. Chao can you use the one
> that
> > > >> Steve mentioned in the mail?
> > > 
> > > > On Sat, 7 May 2022 at 06:18, Chao Sun 
> wrote:
> > > >
> > > > Seems there are some issues with the shaded client as I was not
> able
> > > > to compile Apache Spark with the RC
> > > > (https://github.com/apache/spark/pull/36474). Looks like it's
> > > compiled
> > > > with the `-DskipShade` option and the hadoop-client-api JAR
> doesn't
> > > > contain any class:
> > > >
> > > > ➜  hadoop-client-api jar tf 3.3.3/hadoop-client-api-3.3.3.jar
> > > > META-INF/
> > > > META-INF/MANIFEST.MF
> > > > META-INF/NOTICE.txt
> > > > META-INF/LICENSE.txt
> > > > META-INF/maven/
> > > > META-INF/maven/org.apache.hadoop/
> > > > META-INF/maven/org.apache.hadoop/hadoop-client-api/
> > > > META-INF/maven/org.apache.hadoop/hadoop-client-api/pom.xml
> > > > META-INF/maven/org.apache.hadoop/hadoop-client-api/pom.properties
> > > >
> > > > On Fri, May 6, 2022 at 4:24 PM Stack  wrote:
> > > >>
> > > >> +1 (binding)
> > > >>
> > > >>   * Signature: ok
> > > >>   * Checksum : passed
> > > >>   * Rat check (1.8.0_191): passed
> > > >>- mvn clean 

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2022-05-09 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/864/

[May 8, 2022 6:17:13 PM] (noreply) HDFS-16572. Fix typo in readme of 
hadoop-project-dist




-1 overall


The following subsystems voted -1:
blanks pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

Failed junit tests :

   hadoop.crypto.key.kms.server.TestKMSWithZK 
   hadoop.crypto.key.kms.server.TestKMS 
   hadoop.cli.TestHDFSCLI 
   hadoop.hdfs.TestClientProtocolForPipelineRecovery 
   
hadoop.yarn.server.timeline.security.TestTimelineAuthenticationFilterForV1 
   hadoop.yarn.server.timeline.webapp.TestTimelineWebServicesWithSSL 
   
hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   
hadoop.yarn.server.resourcemanager.metrics.TestCombinedSystemMetricsPublisher 
   
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesDelegationTokenAuthentication
 
   hadoop.yarn.server.resourcemanager.metrics.TestSystemMetricsPublisher 
   hadoop.yarn.server.resourcemanager.webapp.TestRMWebappAuthentication 
   hadoop.yarn.webapp.TestRMWithXFSFilter 
   hadoop.yarn.server.resourcemanager.TestRMHA 
   hadoop.yarn.server.resourcemanager.TestClientRMService 
   hadoop.yarn.client.TestResourceManagerAdministrationProtocolPBClientImpl 
   hadoop.yarn.client.TestGetGroups 
   hadoop.mapred.TestLocalDistributedCacheManager 
   hadoop.hdfs.server.federation.security.TestRouterSecurityManager 
   hadoop.yarn.server.router.webapp.TestRouterWebServicesREST 
   hadoop.yarn.sls.nodemanager.TestNMSimulator 
   hadoop.yarn.sls.TestSLSRunner 
   hadoop.yarn.sls.TestSLSGenericSynth 
   hadoop.yarn.sls.TestSLSDagAMSimulator 
   hadoop.yarn.sls.TestSLSStreamAMSynth 
   hadoop.yarn.sls.appmaster.TestAMSimulator 
   hadoop.yarn.sls.TestReservationSystemInvariants 
  

   cc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/864/artifact/out/results-compile-cc-root.txt
 [96K]

   javac:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/864/artifact/out/results-compile-javac-root.txt
 [340K]

   blanks:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/864/artifact/out/blanks-eol.txt
 [13M]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/864/artifact/out/blanks-tabs.txt
 [2.0M]

   checkstyle:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/864/artifact/out/results-checkstyle-root.txt
 [14M]

   pathlen:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/864/artifact/out/results-pathlen.txt
 [16K]

   pylint:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/864/artifact/out/results-pylint.txt
 [20K]

   shellcheck:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/864/artifact/out/results-shellcheck.txt
 [28K]

   xml:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/864/artifact/out/xml.txt
 [24K]

   javadoc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/864/artifact/out/results-javadoc-javadoc-root.txt
 [400K]

   unit:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/864/artifact/out/patch-unit-hadoop-common-project_hadoop-kms.txt
 [428K]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/864/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 [580K]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/864/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
 [96K]
  

Re: [VOTE] Release Apache Hadoop 3.3.3

2022-05-09 Thread Chao Sun
Agreed, that step #10 is outdated and should be removed (I skipped
that when releasing Hadoop 3.3.2 but didn't update it, sorry).

> How about using 
> https://repository.apache.org/content/repositories/orgapachehadoop-1348/

Akira, I tried this too but it didn't work. I think we'd need the
artifacts to be properly pushed to the staging repository.

> Could you please let me know how I can consume the Hadoop 3 jars in maven?

Gautham (if you are following this thread), you'll need to add the following:

<repository>
  <id>staged</id>
  <name>staged-releases</name>
  <url>https://repository.apache.org/content/repositories/staging/</url>
  <releases>
    <enabled>true</enabled>
  </releases>
  <snapshots>
    <enabled>true</enabled>
  </snapshots>
</repository>

to the `<repositories>` section in your Maven pom.xml file.

On Mon, May 9, 2022 at 8:52 AM Steve Loughran
 wrote:
>
> I didn't do that as the docker image was doing it itself...I discussed this
> with Akira and Ayush & they agreed. so whatever went wrong. it was
> something else.
>
> I have been building a list of things I'd like to change there; cutting
> that line was one of them. but I need to work out the correct workflow.
>
> trying again, and creating a stub module to verify the client is in staging
>
> On Mon, 9 May 2022 at 15:19, Masatake Iwasaki 
> wrote:
>
> > It seems to be caused by obsolete instruction in HowToRelease Wiki?
> >
> > After HADOOP-15058, `mvn deploy` is kicked by
> > `dev-support/bin/create-release --asfrelease`.
> > https://issues.apache.org/jira/browse/HADOOP-15058
> >
> > Step #10 in "Creating the release candidate (X.Y.Z-RC)" section
> > of the Wiki still instructs to run `mvn deploy` with `-DskipShade`.
> >
> > 2 sets of artifact are deployed after creating RC based on the instruction.
> > The latest one contains empty shaded jars.
> >
> > hadoop-client-api and hadoop-client-runtime of already released 3.2.3
> > looks having same issue...
> >
> > Masatake Iwasaki
> >
> > On 2022/05/08 6:45, Akira Ajisaka wrote:
> > > Hi Chao,
> > >
> > > How about using
> > > https://repository.apache.org/content/repositories/orgapachehadoop-1348/
> > > instead of https://repository.apache.org/content/repositories/staging/ ?
> > >
> > > Akira
> > >
> > > On Sat, May 7, 2022 at 10:52 AM Ayush Saxena  wrote:
> > >
> > >> Hmm, I see the artifacts ideally should have got overwritten by the new
> > >> RC, but they didn’t. The reason seems like the staging path shared
> > doesn’t
> > >> have any jars…
> > >> That is why it was picking the old jars. I think Steve needs to run mvn
> > >> deploy again…
> > >>
> > >> Sent from my iPhone
> > >>
> > >>> On 07-May-2022, at 7:12 AM, Chao Sun  wrote:
> > >>>
> > >>> 
> > 
> >  Chao can you use the one that Steve mentioned in the mail?
> > >>>
> > >>> Hmm how do I do that? Typically after closing the RC in nexus the
> > >>> release bits will show up in
> > >>>
> > >>
> > https://repository.apache.org/content/repositories/staging/org/apache/hadoop
> > >>> and Spark build will be able to pick them up for testing. However in
> > >>> this case I don't see any 3.3.3 jars in the URL.
> > >>>
> >  On Fri, May 6, 2022 at 6:24 PM Ayush Saxena 
> > wrote:
> > 
> >  There were two 3.3.3 staged. The earlier one was with skipShade, the
> > >> date was also april 22, I archived that. Chao can you use the one that
> > >> Steve mentioned in the mail?
> > 
> > > On Sat, 7 May 2022 at 06:18, Chao Sun  wrote:
> > >
> > > Seems there are some issues with the shaded client as I was not able
> > > to compile Apache Spark with the RC
> > > (https://github.com/apache/spark/pull/36474). Looks like it's
> > compiled
> > > with the `-DskipShade` option and the hadoop-client-api JAR doesn't
> > > contain any class:
> > >
> > > ➜  hadoop-client-api jar tf 3.3.3/hadoop-client-api-3.3.3.jar
> > > META-INF/
> > > META-INF/MANIFEST.MF
> > > META-INF/NOTICE.txt
> > > META-INF/LICENSE.txt
> > > META-INF/maven/
> > > META-INF/maven/org.apache.hadoop/
> > > META-INF/maven/org.apache.hadoop/hadoop-client-api/
> > > META-INF/maven/org.apache.hadoop/hadoop-client-api/pom.xml
> > > META-INF/maven/org.apache.hadoop/hadoop-client-api/pom.properties
> > >
> > > On Fri, May 6, 2022 at 4:24 PM Stack  wrote:
> > >>
> > >> +1 (binding)
> > >>
> > >>   * Signature: ok
> > >>   * Checksum : passed
> > >>   * Rat check (1.8.0_191): passed
> > >>- mvn clean apache-rat:check
> > >>   * Built from source (1.8.0_191): failed
> > >>- mvn clean install  -DskipTests
> > >>- mvn -fae --no-transfer-progress -DskipTests
> > >> -Dmaven.javadoc.skip=true
> > >> -Pnative -Drequire.openssl -Drequire.snappy -Drequire.valgrind
> > >> -Drequire.zstd -Drequire.test.libhadoop clean install
> > >>   * Unit tests pass (1.8.0_191):
> > >> - HDFS Tests passed (Didn't run more than this).
> > >>
> > >> Deployed a ten node ha hdfs cluster with three namenodes and five
> > >> 

Re: [VOTE] Release Apache Hadoop 3.3.3

2022-05-09 Thread Steve Loughran
I didn't do that, as the Docker image was doing it itself. I discussed this
with Akira and Ayush and they agreed, so whatever went wrong, it was
something else.

I have been building a list of things I'd like to change there; cutting
that line was one of them, but I need to work out the correct workflow.

Trying again, and creating a stub module to verify the client is in staging.

On Mon, 9 May 2022 at 15:19, Masatake Iwasaki 
wrote:

> It seems to be caused by obsolete instruction in HowToRelease Wiki?
>
> After HADOOP-15058, `mvn deploy` is kicked by
> `dev-support/bin/create-release --asfrelease`.
> https://issues.apache.org/jira/browse/HADOOP-15058
>
> Step #10 in "Creating the release candidate (X.Y.Z-RC)" section
> of the Wiki still instructs to run `mvn deploy` with `-DskipShade`.
>
> 2 sets of artifact are deployed after creating RC based on the instruction.
> The latest one contains empty shaded jars.
>
> hadoop-client-api and hadoop-client-runtime of already released 3.2.3
> looks having same issue...
>
> Masatake Iwasaki
>
> On 2022/05/08 6:45, Akira Ajisaka wrote:
> > Hi Chao,
> >
> > How about using
> > https://repository.apache.org/content/repositories/orgapachehadoop-1348/
> > instead of https://repository.apache.org/content/repositories/staging/ ?
> >
> > Akira
> >
> > On Sat, May 7, 2022 at 10:52 AM Ayush Saxena  wrote:
> >
> >> Hmm, I see the artifacts ideally should have got overwritten by the new
> >> RC, but they didn’t. The reason seems like the staging path shared
> doesn’t
> >> have any jars…
> >> That is why it was picking the old jars. I think Steve needs to run mvn
> >> deploy again…
> >>
> >> Sent from my iPhone
> >>
> >>> On 07-May-2022, at 7:12 AM, Chao Sun  wrote:
> >>>
> >>> 
> 
>  Chao can you use the one that Steve mentioned in the mail?
> >>>
> >>> Hmm how do I do that? Typically after closing the RC in nexus the
> >>> release bits will show up in
> >>>
> >>
> https://repository.apache.org/content/repositories/staging/org/apache/hadoop
> >>> and Spark build will be able to pick them up for testing. However in
> >>> this case I don't see any 3.3.3 jars in the URL.
> >>>
>  On Fri, May 6, 2022 at 6:24 PM Ayush Saxena 
> wrote:
> 
>  There were two 3.3.3 staged. The earlier one was with skipShade, the
> >> date was also april 22, I archived that. Chao can you use the one that
> >> Steve mentioned in the mail?
> 
> > On Sat, 7 May 2022 at 06:18, Chao Sun  wrote:
> >
> > Seems there are some issues with the shaded client as I was not able
> > to compile Apache Spark with the RC
> > (https://github.com/apache/spark/pull/36474). Looks like it's
> compiled
> > with the `-DskipShade` option and the hadoop-client-api JAR doesn't
> > contain any class:
> >
> > ➜  hadoop-client-api jar tf 3.3.3/hadoop-client-api-3.3.3.jar
> > META-INF/
> > META-INF/MANIFEST.MF
> > META-INF/NOTICE.txt
> > META-INF/LICENSE.txt
> > META-INF/maven/
> > META-INF/maven/org.apache.hadoop/
> > META-INF/maven/org.apache.hadoop/hadoop-client-api/
> > META-INF/maven/org.apache.hadoop/hadoop-client-api/pom.xml
> > META-INF/maven/org.apache.hadoop/hadoop-client-api/pom.properties
> >
> > On Fri, May 6, 2022 at 4:24 PM Stack  wrote:
> >>
> >> +1 (binding)
> >>
> >>   * Signature: ok
> >>   * Checksum : passed
> >>   * Rat check (1.8.0_191): passed
> >>- mvn clean apache-rat:check
> >>   * Built from source (1.8.0_191): failed
> >>- mvn clean install  -DskipTests
> >>- mvn -fae --no-transfer-progress -DskipTests
> >> -Dmaven.javadoc.skip=true
> >> -Pnative -Drequire.openssl -Drequire.snappy -Drequire.valgrind
> >> -Drequire.zstd -Drequire.test.libhadoop clean install
> >>   * Unit tests pass (1.8.0_191):
> >> - HDFS Tests passed (Didn't run more than this).
> >>
> >> Deployed a ten node ha hdfs cluster with three namenodes and five
> >> journalnodes. Ran a ten node hbase (older version of 2.5 branch
> built
> >> against 3.3.2) against it. Tried a small verification job. Good.
> Ran a
> >> bigger job with mild chaos. All seems to be working properly
> >> (recoveries,
> >> logs look fine). Killed a namenode. Failover worked promptly. UIs
> look
> >> good. Poked at the hdfs cli. Seems good.
> >>
> >> S
> >>
> >> On Tue, May 3, 2022 at 4:24 AM Steve Loughran
> >> 
> >> wrote:
> >>
> >>> I have put together a release candidate (rc0) for Hadoop 3.3.3
> >>>
> >>> The RC is available at:
> >>> https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC0/
> >>>
> >>> The git tag is release-3.3.3-RC0, commit d37586cbda3
> >>>
> >>> The maven artifacts are staged at
> >>>
> >>
> https://repository.apache.org/content/repositories/orgapachehadoop-1348/
> >>>
> >>> You can find my public key at:
> >>> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS

[jira] [Created] (HADOOP-18230) Use async drain threshold to decide b/w async and sync draining

2022-05-09 Thread Ahmar Suhail (Jira)
Ahmar Suhail created HADOOP-18230:
-

 Summary: Use async drain threshold to decide b/w async and sync 
draining
 Key: HADOOP-18230
 URL: https://issues.apache.org/jira/browse/HADOOP-18230
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Ahmar Suhail


[https://github.com/apache/hadoop/pull/4294] introduces changes to drain the 
stream asynchronously. We've also recently introduced 
[ASYNC_DRAIN_THRESHOLD|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java#L612];
we should use this to decide when to drain asynchronously.
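The decision described above can be sketched as a simple threshold comparison: small remainders are drained inline, larger ones handed to a background task so the caller isn't blocked. This is an illustrative sketch only — names such as `shouldDrainAsync` and the example 16 KB threshold are assumptions, not the actual S3A implementation:

```java
import java.util.concurrent.CompletableFuture;

public class DrainDecision {
    /** Illustrative threshold check: true means the remaining bytes are
     *  large enough that draining should happen off the calling thread. */
    static boolean shouldDrainAsync(long bytesRemaining, long asyncDrainThreshold) {
        return bytesRemaining > asyncDrainThreshold;
    }

    /** Run the drain operation inline or asynchronously based on the threshold. */
    static CompletableFuture<Void> drain(long bytesRemaining, long threshold,
                                         Runnable drainOp) {
        if (shouldDrainAsync(bytesRemaining, threshold)) {
            return CompletableFuture.runAsync(drainOp); // async: don't block the caller
        }
        drainOp.run(); // sync: cheap enough to finish inline
        return CompletableFuture.completedFuture(null);
    }

    public static void main(String[] args) {
        System.out.println(shouldDrainAsync(100, 16_000));       // prints false
        System.out.println(shouldDrainAsync(1_000_000, 16_000)); // prints true
    }
}
```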



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[DISCUSS] Hadoop 3.2.4 release

2022-05-09 Thread Masatake Iwasaki

Hi team,

Shaded client artifacts (hadoop-client-api and hadoop-client-runtime)
of Hadoop 3.2.3 published to Maven turned out to be broken
due to an issue in the release process.

In addition, we have accumulated enough fixes on branch-3.2 since branch-3.2.3 was 
created[1].
The migration from log4j to reload4j is one of the major ones.

I would like to cut an RC of 3.2.4 soon after the 3.3.3 release.
I volunteer to take the release manager role, as I did for 3.2.3.

[1] 
https://issues.apache.org/jira/issues/?filter=12350757=project%20in%20(YARN%2C%20HDFS%2C%20HADOOP%2C%20MAPREDUCE)%20AND%20status%20%3D%20Resolved%20AND%20fixVersion%20%3D%203.2.4

Thanks,
Masatake Iwasaki

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 3.3.3

2022-05-09 Thread Masatake Iwasaki

Step #10 in "Creating the release candidate (X.Y.Z-RC)" section
of the Wiki still instructs to run `mvn deploy` with `-DskipShade`.


https://cwiki.apache.org/confluence/display/HADOOP2/HowToRelease#HowToRelease-Creatingthereleasecandidate(X.Y.Z-RC%3CN%3E)

On 2022/05/09 23:18, Masatake Iwasaki wrote:

It seems to be caused by obsolete instruction in HowToRelease Wiki?

After HADOOP-15058, `mvn deploy` is kicked by
`dev-support/bin/create-release --asfrelease`.
https://issues.apache.org/jira/browse/HADOOP-15058

Step #10 in "Creating the release candidate (X.Y.Z-RC)" section
of the Wiki still instructs to run `mvn deploy` with `-DskipShade`.

2 sets of artifact are deployed after creating RC based on the instruction.
The latest one contains empty shaded jars.

hadoop-client-api and hadoop-client-runtime of already released 3.2.3
looks having same issue...

Masatake Iwasaki

On 2022/05/08 6:45, Akira Ajisaka wrote:

Hi Chao,

How about using
https://repository.apache.org/content/repositories/orgapachehadoop-1348/
instead of https://repository.apache.org/content/repositories/staging/ ?

Akira

On Sat, May 7, 2022 at 10:52 AM Ayush Saxena  wrote:


Hmm, I see the artifacts ideally should have been overwritten by the new
RC, but they weren't. The reason seems to be that the staging path that was
shared doesn't have any jars...
That is why it was picking up the old jars. I think Steve needs to run mvn
deploy again...

Sent from my iPhone


On 07-May-2022, at 7:12 AM, Chao Sun  wrote:




Chao can you use the one that Steve mentioned in the mail?


Hmm how do I do that? Typically after closing the RC in nexus the
release bits will show up in


https://repository.apache.org/content/repositories/staging/org/apache/hadoop

and the Spark build will be able to pick them up for testing. However, in
this case I don't see any 3.3.3 jars at that URL.


On Fri, May 6, 2022 at 6:24 PM Ayush Saxena  wrote:

There were two 3.3.3 stagings. The earlier one was built with skipShade,
and its date was also April 22, so I archived that. Chao, can you use the
one that Steve mentioned in the mail?



On Sat, 7 May 2022 at 06:18, Chao Sun  wrote:

Seems there are some issues with the shaded client as I was not able
to compile Apache Spark with the RC
(https://github.com/apache/spark/pull/36474). It looks like it was compiled
with the `-DskipShade` option, and the hadoop-client-api JAR doesn't
contain any classes:

➜  hadoop-client-api jar tf 3.3.3/hadoop-client-api-3.3.3.jar
META-INF/
META-INF/MANIFEST.MF
META-INF/NOTICE.txt
META-INF/LICENSE.txt
META-INF/maven/
META-INF/maven/org.apache.hadoop/
META-INF/maven/org.apache.hadoop/hadoop-client-api/
META-INF/maven/org.apache.hadoop/hadoop-client-api/pom.xml
META-INF/maven/org.apache.hadoop/hadoop-client-api/pom.properties
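A listing like the one above can be checked mechanically. Below is a minimal, self-contained sketch; the listing from this mail is inlined as a stand-in for piping the real `jar tf` output, and the file name is arbitrary:

```shell
# Save the suspect listing (stand-in for: jar tf hadoop-client-api-3.3.3.jar)
cat > listing.txt <<'EOF'
META-INF/
META-INF/MANIFEST.MF
META-INF/NOTICE.txt
META-INF/LICENSE.txt
META-INF/maven/
META-INF/maven/org.apache.hadoop/
META-INF/maven/org.apache.hadoop/hadoop-client-api/
META-INF/maven/org.apache.hadoop/hadoop-client-api/pom.xml
META-INF/maven/org.apache.hadoop/hadoop-client-api/pom.properties
EOF
# A healthy shaded client jar contains thousands of .class entries;
# zero is a red flag that it was built with -DskipShade.
classes=$(grep -c '\.class$' listing.txt || true)
echo "class entries: $classes"
```

Running this against the listing above prints `class entries: 0`, which is exactly the "empty shaded jar" symptom.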

On Fri, May 6, 2022 at 4:24 PM Stack  wrote:


+1 (binding)

  * Signature: ok
  * Checksum : passed
  * Rat check (1.8.0_191): passed
   - mvn clean apache-rat:check
  * Built from source (1.8.0_191): failed
   - mvn clean install  -DskipTests
   - mvn -fae --no-transfer-progress -DskipTests -Dmaven.javadoc.skip=true
     -Pnative -Drequire.openssl -Drequire.snappy -Drequire.valgrind
     -Drequire.zstd -Drequire.test.libhadoop clean install
  * Unit tests pass (1.8.0_191):
    - HDFS Tests passed (Didn't run more than this).
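The signature and checksum steps in the checklist above follow the usual release-verification pattern. A self-contained sketch is below; the artifact here is a fabricated stand-in, since a real check would fetch the RC tarball plus its .sha512 and .asc from the RC directory on dist.apache.org and also run `gpg --verify` on the .asc:

```shell
# Stand-in artifact so the flow runs anywhere; for a real RC, replace this
# with the downloaded hadoop-3.3.3.tar.gz and its published .sha512 file.
printf 'release-bits\n' > artifact.tar.gz
sha512sum artifact.tar.gz > artifact.tar.gz.sha512
# Verify mode re-hashes the file and compares against the recorded digest.
sha512sum -c artifact.tar.gz.sha512
```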

Deployed a ten-node HA HDFS cluster with three namenodes and five
journalnodes. Ran a ten-node HBase (an older version of the 2.5 branch built
against 3.3.2) against it. Tried a small verification job. Good. Ran a
bigger job with mild chaos. All seems to be working properly (recoveries,
logs look fine). Killed a namenode. Failover worked promptly. UIs look
good. Poked at the hdfs CLI. Seems good.

S

On Tue, May 3, 2022 at 4:24 AM Steve Loughran wrote:


I have put together a release candidate (rc0) for Hadoop 3.3.3

The RC is available at:
https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC0/

The git tag is release-3.3.3-RC0, commit d37586cbda3

The maven artifacts are staged at
https://repository.apache.org/content/repositories/orgapachehadoop-1348/


You can find my public key at:
https://dist.apache.org/repos/dist/release/hadoop/common/KEYS

Change log
https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC0/CHANGELOG.md

Release notes
https://dist.apache.org/repos/dist/dev/hadoop/3.3.3-RC0/RELEASENOTES.md


There's a very small number of changes, primarily critical code/packaging
issues and security fixes.


   - The critical fixes which shipped in the 3.2.3 release.
   - CVEs in our code and dependencies
   - Shaded client packaging issues.
   - A switch from log4j to reload4j


reload4j is an active fork of the log4j 1.2.17 library with the classes
which contain CVEs removed. Even though hadoop never used those classes,
they regularly raised alerts on security scans and concern from users.
Switching to the forked project allows us to ship a secure logging
framework. It will complicate the builds of downstream maven/ivy/gradle
projects which exclude our log4j artifacts, as they 

Re: [DISCUSS] Enabling all platform builds in CI for all Hadoop PRs

2022-05-09 Thread Steve Vaughan
+1 on Java 11 for branch-3.3

I'd also like to start addressing the test instability by addressing collisions 
between tests. Currently the tests operate on the same shared resources, 
either directly through system properties like test.build.data, indirectly 
through utility classes like GenericTestUtils, or by modifying shared 
resources within target/test-classes. I've been working towards cleaning up 
flaky tests, and I plan on submitting some patches today.
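The collision mode described above can be illustrated with a tiny sketch; the directories and file names here are stand-ins for a shared test.build.data location, not Hadoop's actual test layout:

```shell
# Two "test runs" writing to one shared data directory: the second run
# silently clobbers the first run's state.
shared=$(mktemp -d)
echo "run1-data" > "$shared/state"
echo "run2-data" > "$shared/state"
echo "shared dir now holds: $(cat "$shared/state")"

# Per-run directories (one way to point test.build.data somewhere unique)
# keep the runs isolated from each other.
dir1=$(mktemp -d); dir2=$(mktemp -d)
echo "run1-data" > "$dir1/state"
echo "run2-data" > "$dir2/state"
echo "isolated run1 still holds: $(cat "$dir1/state")"
```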

From: Steve Loughran 
Sent: Monday, May 9, 2022 7:50 AM
To: Ayush Saxena 
Cc: Gautham Banasandra ; Hadoop Common 
; Hdfs-dev ; yarn-dev 
; d...@hadoop.apache.org 
Subject: Re: [DISCUSS] Enabling all platform builds in CI for all Hadoop PRs

how about for trunk we -wait for it- declare it java11 only?

do that and we cut out a lot of the CI build.

we would have to mandate a test run through branch-3.3 before any
cherrypick there


Re: [DISCUSS] Enabling all platform builds in CI for all Hadoop PRs

2022-05-09 Thread Steve Loughran
how about for trunk we -wait for it- declare it java11 only?

do that and we cut out a lot of the CI build.

we would have to mandate a test run through branch-3.3 before any
cherrypick there
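If trunk did go Java-11 only, one way the floor might be expressed in the build is sketched below. This is a hypothetical pom.xml fragment, not the actual Hadoop change, which would also involve the build scripts and Yetus/CI configuration:

```xml
<!-- hypothetical hadoop-project pom.xml fragment: compile for JDK 11+ -->
<properties>
  <maven.compiler.release>11</maven.compiler.release>
</properties>
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-enforcer-plugin</artifactId>
      <configuration>
        <rules>
          <requireJavaVersion>
            <version>[11,)</version>
          </requireJavaVersion>
        </rules>
      </configuration>
    </plugin>
  </plugins>
</build>
```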

On Sat, 7 May 2022 at 11:19, Ayush Saxena  wrote:

> Three for trunk:
> Java-8
>
> https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/
>
> Java-11
>
> https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/
>
> ARM
>
> https://ci-hadoop.apache.org/view/Hadoop/job/hadoop-qbt-linux-ARM-trunk/
>
> -Ayush
>
> >
> > On 07-May-2022, at 11:49 AM, Gautham Banasandra 
> wrote:
> >
> > Hi all,
> >
> > Although validating cross platform builds at pre-commit
> > would be the most effective approach, I understand the
> > huge disadvantage caused by the slowdown. The best
> > way to tackle this would be to enable parallel builds for
> > the different platforms. I had given it a shot about a year
> > ago[1]; it didn't go well and ran into all sorts of random
> > errors. I think we should make the parallel builds run on
> > different agents as opposed to starting the builds parallelly
> > on the same agent (which is what I had done earlier).
> >
> > So, I'll settle on integrating the full suite of platform
> > builds into the nightly builds. Could anyone please point
> > me to the Jenkins job for this?
> >
> > [1] = https://github.com/apache/hadoop/pull/3166
> >
> > Thanks,
> > --Gautham
> >
> >> On Fri, 6 May 2022 at 21:04, Ayush Saxena  wrote:
> >>
> >> From a functional point of view it does make sense to validate all the
> >> platforms as part of the builds, but pre-commit build times are no
> >> longer a small thing. In the past year or two, they have more than
> >> doubled compared to what they were before. If someone has a change in
> >> HDFS which includes both hdfs-client & hadoop-hdfs, it takes more than
> >> 5 hours, where it used to be around 2 hours.
> >> With the current state, I don't think we should just go and add this
> >> extra overhead. Having them as part of the nightly builds does make
> >> sense for now.
> >>
> >> In the future, if we feel a strong need for this, because we start to
> >> see very frequent failures on other platforms and are left with no
> >> option but to integrate it into our pre-commit jobs, we can explore
> >> running these build phases in parallel. We could also try running other
> >> phases in parallel: for example, the compilation/javadoc builds of
> >> JDK-8 and JDK-11 could run side by side, and we can explore other
> >> opportunities as well to compensate for this time.
> >>
> >> For now let's just integrate it into our nightly builds only and
> >> circle back here in the future if need be.
> >>
> >> -Ayush
> >>
> >>> On Fri, 6 May 2022 at 20:44, Wei-Chiu Chuang 
> wrote:
> >>>
> >>> Running builds for all platforms for each and every PR seems too
> >>> excessive.
> >>>
> >>> How about doing all platform builds in the nightly jobs?
> >>>
> >>> On Fri, May 6, 2022 at 8:02 AM Steve Loughran
>  
> >>> wrote:
> >>>
>  I'm not enthusiastic here as it not only makes the builds slower, it
>  reduces the # of builds we can run through a day
> 
>  one thing I am wondering is could we remove java8 support on some
>  branches?
> 
>  make branch 3.3.2.x (i.e. the 3.3.3 release) the last java 8 build, and
>  this summer's branch-3.3 release (which I'd rebadge 3.4) would ship as
>  java 11 only.
>  that would cut build and test time for those trunk PRs in half... after
>  which the prospect of building on more than one platform becomes more
>  viable.
> 
>  On Thu, 5 May 2022 at 15:34, Gautham Banasandra 
>  wrote:
> 
> > Hi Hadoop devs,
> >
> > Last week, there was a Hadoop build failure on Debian 10 caused by
> > https://github.com/apache/hadoop/pull/3988. In dev-support/jenkins.sh,
> > there's the capability to build and test Hadoop across the supported
> > platforms. Currently, we're limiting this only to those PRs having only
> > C/C++ changes[1], since C/C++ changes are more likely to cause
> > cross-platform build issues, and bypassing the full platform build for
> > non-C/C++ PRs saves a great deal of CI time. However, the build failure
> > caused by PR #3988 motivates me to enable the capability to build and
> > test Hadoop for all the supported platforms for ALL the PRs.
> >
> > While this may cause longer CI runs for each PR, it would immensely
> > reduce the risk of breaking Hadoop across platforms and save us a lot
> > of debugging time. Kindly post your opinion regarding this, and I'll
> > move to enable this capability for all PRs if the response is
> > sufficiently positive.
> >
> > [1] =

[jira] [Created] (HADOOP-18229) Fix Hadoop Common Java Doc Error

2022-05-09 Thread fanshilun (Jira)
fanshilun created HADOOP-18229:
--

 Summary: Fix Hadoop Common Java Doc Error
 Key: HADOOP-18229
 URL: https://issues.apache.org/jira/browse/HADOOP-18229
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: fanshilun






--
This message was sent by Atlassian Jira
(v8.20.7#820007)
