Apache Hadoop qbt Report: trunk+JDK11 on Linux/x86_64

2021-04-27 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86_64/160/

No changes




-1 overall


The following subsystems voted -1:
blanks mvnsite pathlen spotbugs unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

spotbugs :

   module:hadoop-hdfs-project/hadoop-hdfs 
   Redundant nullcheck of oldLock, which is known to be non-null in org.apache.hadoop.hdfs.server.datanode.DataStorage.isPreUpgradableLayout(Storage$StorageDirectory) At DataStorage.java:[line 694] 
   org.apache.hadoop.hdfs.server.namenode.top.window.RollingWindowManager$UserCounts doesn't override java.util.ArrayList.equals(Object) At RollingWindowManager.java:[line 1] 

spotbugs :

   module:hadoop-yarn-project/hadoop-yarn 
   Redundant nullcheck of it, which is known to be non-null in org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker, NMStateStoreService$LocalResourceTrackerState) At ResourceLocalizationService.java:[line 343] 
   Redundant nullcheck of it, which is known to be non-null in org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker, NMStateStoreService$LocalResourceTrackerState) At ResourceLocalizationService.java:[line 356] 
   Boxed value is unboxed and then immediately reboxed in org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result, byte[], byte[], KeyConverter, ValueConverter, boolean) At ColumnRWHelper.java:[line 333] 

spotbugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server 
   Redundant nullcheck of it, which is known to be non-null in org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker, NMStateStoreService$LocalResourceTrackerState) At ResourceLocalizationService.java:[line 343] 
   Redundant nullcheck of it, which is known to be non-null in org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService.recoverTrackerResources(LocalResourcesTracker, NMStateStoreService$LocalResourceTrackerState) At ResourceLocalizationService.java:[line 356] 

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2021-04-27 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/490/

[Apr 26, 2021 3:27:15 AM] (Wei-Chiu Chuang) Revert "HDFS-15624. fix the 
function of setting quota by storage type (#2377)"
[Apr 26, 2021 3:39:50 AM] (Wei-Chiu Chuang) HDFS-15566. NN restart fails after 
RollingUpgrade from 3.1.3/3.2.1 to 3.3.0. Contributed by Brahma Reddy Battula.
[Apr 26, 2021 4:29:28 AM] (Takanobu Asanuma) HDFS-15967. Improve the log for 
Short Circuit Local Reads. Contributed by Bhavik Patel.
[Apr 26, 2021 6:04:52 AM] (noreply) HADOOP-17661. mvn versions:set fails to 
parse pom.xml. (#2952)
[Apr 26, 2021 6:48:39 AM] (noreply) HADOOP-17650. Bump solr to unblock build 
failure with Maven 3.8.1 (#2939)
[Apr 26, 2021 8:42:32 AM] (Wei-Chiu Chuang) Revert "HADOOP-17661. mvn 
versions:set fails to parse pom.xml. (#2952)"
[Apr 26, 2021 9:38:43 AM] (noreply) HDFS-15991. Add location into datanode info 
for NameNodeMXBean (#2933)
[Apr 26, 2021 10:00:23 AM] (noreply) HDFS-15621. Datanode DirectoryScanner uses 
excessive memory (#2849). Contributed by Stephen O'Donnell
[Apr 24, 2021 8:10:10 AM] (Peter Bacsko) YARN-10637. fs2cs: add queue 
autorefresh policy during conversion. Contributed by Qi Zhu.




-1 overall


The following subsystems voted -1:
blanks pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml

Failed junit tests :

   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots 
   hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks 
   hadoop.tools.fedbalance.procedure.TestBalanceProcedureScheduler 
   hadoop.tools.fedbalance.TestDistCpProcedure 
   hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination 
   hadoop.tools.dynamometer.TestDynamometerInfra 
   hadoop.yarn.sls.appmaster.TestAMSimulator 
  

   cc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/490/artifact/out/results-compile-cc-root.txt
 [96K]

   javac:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/490/artifact/out/results-compile-javac-root.txt
 [372K]

   blanks:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/490/artifact/out/blanks-eol.txt
 [13M]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/490/artifact/out/blanks-tabs.txt
 [2.0M]

   checkstyle:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/490/artifact/out/results-checkstyle-root.txt
 [16M]

   pathlen:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/490/artifact/out/results-pathlen.txt
 [16K]

   pylint:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/490/artifact/out/results-pylint.txt
 [20K]

   shellcheck:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/490/artifact/out/results-shellcheck.txt
 [28K]

   xml:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/490/artifact/out/xml.txt
 [24K]

   javadoc:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/490/artifact/out/results-javadoc-javadoc-root.txt
 [1.1M]

   unit:

  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/490/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 [492K]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/490/artifact/out/patch-unit-hadoop-tools_hadoop-federation-balance.txt
 [12K]
  
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/490/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
 [52K]
 

Java 8 Lambdas

2021-04-27 Thread Eric Badger
Hello all,

I'd like to gauge the community's opinion on the usage of lambdas within Hadoop code.
I've been reviewing a lot of patches recently that either add or modify
lambdas and I'm beginning to think that sometimes we, as a community, are
writing lambdas because we can rather than because we should. To me, it
seems that lambdas often decrease the readability of the code, making it
more difficult to understand. I don't personally know a lot about the
performance of lambdas and welcome arguments on behalf of why lambdas
should be used. An additional argument is that lambdas aren't available in
Java 7, and branch-2.10 currently supports Java 7. So any code going back
to branch-2.10 has to be redone upon backporting. Anyway, my main point
here is to encourage us to rethink whether we should be using lambdas in
any given circumstance just because we can.

Eric

p.s. I'm also happy to accept this as my personal "old man yells at cloud"
issue if everyone else thinks lambdas are the greatest
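
[Editor's note: for concreteness, a toy example of the tradeoff described above. This is illustrative only, not code from any Hadoop patch.]

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// The same transformation written both ways, to show the tradeoff:
// readability/brevity of Java 8 streams vs. Java 7 compatibility (branch-2.10).
class LambdaReadability {

  // Pre-Java-8 style: verbose, but backports to branch-2.10 (Java 7) unchanged.
  public static List<String> upperCaseLoop(List<String> in) {
    List<String> out = new ArrayList<String>();
    for (String s : in) {
      out.add(s.toUpperCase());
    }
    return out;
  }

  // Java 8 style: shorter, but must be rewritten when backported to Java 7.
  public static List<String> upperCaseStream(List<String> in) {
    return in.stream().map(String::toUpperCase).collect(Collectors.toList());
  }

  public static void main(String[] args) {
    List<String> words = Arrays.asList("yarn", "hdfs");
    // Both forms are equivalent; the question is which reads better.
    System.out.println(upperCaseLoop(words).equals(upperCaseStream(words))); // prints "true"
  }
}
```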


[jira] [Created] (YARN-10759) Encapsulate queue config modes

2021-04-27 Thread Andras Gyori (Jira)
Andras Gyori created YARN-10759:
---

 Summary: Encapsulate queue config modes
 Key: YARN-10759
 URL: https://issues.apache.org/jira/browse/YARN-10759
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: capacity scheduler
Reporter: Andras Gyori
Assignee: Andras Gyori


Capacity Scheduler queues have three modes:
 * relative/percentage
 * weight
 * absolute

Most of them have their own:
 * validation logic
 * config setting logic
 * effective capacity calculation logic

This validation, configuration, and capacity-calculation logic can easily be 
extracted and encapsulated in separate config mode classes. 
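
[Editor's note: one possible shape of the encapsulation. All names and signatures below are hypothetical sketches, not taken from the actual YARN-10759 patch.]

```java
// Hypothetical sketch: one interface per concern, one implementation per mode.
interface QueueCapacityConfigMode {
  // Validation logic: reject values invalid for this mode.
  void validate(float configured);
  // Effective capacity calculation logic for this mode.
  float effectiveCapacity(float configured, float parentCapacity);
}

// Relative/percentage mode: configured value is a percentage of the parent.
class PercentageMode implements QueueCapacityConfigMode {
  @Override
  public void validate(float configured) {
    if (configured < 0f || configured > 100f) {
      throw new IllegalArgumentException("percentage must be in [0, 100]");
    }
  }
  @Override
  public float effectiveCapacity(float configured, float parentCapacity) {
    return parentCapacity * configured / 100f;
  }
}

// Weight mode: configured value is a share of the total sibling weight.
class WeightMode implements QueueCapacityConfigMode {
  private final float totalSiblingWeight;
  WeightMode(float totalSiblingWeight) {
    this.totalSiblingWeight = totalSiblingWeight;
  }
  @Override
  public void validate(float configured) {
    if (configured <= 0f) {
      throw new IllegalArgumentException("weight must be positive");
    }
  }
  @Override
  public float effectiveCapacity(float configured, float parentCapacity) {
    return parentCapacity * configured / totalSiblingWeight;
  }
}
```

With this shape, the scheduler can select one mode object per queue instead of branching on the mode in several places.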



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



[jira] [Created] (YARN-10758) Mixed mode: Allow relative and absolute mode in the same queue hierarchy

2021-04-27 Thread Andras Gyori (Jira)
Andras Gyori created YARN-10758:
---

 Summary: Mixed mode: Allow relative and absolute mode in the same 
queue hierarchy
 Key: YARN-10758
 URL: https://issues.apache.org/jira/browse/YARN-10758
 Project: Hadoop YARN
  Issue Type: Sub-task
  Components: capacity scheduler
Reporter: Andras Gyori
Assignee: Andras Gyori


Fair Scheduler supports mixed mode for maximum capacity. An example of such a 
configuration:
{noformat}
root.a.capacity [memory-mb=7268, vcores=8]
root.a.a1.capacity 50
root.a.a2.capacity 50
{noformat}
Capacity Scheduler already permits using weight mode and relative/percentage 
mode in the same hierarchy; however, absolute mode and relative mode are 
mutually exclusive. This improvement is a natural extension of CS that lifts 
this limitation.
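
[Editor's note: a hypothetical capacity-scheduler.xml fragment for the hierarchy above, assuming the limitation is lifted. Property names follow the existing yarn.scheduler.capacity.* convention; this mixed combination is not valid in CS today.]

```xml
<!-- Hypothetical mixed-mode configuration; not valid until this limitation is lifted. -->
<property>
  <name>yarn.scheduler.capacity.root.a.capacity</name>
  <value>[memory=7268,vcores=8]</value> <!-- absolute mode -->
</property>
<property>
  <name>yarn.scheduler.capacity.root.a.a1.capacity</name>
  <value>50</value> <!-- relative/percentage mode -->
</property>
<property>
  <name>yarn.scheduler.capacity.root.a.a2.capacity</name>
  <value>50</value>
</property>
```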



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64

2021-04-27 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/281/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint mvnsite pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.fs.TestFileUtil 
   hadoop.crypto.key.kms.server.TestKMS 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.contrib.bkjournal.TestBookKeeperJournalManager 
   hadoop.hdfs.TestFileCorruption 
   hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain 
   hadoop.hdfs.server.datanode.TestBlockRecovery 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.server.namenode.ha.TestBootstrapStandby 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.contrib.bkjournal.TestBookKeeperJournalManager 
   hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat 
   hadoop.hdfs.server.federation.resolver.order.TestLocalResolver 
   hadoop.hdfs.server.federation.router.TestRouterQuota 
   hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver 
   hadoop.yarn.server.resourcemanager.monitor.invariants.TestMetricsInvariantChecker 
   hadoop.yarn.server.resourcemanager.TestClientRMService 
   hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter 
   hadoop.yarn.sls.TestSLSRunner 
   hadoop.resourceestimator.service.TestResourceEstimatorService 
   hadoop.resourceestimator.solver.impl.TestLpSolver 
  

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/281/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/281/artifact/out/diff-compile-javac-root.txt
  [496K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/281/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/281/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   mvnsite:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/281/artifact/out/patch-mvnsite-root.txt
  [584K]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/281/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/281/artifact/out/diff-patch-pylint.txt
  [48K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/281/artifact/out/diff-patch-shellcheck.txt
  [56K]

   shelldocs:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/281/artifact/out/diff-patch-shelldocs.txt
  [48K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/281/artifact/out/whitespace-eol.txt
  [12M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/281/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/281/artifact/out/diff-javadoc-javadoc-root.txt
  [20K]

   findbugs:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/281/artifact/out/branch-findbugs-hadoop-common-project_hadoop-annotations.txt
  [4.0K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/281/artifact/out/branch-findbugs-hadoop-maven-plugins.txt
  [4.0K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/281/artifact/out/branch-findbugs-hadoop-common-project_hadoop-minikdc.txt
  [4.0K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/281/artifact/out/branch-findbugs-hadoop-common-project_hadoop-auth.txt
  [8.0K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/281/artifact/out/branch-findbugs-hadoop-common-project_hadoop-auth-examples.txt
  [4.0K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/281/artifact/out/branch-findbugs-hadoop-common-project_hadoop-common.txt
  [132K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/281/artifact/out/branch-findbugs-hadoop-common-project_hadoop-nfs.txt
  [4.0K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/281/artifact/out/branch-findbugs-hadoop-common-project_hadoop-kms.txt
  [8.0K]
   

[jira] [Created] (YARN-10757) jsonschema2pojo-maven-plugin version is not defined

2021-04-27 Thread Akira Ajisaka (Jira)
Akira Ajisaka created YARN-10757:


 Summary: jsonschema2pojo-maven-plugin version is not defined
 Key: YARN-10757
 URL: https://issues.apache.org/jira/browse/YARN-10757
 Project: Hadoop YARN
  Issue Type: Bug
  Components: build
Reporter: Akira Ajisaka


The version of the Maven plugin below is not defined.
{noformat}
$ mvn install  
[INFO] Scanning for projects...
[WARNING] 
[WARNING] Some problems were encountered while building the effective model for 
org.apache.hadoop:hadoop-yarn-server-resourcemanager:jar:3.4.0-SNAPSHOT
[WARNING] 'build.plugins.plugin.version' for 
org.jsonschema2pojo:jsonschema2pojo-maven-plugin is missing. @ line 448, column 
15
[WARNING] 
[WARNING] It is highly recommended to fix these problems because they threaten 
the stability of your build.
[WARNING] 
[WARNING] For this reason, future Maven versions might no longer support 
building such malformed projects.
[WARNING] 
{noformat}
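
[Editor's note: a typical fix is to pin the plugin version in the resourcemanager pom or a parent pluginManagement section. This is a sketch; the version shown is illustrative, not necessarily what the eventual patch uses.]

```xml
<!-- Sketch of a fix: pin the plugin version (1.0.2 here is illustrative). -->
<pluginManagement>
  <plugins>
    <plugin>
      <groupId>org.jsonschema2pojo</groupId>
      <artifactId>jsonschema2pojo-maven-plugin</artifactId>
      <version>1.0.2</version>
    </plugin>
  </plugins>
</pluginManagement>
```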



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



Re: [DISCUSS] hadoop-thirdparty 1.1.0 release

2021-04-27 Thread Wei-Chiu Chuang
I'll start preparing the release vote for third-party 1.1.0. Thanks, all, for
the help!

On Mon, Apr 26, 2021 at 7:20 PM Wei-Chiu Chuang 
wrote:

> Thanks. I created a Jenkins job to upload the SNAPSHOT to Apache nexus
> repository.
>
> https://ci-hadoop.apache.org/view/Hadoop/job/Hadoop-thirdparty-trunk-Commit/
>
> I can see the new artifacts uploaded. Let's see if the main Hadoop repo
> precommit can consume the bits.
>
> On Mon, Apr 26, 2021 at 6:02 PM Ayush Saxena  wrote:
>
>> Yep, you have to do it manually
>>
>> -Ayush
>>
>> On 26-Apr-2021, at 3:23 PM, Wei-Chiu Chuang  wrote:
>>
>> 
>> Does anyone know how we publish hadoop-thirdparty SNAPSHOT artifacts?
>>
>> The main Hadoop artifacts are published by this job
>> https://ci-hadoop.apache.org/view/Hadoop/job/Hadoop-trunk-Commit/ after
>> every commit.
>> However, we don't seem to publish hadoop-thirdparty regularly. (Apache
>> nexus:
>> https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/thirdparty/
>> )
>>
>> Are they published manually?
>>
>>
>> On Fri, Apr 23, 2021 at 6:06 PM Ayush Saxena  wrote:
>>
>>> Regarding Guava: before the release, once you merge the change to the thirdparty
>>> repo, you can update the hadoop thirdparty snapshot; the hadoop code would pick
>>> that up, and you can verify everything is safe and clean before release. Unless
>>> you have a better way to verify or have already verified!
>>>
>>> -Ayush
>>>
>>> > On 23-Apr-2021, at 3:16 PM, Wei-Chiu Chuang
>>>  wrote:
>>> >
>>> > Another suggestion: looks like the shaded jaeger is not being used by
>>> > Hadoop code. Maybe we can remove that from the release for now? I don't
>>> > want to release something that's not being used.
>>> > We can release the shaded jaeger when it's ready for use. We will have
>>> to
>>> > update the jaeger version anyway. The version used is too old.
>>> >
>>> >> On Fri, Apr 23, 2021 at 10:55 AM Wei-Chiu Chuang 
>>> wrote:
>>> >>
>>> >> Hi community,
>>> >>
>>> >> In preparation of the Hadoop 3.3.1 release, I am starting a thread to
>>> >> discuss its prerequisite: the release of hadoop-thirdparty 1.1.0.
>>> >>
>>> >> My plan:
>>> >> update guava to 30.1.1 (latest). I have the PR ready to merge.
>>> >>
>>> >> Do we want to update protobuf and jaeger? Anything else?
>>> >>
>>> >> I suppose we won't update protobuf too frequently.
>>> >> Jaeger is under active development. We're currently on 0.34.2, the
>>> latest
>>> >> is 1.22.0.
>>> >>
>>> >> If there is no change to this plan, I can start the release work as
>>> soon
>>> >> as possible.
>>> >>
>>>
>>> -
>>> To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
>>> For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
>>>
>>>