Re: [DISCUSS] Removal of old hamlet package in 3.3+ or 3.4+

2020-01-15 Thread Masatake Iwasaki

I'm basically +1 on removing the package.
The patch for YARN-9279 still applies to trunk.

If we are still unable to remove the package due to downstream projects,
another option could be to always exclude the hamlet package from the
javadoc build, as is done only in the java9 profile now?

  <plugin>
    <artifactId>maven-javadoc-plugin</artifactId>
    <configuration>
      <excludePackageNames>org.apache.hadoop.yarn.webapp.hamlet</excludePackageNames>
    </configuration>
  </plugin>

On 2020/01/15 13:36, Akira Ajisaka wrote:

Hi folks,

Now I have a strong reason to remove the old hamlet package.

There are 1000+ javadoc warnings in the package, and they fill the output of
the precommit javadoc module. That's why new warnings and errors are ignored,
and sometimes they cause build errors.

Now I'm investigating why the precommit job ignores new javadoc
warnings/errors
in https://issues.apache.org/jira/browse/HADOOP-16802. I'd like to remove
the package to make the investigation easier.

Regards,
Akira

On Thu, Feb 14, 2019 at 3:18 PM Akira Ajisaka  wrote:


Thanks Masatake for your comments.
Added the other Hadoop mailing lists to Cc.


> I'm +1 on making an incompatible change if this blocks other Java
> migration issues, while I don't see a strong reason to hurry, judging from
> the patch of YARN-9279.

Agreed.

-Akira

On Sun, Feb 10, 2019 at 2:00 AM Masatake Iwasaki wrote:

Thanks for working on this, Akira.

  > The only usage I can see is Apache Slider, however, the
  > functionalities of Apache Slider have been merged into YARN.

Do we have mailing lists other than yarn-dev to reach downstream
developers?

It would be better to be confident that the old hamlet package of Hadoop 3 is
used nowhere.

I'm +1 on making an incompatible change if this blocks other Java
migration issues, while I don't see a strong reason to hurry, judging from
the patch of YARN-9279.

Masatake Iwasaki

On 2/3/19 18:10, Akira Ajisaka wrote:

Filed https://issues.apache.org/jira/browse/YARN-9279 to remove the
old hamlet package.

-Akira

On Mon, Jan 21, 2019 at 13:08 Akira Ajisaka wrote:

Hi folks,

I'd like to remove the deprecated hamlet package to reduce the
maintenance cost.

The old hamlet package uses the one-character identifier '_', which is banned
in Java 9+, so HADOOP-11875 deprecated this package and created a profile in
pom.xml so that the package is not compiled when the Java version is 9+. After
the deprecation, we still have to maintain the profile (see YARN-8123 and
HADOOP-16046).

The only usage I can see is Apache Slider; however, the functionality of
Apache Slider has been merged into YARN. Therefore I think no one is using
Slider with Hadoop 3.1+, and we can remove the package in 3.3+.

Any thoughts?

Regards,
Akira



-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



[jira] [Created] (YARN-10087) ATS possible NPE on REST API when data is missing

2020-01-15 Thread Wilfred Spiegelenburg (Jira)
Wilfred Spiegelenburg created YARN-10087:


 Summary: ATS possible NPE on REST API when data is missing
 Key: YARN-10087
 URL: https://issues.apache.org/jira/browse/YARN-10087
 Project: Hadoop YARN
  Issue Type: Bug
  Components: ATSv2
Reporter: Wilfred Spiegelenburg


If the data stored by the ATS is not complete, REST calls to the ATS can return 
an NPE instead of results.

{{{"exception":"NullPointerException","javaClassName":"java.lang.NullPointerException"}}}

The issue shows up when the ATS was down for a short period and new 
applications were started during that time. This causes certain parts of the 
application data to be missing from the ATS store. In most cases this is not a 
problem and data will be returned, but when you start filtering the data, the 
filtering fails and throws the NPE.
 In this case the request was for: 
{{http://:8188/ws/v1/applicationhistory/apps?user=hive'}}

If certain pieces of data are missing, the ATS should not even consider 
returning that data, filtered or not. We should not display partial or 
incomplete data.
 In the case of missing user information, ACL checks cannot be performed 
correctly and we could see more issues.

A similar issue was fixed in YARN-7118 where the queue details were missing. 
That fix just _skips_ the app to prevent the NPE, but that is not the correct 
approach when the user is missing.
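
To make the failure mode concrete, here is a minimal, self-contained sketch 
(not the actual ATS code; {{AppRecord}} and {{filterByUser}} are invented 
stand-ins): a user filter that dereferences a possibly-null user field throws 
exactly this kind of NPE, while a null-safe comparison drops the incomplete 
record instead of failing the whole request. As argued above, silently 
dropping the record is still not enough when the user is missing, since ACL 
checks cannot be performed.

{noformat}
import java.util.ArrayList;
import java.util.List;

// Minimal sketch, not the actual ATS code: AppRecord and filterByUser are
// invented stand-ins used only to illustrate the failure mode.
public class UserFilterSketch {

  // Stand-in for a stored application record whose user field may be missing.
  static class AppRecord {
    final String id;
    final String user;
    AppRecord(String id, String user) { this.id = id; this.user = user; }
  }

  static List<AppRecord> filterByUser(List<AppRecord> apps, String requestedUser) {
    List<AppRecord> results = new ArrayList<>();
    for (AppRecord app : apps) {
      // Calling app.user.equals(requestedUser) here would throw the reported
      // NullPointerException for records written while the ATS was down.
      // Comparing the other way around is null-safe and simply drops the
      // incomplete record from the filtered result.
      if (requestedUser.equals(app.user)) {
        results.add(app);
      }
    }
    return results;
  }

  public static void main(String[] args) {
    List<AppRecord> apps = new ArrayList<>();
    apps.add(new AppRecord("application_1", "hive"));
    apps.add(new AppRecord("application_2", null)); // incomplete record
    System.out.println(filterByUser(apps, "hive").size()); // prints 1
  }
}
{noformat}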



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



[jira] [Created] (YARN-10086) AWS AssumedRoleCredentialProvider needs ExternalId add

2020-01-15 Thread Jon Hartlaub (Jira)
Jon Hartlaub created YARN-10086:
---

 Summary: AWS AssumedRoleCredentialProvider needs ExternalId add
 Key: YARN-10086
 URL: https://issues.apache.org/jira/browse/YARN-10086
 Project: Hadoop YARN
  Issue Type: Improvement
Affects Versions: 3.2.1
Reporter: Jon Hartlaub


AWS has added a security feature to the assume-role function in the form of the 
"ExternalId" key in the AWS Java SDK 
{{STSAssumeRoleSessionCredentialsProvider.Builder}} class. To support this 
security feature, the hadoop-aws {{AssumedRoleCredentialProvider}} needs a 
patch to include this value from the configuration, as well as a new constant 
in the {{org.apache.hadoop.fs.s3a.Constants}} file.
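
For illustration, a hedged sketch (not the actual patch) of how the value 
might be wired from the Hadoop configuration into the SDK builder. The 
configuration key name used below is an assumption chosen for this sketch; 
the real constant would be added to {{org.apache.hadoop.fs.s3a.Constants}}.

{noformat}
import com.amazonaws.auth.STSAssumeRoleSessionCredentialsProvider;
import org.apache.hadoop.conf.Configuration;

// Hedged sketch of the proposed wiring, not the actual AssumedRoleCredentialProvider.
public class ExternalIdSketch {
  public static STSAssumeRoleSessionCredentialsProvider build(
      Configuration conf, String roleArn, String sessionName) {
    STSAssumeRoleSessionCredentialsProvider.Builder builder =
        new STSAssumeRoleSessionCredentialsProvider.Builder(roleArn, sessionName);
    // "fs.s3a.assumed.role.external.id" is a hypothetical key name for this
    // sketch; the patch would define the actual constant in Constants.
    String externalId = conf.getTrimmed("fs.s3a.assumed.role.external.id", "");
    if (!externalId.isEmpty()) {
      // Pass the ExternalId through so STS enforces it when assuming the role.
      builder.withExternalId(externalId);
    }
    return builder.build();
  }
}
{noformat}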



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86

2020-01-15 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/567/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.TestMultipleNNPortQOP 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.mapreduce.v2.TestSpeculativeExecutionWithMRApp 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/567/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/567/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/567/artifact/out/diff-compile-cc-root-jdk1.8.0_232.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/567/artifact/out/diff-compile-javac-root-jdk1.8.0_232.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/567/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/567/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/567/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/567/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/567/artifact/out/diff-patch-shellcheck.txt
  [56K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/567/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/567/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/567/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/567/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/567/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/567/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/567/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_232.txt
  [1.1M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/567/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [244K]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/567/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/567/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
  [96K]
   

[jira] [Created] (YARN-10085) FS-CS converter: remove mixed ordering policy check

2020-01-15 Thread Peter Bacsko (Jira)
Peter Bacsko created YARN-10085:
---

 Summary: FS-CS converter: remove mixed ordering policy check
 Key: YARN-10085
 URL: https://issues.apache.org/jira/browse/YARN-10085
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Peter Bacsko
Assignee: Peter Bacsko


When YARN-9892 gets committed, this part will become unnecessary:

{noformat}
// Validate ordering policy
if (queueConverter.isDrfPolicyUsedOnQueueLevel()) {
  if (queueConverter.isFifoOrFairSharePolicyUsed()) {
throw new ConversionException(
"DRF ordering policy cannot be used together with fifo/fair");
  } else {
capacitySchedulerConfig.set(
CapacitySchedulerConfiguration.RESOURCE_CALCULATOR_CLASS,
DominantResourceCalculator.class.getCanonicalName());
  }
}
{noformat}

We will be able to freely mix fifo/fair/drf, so let's get rid of this strict 
check and also rewrite {{FSQueueConverter.emitOrderingPolicy()}}.
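
For illustration, a hedged sketch (not the actual converter code) of the 
direction this suggests: once mixed policies are allowed, each queue's FS 
policy maps directly to the per-queue CapacityScheduler ordering-policy 
property, with DRF expressed through the resource calculator as in the snippet 
above. The actual rewrite of {{FSQueueConverter.emitOrderingPolicy()}} may 
look different.

{noformat}
import org.apache.hadoop.conf.Configuration;

// Hedged sketch only, not the real FSQueueConverter code.
public class OrderingPolicySketch {
  // Maps a single queue's FS ordering policy to the standard per-queue
  // CapacityScheduler property instead of validating policies globally.
  static void emitOrderingPolicy(String queuePath, String fsPolicy,
      Configuration csConfig) {
    String key = "yarn.scheduler.capacity." + queuePath + ".ordering-policy";
    switch (fsPolicy) {
      case "fifo":
        csConfig.set(key, "fifo");
        break;
      case "fair":
      case "drf":
        // Fair share and DRF both become the "fair" ordering policy; DRF is
        // handled via DominantResourceCalculator (see the snippet above),
        // not via the ordering policy itself.
        csConfig.set(key, "fair");
        break;
      default:
        throw new IllegalArgumentException("Unknown FS policy: " + fsPolicy);
    }
  }
}
{noformat}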



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org