Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/565/

No changes

-1 overall

The following subsystems voted -1:
    asflicense unit

The following subsystems voted -1 but
were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

The following subsystems are considered long running:
(runtime bigger than 1h 0m 0s)
    unit

Specific tests:

    Failed junit tests:
       hadoop.ha.TestZKFailoverController
       hadoop.hdfs.TestClientProtocolForPipelineRecovery
       hadoop.hdfs.server.datanode.TestDirectoryScanner
       hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean
       hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA
       hadoop.hdfs.TestReadStripedFileWithMissingBlocks
       hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting
       hadoop.yarn.server.nodemanager.scheduler.TestDistributedScheduler
       hadoop.yarn.server.resourcemanager.scheduler.capacity.TestContainerAllocation
       hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesApps
       hadoop.yarn.server.resourcemanager.scheduler.fair.TestFSLeafQueue
       hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerLazyPreemption
       hadoop.yarn.server.resourcemanager.TestSubmitApplicationWithRMHA
       hadoop.yarn.server.resourcemanager.TestReservationSystemWithRMHA
       hadoop.yarn.server.resourcemanager.TestRMHA
       hadoop.yarn.server.resourcemanager.TestRMHAForNodeLabels
       hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesCapacitySched
       hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesAppsModification
       hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesSchedulerActivities
       hadoop.yarn.server.resourcemanager.rmcontainer.TestRMContainerImpl
       hadoop.yarn.server.resourcemanager.TestRMHATimelineCollectors
       hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA
       hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacityScheduler
       hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairScheduler
       hadoop.yarn.server.resourcemanager.scheduler.fifo.TestFifoScheduler
       hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerDynamicBehavior
       hadoop.yarn.server.resourcemanager.TestApplicationMasterService
       hadoop.yarn.server.resourcemanager.TestApplicationMasterLauncher
       hadoop.yarn.server.resourcemanager.scheduler.capacity.TestNodeLabelContainerAllocation
       hadoop.yarn.server.TestContainerManagerSecurity
       hadoop.mapreduce.TestMRJobClient

    Timed out junit tests:
       org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.conf.TestZKConfigurationStore
       org.apache.hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStore
       org.apache.hadoop.yarn.server.resourcemanager.TestLeaderElectorService
       org.apache.hadoop.mapred.pipes.TestPipeApplication

   cc:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/565/artifact/out/diff-compile-cc-root.txt [4.0K]

   javac:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/565/artifact/out/diff-compile-javac-root.txt [284K]

   checkstyle:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/565/artifact/out/diff-checkstyle-root.txt [17M]

   pylint:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/565/artifact/out/diff-patch-pylint.txt [20K]

   shellcheck:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/565/artifact/out/diff-patch-shellcheck.txt [20K]

   shelldocs:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/565/artifact/out/diff-patch-shelldocs.txt [12K]

   whitespace:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/565/artifact/out/whitespace-eol.txt [8.5M]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/565/artifact/out/whitespace-tabs.txt [292K]

   javadoc:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/565/artifact/out/diff-javadoc-javadoc-root.txt [760K]

   unit:
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/565/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [148K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/565/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [320K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/565/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt [40K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/565/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [728K]
Re: [DISCUSS] Feature Branch Merge and Security Audits
Looks good, and +1 for markdown documentation to provide per-release specific information.

On Sat, Oct 21, 2017 at 8:47 AM, larry mccay wrote:

> New Revision...
>
> This revision acknowledges the reality that we often have multiple phases
> of feature lifecycle and that we need to account for each phase.
> It has also been made more generic.
> I have created a Tech Preview Security Audit list and a GA Readiness
> Security Audit list.
> I've also included suggested items in the GA Readiness list.
>
> It has also been suggested that we publish the information as part of the
> docs so that the state of such features can be easily determined from
> these pages. We can discuss this aspect as well.
>
> Thoughts?
>
> *Tech Preview Security Audit*
> For features that are being merged without full security model coverage,
> there needs to be a baseline of assurances that they do not introduce new
> attack vectors in deployments that are from actual releases or even just
> built from trunk.
>
> *1. UIs*
>
> 1.1. Are there new UIs added with this merge?
> 1.2. Are they enabled/accessible by default?
> 1.3. Are they hosted in existing processes or as part of a new
> process/server?
> 1.4. If a new process/server, is it launched by default?
>
> *2. APIs*
>
> 2.1. Are there new REST APIs added with this merge?
> 2.2. Are they enabled by default?
> 2.3. Are there RPC-based APIs added with this merge?
> 2.4. Are they enabled by default?
>
> *3. Secure Clusters*
>
> 3.1. Is this feature disabled completely in secure deployments?
> 3.2. If not, is there some justification as to why it should be available?
>
> *4. CVEs*
>
> 4.1. Have all dependencies introduced by this merge been checked for known
> issues?
>
> --
>
> *GA Readiness Security Audit*
> At this point, we are merging full or partial security model
> implementations.
> Let's inventory what is covered by the model at this point and whether
> future merges are required for full coverage.
>
> *1. UIs*
>
> 1.1. What sort of validation is being done on any accepted user input?
> (pointers to code would be appreciated)
> 1.2. What explicit protections have been built in for (pointers to code
> would be appreciated):
> 1.2.1. cross-site scripting
> 1.2.2. cross-site request forgery
> 1.2.3. clickjacking (X-Frame-Options)
> 1.3. What sort of authentication is required for access to the UIs?
> 1.3.1. Kerberos?
> 1.3.1.1. Has TGT renewal been accounted for?
> 1.3.1.2. SPNEGO support?
> 1.3.1.3. Delegation token?
> 1.3.2. Proxy user ACL?
> 1.4. What authorization is available for determining who can access which
> capabilities of the UIs for viewing or modifying data and/or related
> processes?
> 1.5. Is there any input that will ultimately be persisted in configuration
> for executing shell commands or processes?
> 1.6. Do the UIs support the trusted proxy pattern with doas impersonation?
> 1.7. Is there TLS/SSL support?
>
> *2. REST APIs*
>
> 2.1. Do the REST APIs support the trusted proxy pattern with doas
> impersonation capabilities?
> 2.2. What explicit protections have been built in for:
> 2.2.1. cross-site scripting (XSS)
> 2.2.2. cross-site request forgery (CSRF)
> 2.2.3. XML External Entity (XXE)
> 2.3. What is being used for authentication - the Hadoop Auth module?
> 2.4. Are there separate processes for the HTTP resources (UIs and REST
> endpoints) or are they part of existing processes?
> 2.5. Is there TLS/SSL support?
> 2.6. Are there new CLI commands and/or clients for accessing the REST APIs?
> 2.7. What authorization enforcement points are there within the REST APIs?
>
> *3. Encryption*
>
> 3.1. Is there any support for encryption of persisted data?
> 3.2. If so, are KMS and the hadoop key command used for key management?
> 3.3. KMS interaction with proxy users?
>
> *4. Configuration*
>
> 4.1. Are there any passwords or secrets being added to configuration?
> 4.2. If so, are they accessed via Configuration.getPassword() to allow for
> provisioning to credential providers?
> 4.3. Are there any settings that are used to launch docker containers or
> shell out command execution, etc.?
>
> *5. HA*
>
> 5.1. Are there provisions for HA?
> 5.2. Are there any single points of failure?
>
> *6. CVEs*
>
> Dependencies need to have been checked for known issues before we merge.
> We don't, however, want to list any CVEs that have been fixed but not
> released yet.
>
> 6.1. All dependencies checked for CVEs?
>
>
> On Sat, Oct 21, 2017 at 10:26 AM, larry mccay wrote:
>
>> Hi Marton -
>>
>> I don't think there is any denying that it would be great to have such
>> documentation for all of those reasons.
>> If it is a natural extension of getting the checklist information as an
>> assertion of security state when merging, then we can certainly include
>> it.
>>
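[Editor's illustration] Checklist item 4.2 above refers to Configuration.getPassword(), whose point is that a secret can be served from a credential provider (e.g. a JCEKS keystore) instead of cleartext XML, with the config property only as a fallback. The toy class below is a self-contained sketch of that lookup order; MiniConf and its maps are hypothetical stand-ins, not the real Hadoop API.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified model of the Configuration.getPassword() resolution order:
// consult the credential provider first, fall back to the config property.
class MiniConf {
    // Stand-in for entries held by a credential provider (e.g. a JCEKS file).
    private final Map<String, char[]> credentialStore = new HashMap<>();
    // Stand-in for properties loaded from *-site.xml.
    private final Map<String, String> props = new HashMap<>();

    void addCredential(String alias, String secret) {
        credentialStore.put(alias, secret.toCharArray());
    }

    void set(String key, String value) {
        props.put(key, value);
    }

    /** Provider entry wins over cleartext config; null if neither exists. */
    char[] getPassword(String name) {
        char[] fromProvider = credentialStore.get(name);
        if (fromProvider != null) {
            return fromProvider;
        }
        String fallback = props.get(name);
        return fallback == null ? null : fallback.toCharArray();
    }

    public static void main(String[] args) {
        MiniConf conf = new MiniConf();
        conf.set("ssl.server.keystore.password", "cleartext-in-xml");
        // Without a provider entry, the cleartext property is returned.
        System.out.println(new String(conf.getPassword("ssl.server.keystore.password")));
        conf.addCredential("ssl.server.keystore.password", "from-provider");
        // With a provider entry, the cleartext property is shadowed.
        System.out.println(new String(conf.getPassword("ssl.server.keystore.password")));
    }
}
```

A feature that reads secrets this way can ship with cleartext config in dev and switch to provisioned credentials in production without code changes, which is what the audit item is probing for.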
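[Editor's illustration] Checklist item 1.2.3 names X-Frame-Options as the clickjacking defense a new UI should emit. As a rough sketch of the response headers that item (and its XSS sibling) is asking reviewers to look for, the snippet below just assembles them in a map; it is illustrative only, not how any Hadoop daemon wires its filters.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Defensive HTTP response headers a new web UI would typically set.
class UiSecurityHeaders {
    static Map<String, String> defensiveHeaders() {
        Map<String, String> h = new LinkedHashMap<>();
        // Clickjacking: refuse to be framed by other origins (item 1.2.3).
        h.put("X-Frame-Options", "SAMEORIGIN");
        // Stop browsers from MIME-sniffing responses into executable types.
        h.put("X-Content-Type-Options", "nosniff");
        // Legacy hint enabling the browser's reflected-XSS filter.
        h.put("X-XSS-Protection", "1; mode=block");
        return h;
    }

    public static void main(String[] args) {
        defensiveHeaders().forEach((k, v) -> System.out.println(k + ": " + v));
    }
}
```

During an audit, the quick check is whether every new HTTP endpoint's responses carry headers like these, either from a shared servlet filter or per-handler.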
Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/4/

No changes

[Error replacing 'FILE' - Workspace is not accessible]

---------------------------------------------------------------------
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
Apache Hadoop qbt Report: trunk+JDK9 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java9-linux-x86/3/

No changes

-1 overall

The following subsystems voted -1:
    compile findbugs mvninstall mvnsite shadedclient unit

The following subsystems voted -1 but
were configured to be filtered/ignored:
    cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace

Specific tests:

    Failed junit tests:
       hadoop.security.authentication.util.TestKerberosName
       hadoop.security.authentication.server.TestAltKerberosAuthenticationHandler
       hadoop.security.authentication.client.TestKerberosAuthenticator
       hadoop.minikdc.TestChangeOrgNameAndDomain
       hadoop.minikdc.TestMiniKdc

   mvninstall:
       https://builds.apache.org/job/hadoop-qbt-trunk-java9-linux-x86/3/artifact/out/patch-mvninstall-root.txt [1.6M]

   compile:
       https://builds.apache.org/job/hadoop-qbt-trunk-java9-linux-x86/3/artifact/out/patch-compile-root.txt [48K]

   cc:
       https://builds.apache.org/job/hadoop-qbt-trunk-java9-linux-x86/3/artifact/out/patch-compile-root.txt [48K]

   javac:
       https://builds.apache.org/job/hadoop-qbt-trunk-java9-linux-x86/3/artifact/out/patch-compile-root.txt [48K]

   checkstyle:
       https://builds.apache.org/job/hadoop-qbt-trunk-java9-linux-x86/3/artifact/out/diff-checkstyle-root.txt [3.1M]

   mvnsite:
       https://builds.apache.org/job/hadoop-qbt-trunk-java9-linux-x86/3/artifact/out/patch-mvnsite-root.txt [44K]

   pylint:
       https://builds.apache.org/job/hadoop-qbt-trunk-java9-linux-x86/3/artifact/out/diff-patch-pylint.txt [20K]

   shellcheck:
       https://builds.apache.org/job/hadoop-qbt-trunk-java9-linux-x86/3/artifact/out/diff-patch-shellcheck.txt [20K]

   shelldocs:
       https://builds.apache.org/job/hadoop-qbt-trunk-java9-linux-x86/3/artifact/out/diff-patch-shelldocs.txt [12K]

   whitespace:
       https://builds.apache.org/job/hadoop-qbt-trunk-java9-linux-x86/3/artifact/out/whitespace-eol.txt [8.5M]
       https://builds.apache.org/job/hadoop-qbt-trunk-java9-linux-x86/3/artifact/out/whitespace-tabs.txt [292K]

   findbugs:
       https://builds.apache.org/job/hadoop-qbt-trunk-java9-linux-x86/3/artifact/out/branch-findbugs-hadoop-common-project_hadoop-annotations.txt [288K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java9-linux-x86/3/artifact/out/branch-findbugs-hadoop-common-project_hadoop-auth.txt [8.0K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java9-linux-x86/3/artifact/out/branch-findbugs-hadoop-common-project_hadoop-auth-examples.txt [8.0K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java9-linux-x86/3/artifact/out/branch-findbugs-hadoop-common-project_hadoop-common.txt [12K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java9-linux-x86/3/artifact/out/branch-findbugs-hadoop-common-project_hadoop-kms.txt [8.0K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java9-linux-x86/3/artifact/out/branch-findbugs-hadoop-common-project_hadoop-minikdc.txt [8.0K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java9-linux-x86/3/artifact/out/branch-findbugs-hadoop-common-project_hadoop-nfs.txt [4.0K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java9-linux-x86/3/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs.txt [12K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java9-linux-x86/3/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-client.txt [16K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java9-linux-x86/3/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt [8.0K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java9-linux-x86/3/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-nfs.txt [8.0K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java9-linux-x86/3/artifact/out/branch-findbugs-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt [36K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java9-linux-x86/3/artifact/out/branch-findbugs-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-common.txt [12K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java9-linux-x86/3/artifact/out/branch-findbugs-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt [12K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java9-linux-x86/3/artifact/out/branch-findbugs-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-hs.txt [12K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java9-linux-x86/3/artifact/out/branch-findbugs-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-hs-plugins.txt [12K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java9-linux-x86/3/artifact/out/branch-findbugs-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt [20K]
[jira] [Created] (YARN-7379) Moving logging APIs over to slf4j in hadoop-yarn-client
Yeliang Cang created YARN-7379:
----------------------------------

             Summary: Moving logging APIs over to slf4j in hadoop-yarn-client
                 Key: YARN-7379
                 URL: https://issues.apache.org/jira/browse/YARN-7379
             Project: Hadoop YARN
          Issue Type: Sub-task
            Reporter: Yeliang Cang
            Assignee: Yeliang Cang

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)