Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-01-24 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/667/

[Jan 23, 2018 5:09:11 PM] (stevel) HADOOP-15185. Update adls connector to use 
the current version of ADLS
[Jan 23, 2018 6:53:27 PM] (jianhe) YARN-7766. Introduce a new config property 
for YARN Service dependency
[Jan 23, 2018 10:03:53 PM] (jianhe) YARN-7782. Enable user re-mapping for 
Docker containers in
[Jan 24, 2018 1:54:39 AM] (billie) YARN-7540 and YARN-7605. Convert yarn app 
cli to call yarn api services
[Jan 24, 2018 2:43:36 AM] (yqlin) HDFS-12963. Error log level in 
ShortCircuitRegistry#removeShm.
[Jan 24, 2018 3:15:44 AM] (inigoiri) HDFS-12772. RBF: Federation Router State 
State Store internal API.
[Jan 24, 2018 5:07:05 AM] (szegedim) YARN-7796. Container-executor fails with 
segfault on certain OS
[Jan 24, 2018 9:34:15 AM] (rohithsharmaks) Revert "YARN-7537 [Atsv2] load hbase 
configuration from filesystem
[Jan 24, 2018 10:26:59 AM] (rohithsharmaks) YARN-7749. [UI2] GPU information 
tab in left hand side disappears when
[Jan 24, 2018 11:26:13 AM] (sunilg) YARN-7806. Distributed Shell should use 
timeline async api's.




-1 overall


The following subsystems voted -1:
asflicense findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
   org.apache.hadoop.yarn.api.records.Resource.getResources() may expose internal representation by returning Resource.resources At Resource.java:[line 234] 
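
This is FindBugs' EI_EXPOSE_REP warning: a getter returns a reference to a mutable internal array. A minimal sketch of the flagged pattern and the conventional defensive-copy fix, assuming a ResourceInformation[] field (the actual Resource class may expose the array intentionally for performance and suppress the warning instead):

{code:java}
import org.apache.hadoop.yarn.api.records.ResourceInformation;

// Sketch only; the field layout is an assumption, not Resource's internals.
public class ResourceExposureSketch {
  private ResourceInformation[] resources;

  // Flagged pattern (EI_EXPOSE_REP): callers receive the internal array and
  // can mutate the object behind its back.
  public ResourceInformation[] getResourcesUnsafe() {
    return resources;
  }

  // Conventional fix: hand out a defensive copy instead.
  public ResourceInformation[] getResourcesSafe() {
    return resources == null ? null : resources.clone();
  }
}
{code}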

Failed junit tests :

   hadoop.hdfs.TestReadStripedFileWithMissingBlocks 
   hadoop.hdfs.TestLeaseRecovery2 
   hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA 
   hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped 
   hadoop.yarn.server.nodemanager.TestNMAuditLogger 
   hadoop.yarn.server.nodemanager.containermanager.linux.runtime.TestDockerContainerRuntime 
   hadoop.yarn.server.nodemanager.webapp.TestNMWebServicesApps 
   hadoop.yarn.server.nodemanager.TestNodeManagerShutdown 
   hadoop.yarn.server.nodemanager.webapp.TestNMWebServices 
   hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage 
   hadoop.yarn.server.nodemanager.TestNodeManagerReboot 
   hadoop.yarn.server.nodemanager.webapp.TestNMWebServicesContainers 
   hadoop.yarn.server.nodemanager.containermanager.scheduler.TestContainerSchedulerQueuing 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapreduce.lib.output.TestJobOutputCommitter 
   hadoop.mapreduce.v2.TestMROldApiJobs 
   hadoop.mapreduce.v2.TestUberAM 
   hadoop.mapred.TestMRTimelineEventHandling 
   hadoop.mapred.TestJobCleanup 
  

   cc:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/667/artifact/out/diff-compile-cc-root.txt  [4.0K]

   javac:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/667/artifact/out/diff-compile-javac-root.txt  [280K]

   checkstyle:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/667/artifact/out/diff-checkstyle-root.txt  [17M]

   pylint:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/667/artifact/out/diff-patch-pylint.txt  [24K]

   shellcheck:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/667/artifact/out/diff-patch-shellcheck.txt  [20K]

   shelldocs:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/667/artifact/out/diff-patch-shelldocs.txt  [12K]

   whitespace:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/667/artifact/out/whitespace-eol.txt  [9.2M]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/667/artifact/out/whitespace-tabs.txt  [292K]

   findbugs:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/667/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-warnings.html  [8.0K]

   javadoc:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/667/artifact/out/diff-javadoc-javadoc-root.txt  [760K]

   unit:

   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/667/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt  [252K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/667/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt  [304K]
   https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/667/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt  [16K]
   

[jira] [Resolved] (YARN-7759) [UI2]GPU chart shows as "Available: 0" even though GPU is available

2018-01-24 Thread Wangda Tan (JIRA)

 [ https://issues.apache.org/jira/browse/YARN-7759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wangda Tan resolved YARN-7759.
--
Resolution: Duplicate

Duplicated by YARN-7817

> [UI2]GPU chart shows as "Available: 0" even though GPU is available
> ---
>
> Key: YARN-7759
> URL: https://issues.apache.org/jira/browse/YARN-7759
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Sumana Sathish
>Assignee: Vasudevan Skm
>Priority: Major
>
> The GPU chart on the Node Manager page shows zero GPUs available even though 
> GPUs are present. Only when we click the 'GPU Information' chart does it show 
> the correct GPU information.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



[jira] [Created] (YARN-7817) Add Resource reference to RM's NodeInfo object so REST API can get non memory/vcore resource usages.

2018-01-24 Thread Wangda Tan (JIRA)
Wangda Tan created YARN-7817:


 Summary: Add Resource reference to RM's NodeInfo object so REST 
API can get non memory/vcore resource usages.
 Key: YARN-7817
 URL: https://issues.apache.org/jira/browse/YARN-7817
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Sumana Sathish
Assignee: Wangda Tan






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



[jira] [Created] (YARN-7816) YARN Service - Two different users are unable to launch a service of the same name

2018-01-24 Thread Gour Saha (JIRA)
Gour Saha created YARN-7816:
---

 Summary: YARN Service - Two different users are unable to launch a 
service of the same name
 Key: YARN-7816
 URL: https://issues.apache.org/jira/browse/YARN-7816
 Project: Hadoop YARN
  Issue Type: Bug
  Components: applications
Reporter: Gour Saha


Now that YARN-7605 is committed, I am able to create a service in an unsecured 
cluster from the command line as the logged-in user. However, when I log in as a 
different user, I am unable to create a service with the exact same name. This 
should be supported in a multi-user setup.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



[jira] [Created] (YARN-7815) Mount the filecache as read-only in Docker containers

2018-01-24 Thread Shane Kumpf (JIRA)
Shane Kumpf created YARN-7815:
-

 Summary: Mount the filecache as read-only in Docker containers
 Key: YARN-7815
 URL: https://issues.apache.org/jira/browse/YARN-7815
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Shane Kumpf


Currently, when using the Docker runtime, the filecache directories are mounted 
read-write into the Docker containers. Read-write access is not necessary, so we 
should make this more restrictive by changing that mount to read-only, as in the 
sketch below.
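
A minimal sketch of the change, assuming DockerRunCommand's mount builder methods (the constructor and method signatures here are recollections of the 3.x Docker runtime, not the committed patch):

{code:java}
import org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.docker.DockerRunCommand;

public class FilecacheMountSketch {
  // Sketch only; method names are assumptions based on the 3.x runtime.
  static DockerRunCommand withReadOnlyFilecache(String containerId,
      String runAsUser, String image, String filecacheDir) {
    DockerRunCommand runCommand =
        new DockerRunCommand(containerId, runAsUser, image);
    // Before: runCommand.addReadWriteMountLocation(filecacheDir, filecacheDir);
    // After: mount the shared filecache read-only (":ro" in docker-run terms).
    runCommand.addReadOnlyMountLocation(filecacheDir, filecacheDir);
    return runCommand;
  }
}
{code}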



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



[jira] [Created] (YARN-7814) Remove automatic mounting of cgroups into Docker containers

2018-01-24 Thread Shane Kumpf (JIRA)
Shane Kumpf created YARN-7814:
-

 Summary: Remove automatic mounting of cgroups into Docker 
containers
 Key: YARN-7814
 URL: https://issues.apache.org/jira/browse/YARN-7814
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Shane Kumpf


Currently, all Docker containers launched by {{DockerLinuxContainerRuntime}} 
get /sys/fs/cgroup automatically mounted. Now that user-supplied mounts 
(YARN-5534) are in, containers that require this mount can request it (with a 
properly configured mount whitelist), as sketched below.

I propose we remove the automatic mounting of /sys/fs/cgroup into Docker 
containers.
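
For containers that still need the cgroup filesystem, the request could go through the user-supplied mount mechanism; a hedged sketch, assuming the YARN_CONTAINER_RUNTIME_DOCKER_MOUNTS variable and source:dest:mode syntax from YARN-5534 (verify against your version's Docker documentation):

{code:java}
import java.util.HashMap;
import java.util.Map;

public class CgroupMountRequestSketch {
  // Sketch: request the cgroup mount explicitly rather than relying on the
  // automatic mount. The env var name and source:dest:mode syntax are
  // assumptions based on the user-supplied mount feature (YARN-5534).
  static Map<String, String> dockerEnv() {
    Map<String, String> env = new HashMap<>();
    env.put("YARN_CONTAINER_RUNTIME_TYPE", "docker");
    env.put("YARN_CONTAINER_RUNTIME_DOCKER_MOUNTS",
        "/sys/fs/cgroup:/sys/fs/cgroup:ro");
    // Pass env to ContainerLaunchContext#setEnvironment; the source path must
    // also be allowed by the NM mount whitelist for the request to succeed.
    return env;
  }
}
{code}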



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



[jira] [Created] (YARN-7813) Capacity Scheduler Intra-queue Preemption should be configurable for each queue

2018-01-24 Thread Eric Payne (JIRA)
Eric Payne created YARN-7813:


 Summary: Capacity Scheduler Intra-queue Preemption should be 
configurable for each queue
 Key: YARN-7813
 URL: https://issues.apache.org/jira/browse/YARN-7813
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: capacity scheduler, scheduler preemption
Affects Versions: 3.0.0, 2.8.3, 2.9.0
Reporter: Eric Payne
Assignee: Eric Payne


Just as inter-queue (a.k.a. cross-queue) preemption is configurable per queue, 
intra-queue (a.k.a. in-queue) preemption should be configurable per queue. If a 
queue does not have a setting for intra-queue preemption, it should inherit its 
parent's value. A configuration sketch follows below.
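
By analogy with the established per-queue disable_preemption switch for cross-queue preemption, a per-queue intra-queue setting might look like this sketch; the intra-queue property key is hypothetical, only the cross-queue key is established:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class IntraQueuePreemptionSketch {
  static Configuration example() {
    Configuration conf = new Configuration();
    // Existing per-queue switch for cross-queue preemption.
    conf.setBoolean(
        "yarn.scheduler.capacity.root.queueA.disable_preemption", true);
    // Hypothetical analogous switch for intra-queue preemption (YARN-7813);
    // if unset on a queue, the value would be inherited from its parent.
    conf.setBoolean(
        "yarn.scheduler.capacity.root.queueA.intra-queue-preemption.disable_preemption",
        true);
    return conf;
  }
}
{code}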



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



[jira] [Created] (YARN-7812) Improvements to Rich Placement Constraints in YARN

2018-01-24 Thread Arun Suresh (JIRA)
Arun Suresh created YARN-7812:
-

 Summary: Improvements to Rich Placement Constraints in YARN
 Key: YARN-7812
 URL: https://issues.apache.org/jira/browse/YARN-7812
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Arun Suresh






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



[jira] [Resolved] (YARN-5818) Support the Docker Live Restore feature

2018-01-24 Thread Shane Kumpf (JIRA)

 [ https://issues.apache.org/jira/browse/YARN-5818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shane Kumpf resolved YARN-5818.
---
Resolution: Duplicate

> Support the Docker Live Restore feature
> ---
>
> Key: YARN-5818
> URL: https://issues.apache.org/jira/browse/YARN-5818
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
>
> Docker 1.12.x introduced the Docker [Live 
> Restore|https://docs.docker.com/engine/admin/live-restore/] feature, which 
> allows Docker containers to survive Docker daemon restarts/upgrades. Support 
> for this feature should be added to YARN so that Docker changes and upgrades 
> are less disruptive to existing containers.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



[jira] [Resolved] (YARN-6305) Improve signaling of short lived containers

2018-01-24 Thread Shane Kumpf (JIRA)

 [ https://issues.apache.org/jira/browse/YARN-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shane Kumpf resolved YARN-6305.
---
Resolution: Duplicate

> Improve signaling of short lived containers
> ---
>
> Key: YARN-6305
> URL: https://issues.apache.org/jira/browse/YARN-6305
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: yarn
>Reporter: Shane Kumpf
>Assignee: Shane Kumpf
>Priority: Major
>
> Currently, it is possible for containers to leak and remain in an exited state 
> if a Docker container is not fully started before being killed. Depending on 
> the selected Docker storage driver, the lower bound on starting a container 
> can be as much as three seconds (using {{docker run}}). If an implicit image 
> pull occurs, this could take much longer.
> When a container is not fully started, the PID is not available yet. As a 
> result, {{ContainerLaunch#cleanUpContainer}} will not signal the container, as 
> it relies on the PID. The PID is not required for docker client operations, 
> so allowing the signaling to occur anyway appears to be appropriate.
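
To illustrate why the PID is unnecessary for the Docker case: docker can deliver a signal by container name alone, roughly as in this hypothetical sketch (not the NM's actual code path, which goes through container-executor and the Docker runtime classes):

{code:java}
public class SignalByNameSketch {
  // Illustrative only: "docker kill --signal" addresses the container by
  // name, so no host PID is required to signal it.
  static int signal(String containerName) throws Exception {
    Process p = new ProcessBuilder(
        "docker", "kill", "--signal=SIGTERM", containerName)
        .inheritIO().start();
    return p.waitFor();
  }
}
{code}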



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



[jira] [Created] (YARN-7811) Service AM should use configured default docker network

2018-01-24 Thread Billie Rinaldi (JIRA)
Billie Rinaldi created YARN-7811:


 Summary: Service AM should use configured default docker network
 Key: YARN-7811
 URL: https://issues.apache.org/jira/browse/YARN-7811
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Billie Rinaldi


Currently the DockerProviderService used by the Service AM hardcodes bridge as 
the default Docker network. We already have a YARN configuration property for 
the default network, so the Service AM should honor it; see the sketch below.
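
A minimal sketch of honoring the cluster default instead of hardcoding bridge, assuming the NM's default-container-network property (the exact key, and how DockerProviderService would consume it, are assumptions):

{code:java}
import org.apache.hadoop.conf.Configuration;

public class DockerNetworkSketch {
  // Sketch for YARN-7811: read the cluster-wide default Docker network from
  // configuration instead of hardcoding "bridge". The property key is an
  // assumption based on the NM Docker runtime settings.
  static String defaultNetwork(Configuration conf) {
    return conf.get(
        "yarn.nodemanager.runtime.linux.docker.default-container-network",
        "bridge"); // fall back to bridge only when nothing is configured
  }
}
{code}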



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



[jira] [Created] (YARN-7810) TestDockerContainerRuntime test failures due to UID lookup of a non-existent user

2018-01-24 Thread Shane Kumpf (JIRA)
Shane Kumpf created YARN-7810:
-

 Summary: TestDockerContainerRuntime test failures due to UID 
lookup of a non-existent user
 Key: YARN-7810
 URL: https://issues.apache.org/jira/browse/YARN-7810
 Project: Hadoop YARN
  Issue Type: Bug
 Environment: YARN-7782 enabled the Docker runtime feature to remap the 
username to uid:gid form for launching Docker containers. The feature does an 
{{id -u}} and {{id -G}} to get the UID and GIDs. This fails with the test user, 
as that user doesn't actually exist on the host.
{code:java}
[ERROR] 
testContainerLaunchWithCustomNetworks(org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.TestDockerContainerRuntime)
  Time elapsed: 0.411 s  <<< ERROR!
org.apache.hadoop.yarn.server.nodemanager.containermanager.runtime.ContainerExecutionException:
 
ExitCodeException exitCode=1: id: 'run_as_user': no such user

at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.getUserIdInfo(DockerLinuxContainerRuntime.java:711)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.DockerLinuxContainerRuntime.launchContainer(DockerLinuxContainerRuntime.java:757)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.runtime.TestDockerContainerRuntime.testContainerLaunchWithCustomNetworks(TestDockerContainerRuntime.java:599){code}
Reporter: Shane Kumpf
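
For context, the remapping shells out to the id utility, roughly as in this hypothetical reconstruction (the real logic lives in DockerLinuxContainerRuntime#getUserIdInfo per the stack trace above):

{code:java}
public class UserIdLookupSketch {
  // Hypothetical reconstruction of the failing lookup, not the actual code.
  static String id(String flag, String user) throws Exception {
    Process p = new ProcessBuilder("id", flag, user).start();
    try (java.util.Scanner s = new java.util.Scanner(p.getInputStream())) {
      String out = s.hasNextLine() ? s.nextLine().trim() : "";
      if (p.waitFor() != 0) {
        // For a non-existent user this is "id: 'run_as_user': no such user".
        throw new Exception("'id " + flag + " " + user + "' failed");
      }
      return out;
    }
  }
  // Usage: id("-u", user) for the UID, id("-G", user) for the GIDs.
}
{code}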






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



[jira] [Created] (YARN-7809) The log about the node status is changed from debug or info to warn when the node status is unhealthy

2018-01-24 Thread zhang.zhengxian (JIRA)
zhang.zhengxian created YARN-7809:
-

 Summary: The log about the node status is changed from debug or 
info to warn when the node status is unhealthy
 Key: YARN-7809
 URL: https://issues.apache.org/jira/browse/YARN-7809
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: nodemanager, resourcemanager
Reporter: zhang.zhengxian






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



[jira] [Resolved] (YARN-7808) Error displays while executing minor compaction on cluster

2018-01-24 Thread Vandana Yadav (JIRA)

 [ https://issues.apache.org/jira/browse/YARN-7808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vandana Yadav resolved YARN-7808.
-
Resolution: Fixed

> Error displays while executing minor compaction on cluster
> --
>
> Key: YARN-7808
> URL: https://issues.apache.org/jira/browse/YARN-7808
> Project: Hadoop YARN
>  Issue Type: Bug
>Reporter: Vandana Yadav
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



[jira] [Created] (YARN-7808) Error displays while executing minor compaction on cluster

2018-01-24 Thread Vandana Yadav (JIRA)
Vandana Yadav created YARN-7808:
---

 Summary: Error displays while executing minor compaction on cluster
 Key: YARN-7808
 URL: https://issues.apache.org/jira/browse/YARN-7808
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Vandana Yadav






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2018-01-24 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/114/

No changes




-1 overall


The following subsystems voted -1:
asflicense unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Unreaped Processes :

   hadoop-common:1 
   hadoop-hdfs:42 
   bkjournal:1 
   hadoop-yarn-server-resourcemanager:1 
   hadoop-yarn-client:4 
   hadoop-yarn-applications-distributedshell:1 
   hadoop-mapreduce-client-jobclient:2 
   hadoop-distcp:3 
   hadoop-archives:1 
   hadoop-extras:1 

Failed junit tests :

   hadoop.hdfs.server.namenode.snapshot.TestSnapshottableDirListing 
   hadoop.hdfs.server.namenode.ha.TestPipelinesFailover 
   hadoop.hdfs.server.balancer.TestBalancer 
   hadoop.hdfs.server.namenode.TestQuotaByStorageType 
   hadoop.hdfs.server.namenode.ha.TestHAMetrics 
   hadoop.hdfs.server.namenode.TestSecurityTokenEditLog 
   hadoop.hdfs.server.namenode.TestFileLimit 
   hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits 
   hadoop.hdfs.server.namenode.snapshot.TestCheckpointsWithSnapshots 
   hadoop.hdfs.server.namenode.TestFSImageWithAcl 
   hadoop.hdfs.server.federation.router.TestRouterRpc 
   hadoop.hdfs.server.namenode.TestFavoredNodesEndToEnd 
   hadoop.hdfs.server.namenode.TestListOpenFiles 
   hadoop.hdfs.server.namenode.ha.TestEditLogsDuringFailover 
   hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup 
   hadoop.hdfs.server.namenode.TestEditLogAutoroll 
   hadoop.hdfs.server.namenode.TestStreamFile 
   hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages 
   hadoop.hdfs.server.namenode.snapshot.TestDisallowModifyROSnapshot 
   hadoop.hdfs.server.namenode.TestDecommissioningStatus 
   hadoop.hdfs.server.namenode.TestAuditLogger 
   hadoop.hdfs.server.namenode.TestGenericJournalConf 
   hadoop.hdfs.server.namenode.TestTransferFsImage 
   hadoop.hdfs.server.namenode.ha.TestBootstrapStandbyWithQJM 
   hadoop.hdfs.server.mover.TestMover 
   hadoop.hdfs.server.namenode.TestAclConfigFlag 
   hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA 
   hadoop.hdfs.server.namenode.TestSecondaryWebUi 
   hadoop.hdfs.server.namenode.snapshot.TestNestedSnapshots 
   hadoop.hdfs.server.namenode.snapshot.TestSnapshotBlocksMap 
   hadoop.hdfs.server.namenode.ha.TestXAttrsWithHA 
   hadoop.hdfs.server.namenode.snapshot.TestXAttrWithSnapshot 
   hadoop.hdfs.server.namenode.web.resources.TestWebHdfsDataLocality 
   hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA 
   hadoop.hdfs.server.federation.router.TestNamenodeHeartbeat 
   hadoop.hdfs.server.namenode.TestCacheDirectives 
   hadoop.hdfs.server.namenode.TestProtectedDirectories 
   hadoop.hdfs.server.namenode.TestLargeDirectoryDelete 
   hadoop.hdfs.server.namenode.snapshot.TestAclWithSnapshot 
   hadoop.hdfs.server.namenode.TestSecondaryNameNodeUpgrade 
   hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication 
   hadoop.hdfs.server.namenode.snapshot.TestSnapshot 
   hadoop.hdfs.server.namenode.TestBackupNode 
   hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer 
   hadoop.hdfs.server.namenode.ha.TestHarFileSystemWithHA 
   hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination 
   hadoop.hdfs.server.federation.router.TestRouterMountTable 
   hadoop.hdfs.server.namenode.TestStartup 
   hadoop.hdfs.server.namenode.snapshot.TestFileContextSnapshot 
   hadoop.hdfs.server.namenode.snapshot.TestSnapshotNameWithInvalidCharacters 
   hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS 
   hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA 
   hadoop.hdfs.server.namenode.ha.TestHAStateTransitions 
   hadoop.hdfs.server.namenode.TestSaveNamespace 
   hadoop.hdfs.server.namenode.TestNameNodeRpcServerMethods 
   hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot 
   hadoop.hdfs.server.federation.store.driver.TestStateStoreFileSystem 
   hadoop.hdfs.server.namenode.TestDeadDatanode 
   hadoop.hdfs.server.namenode.ha.TestEditLogTailer 
   hadoop.hdfs.server.balancer.TestBalancerRPCDelay 
   hadoop.hdfs.server.namenode.ha.TestStateTransitionFailure 
   hadoop.hdfs.server.namenode.snapshot.TestSnapshotDeletion 
   hadoop.hdfs.server.namenode.snapshot.TestSnapshotMetrics 
   hadoop.hdfs.server.namenode.snapshot.TestUpdatePipelineWithSnapshots 
   hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots 
   hadoop.hdfs.server.namenode.TestEditLogJournalFailures 
   hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes 

[jira] [Created] (YARN-7807) By default do intra-app anti-affinity for scheduling request inside app placement allocator

2018-01-24 Thread Wangda Tan (JIRA)
Wangda Tan created YARN-7807:


 Summary: By default do intra-app anti-affinity for scheduling 
request inside app placement allocator
 Key: YARN-7807
 URL: https://issues.apache.org/jira/browse/YARN-7807
 Project: Hadoop YARN
  Issue Type: Sub-task
Reporter: Wangda Tan
Assignee: Wangda Tan


See discussion on: 
https://issues.apache.org/jira/browse/YARN-7791?focusedCommentId=16336857=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16336857

We need to make changes to AppPlacementAllocator to treat default target 
allocation tags as intra-app; see the sketch below.
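
For context, an intra-app anti-affinity constraint built with the placement-constraint API (introduced around YARN-6592) looks roughly like this sketch; the tag value is illustrative:

{code:java}
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.NODE;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.PlacementTargets.allocationTag;
import static org.apache.hadoop.yarn.api.resource.PlacementConstraints.targetNotIn;

import org.apache.hadoop.yarn.api.resource.PlacementConstraint;

public class AntiAffinitySketch {
  // Sketch: node-scope anti-affinity against containers carrying the
  // allocation tag "hbase-rs" (tag value is illustrative). Under YARN-7807,
  // a bare target tag like this would default to intra-app matching.
  static PlacementConstraint antiAffinity() {
    return targetNotIn(NODE, allocationTag("hbase-rs")).build();
  }
}
{code}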



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



[jira] [Created] (YARN-7806) DS will hang if ATSv2 back end is unavailable.

2018-01-24 Thread Rohith Sharma K S (JIRA)
Rohith Sharma K S created YARN-7806:
---

 Summary: DS will hang if ATSv2 back end is unavailable. 
 Key: YARN-7806
 URL: https://issues.apache.org/jira/browse/YARN-7806
 Project: Hadoop YARN
  Issue Type: Bug
 Environment: DS publishes container start/stop events using the sync API. 
If the back end is down for some reason, DS will hang until the container 
start/stop events are published. By default, the retry count is 30 and the 
interval is 1 sec.

Publishing a single entity through the sync API can take 1 minute to return. In 
the case of DS, 10 containers means 10 minutes for start events and 10 minutes 
for stop events: an overall wait of 20 minutes.

 

DS should publish container events using the async API; see the sketch below.
Reporter: Rohith Sharma K S
Assignee: Rohith Sharma K S
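
A minimal sketch of the proposed switch, using TimelineV2Client's fire-and-forget variant (method names follow the ATSv2 client API as I understand it; verify against the actual DS timeline publisher):

{code:java}
import org.apache.hadoop.yarn.api.records.timelineservice.TimelineEntity;
import org.apache.hadoop.yarn.client.api.TimelineV2Client;

public class AsyncPublishSketch {
  // Sketch for the proposed fix: publish container events without blocking.
  static void publish(TimelineV2Client client, TimelineEntity entity)
      throws Exception {
    // Sync variant: blocks and retries (30 x 1 sec by default per the report)
    // while the ATSv2 back end is unreachable, hanging the DS AM.
    // client.putEntities(entity);

    // Async variant: hands the write off and returns immediately.
    client.putEntitiesAsync(entity);
  }
}
{code}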






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org