Re: Upgrading minimum version of Maven to 3.1 from 3.0
As I mentioned in YARN-6421, I'm +1 for upgrading to 3.1+ because the latest version of Maven 3.0.x is quite old (about 4 years). We need to update dev-support/docker/Dockerfile to enable Maven 3.1+ for the precommit Jenkins job.

Regards,
Akira

On 2017/04/03 18:49, Sunil G wrote: Hi Folks, Recently we were upgrading the build framework for the YARN UI. In order to compile yarn-ui on various architectures, we were using frontend-maven-plugin version 0.0.22. However, the build is failing on *ppc64le*. If we could use a later version of frontend-maven-plugin (such as 1.1), we could resolve this error, but that requires Maven 3.1 at minimum. YARN-6421 is tracking this issue, and we would like to propose upgrading to Maven 3.1. Kindly share your thoughts. Thanks + Sunil

- To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
[jira] [Created] (YARN-6434) When setting environment variables, can't use a comma in a list of values in key=value pairs.
Jaeboo Jeong created YARN-6434: -- Summary: When setting environment variables, can't use a comma in a list of values in key=value pairs. Key: YARN-6434 URL: https://issues.apache.org/jira/browse/YARN-6434 Project: Hadoop YARN Issue Type: Improvement Reporter: Jaeboo Jeong We can set environment variables using yarn.app.mapreduce.am.env, mapreduce.map.env, and mapreduce.reduce.env. There is no problem if we use key=value pairs like X=Y or X=$Y. However, if we want to set a key to a comma-separated list of values (e.g. X=Y,Z), we can't. This is related to YARN-4595. The attached patch is based on YARN-3768. With it, we can set environment variables like below. {code} mapreduce.map.env="YARN_CONTAINER_RUNTIME_TYPE=docker,YARN_CONTAINER_RUNTIME_DOCKER_IMAGE=hadoop-docker,YARN_CONTAINER_RUNTIME_DOCKER_LOCAL_RESOURCE_MOUNTS=\"/dir1:/targetdir1,/dir2:/targetdir2\"" {code} -- This message was sent by Atlassian JIRA (v6.3.15#6346)
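The quoting scheme in the example above can be sketched as a small splitter: commas separate KEY=VALUE pairs unless they appear inside a double-quoted value. A minimal illustration of that idea (a hypothetical helper, not the actual parser from the patch):

```python
def parse_env_spec(spec):
    """Split a comma-separated list of KEY=VALUE pairs, treating commas
    inside double-quoted values as part of the value (sketch of the
    quoting scheme described in the issue, not the Hadoop code)."""
    pairs = {}
    buf, in_quotes = [], False
    for ch in spec:
        if ch == '"':
            in_quotes = not in_quotes  # quotes toggle comma protection
        elif ch == ',' and not in_quotes:
            name, _, value = ''.join(buf).partition('=')
            pairs[name] = value
            buf = []
        else:
            buf.append(ch)
    if buf:  # flush the last pair
        name, _, value = ''.join(buf).partition('=')
        pairs[name] = value
    return pairs
```

With this scheme, `A=1,B="x,y",C=2` yields three variables, with B keeping its embedded comma.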
[jira] [Created] (YARN-6433) Only accessible cgroup mount directories should be selected for a controller
Miklos Szegedi created YARN-6433: Summary: Only accessible cgroup mount directories should be selected for a controller Key: YARN-6433 URL: https://issues.apache.org/jira/browse/YARN-6433 Project: Hadoop YARN Issue Type: Bug Components: nodemanager Affects Versions: 3.0.0-alpha3 Reporter: Miklos Szegedi Assignee: Miklos Szegedi I have an Ubuntu 16 box that returns the following error with pre-mounted cgroups on the latest trunk: {code} 2017-04-03 19:42:18,511 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.CGroupsHandlerImpl: Cgroups not accessible /run/lxcfs/controllers/cpu,cpuacct {code} The version is: {code} $ uname -a Linux mybox 4.4.0-24-generic #43-Ubuntu SMP Wed Jun 8 19:27:37 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux {code} The following cpu cgroup filesystems are mounted: {code} $ grep cpuacct /etc/mtab cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpu,cpuacct,nsroot=/ 0 0 cpu,cpuacct /run/lxcfs/controllers/cpu,cpuacct cgroup rw,relatime,cpu,cpuacct,nsroot=/ 0 0 {code} /sys/fs/cgroup is accessible to my yarn user, so it should be selected instead of /run/lxcfs/controllers.
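The fix the issue asks for amounts to filtering mtab entries by accessibility before picking a mount point for a controller. A rough sketch of that selection logic (illustrative only, not the CGroupsHandlerImpl code; the accessibility check is injectable so the example stays testable):

```python
import os

def select_cgroup_mount(mtab_lines, controller,
                        accessible=lambda p: os.access(p, os.R_OK | os.X_OK)):
    """Return the first cgroup mount point for `controller` that the
    current user can actually enter, or None if none qualifies."""
    for line in mtab_lines:
        fields = line.split()
        # mtab format: device mount-point fstype options dump pass
        if len(fields) < 4 or fields[2] != 'cgroup':
            continue
        mount_point, options = fields[1], fields[3].split(',')
        if controller in options and accessible(mount_point):
            return mount_point
    return None
```

With the two mounts from the report and /run/lxcfs unreadable to the yarn user, this would pick /sys/fs/cgroup/cpu,cpuacct, as the issue requests.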
[jira] [Created] (YARN-6432) FS preemption should reserve a node before considering containers on it for preemption
Miklos Szegedi created YARN-6432: Summary: FS preemption should reserve a node before considering containers on it for preemption Key: YARN-6432 URL: https://issues.apache.org/jira/browse/YARN-6432 Project: Hadoop YARN Issue Type: Bug Reporter: Miklos Szegedi Assignee: Miklos Szegedi
Re: Automated documentation build for Apache Hadoop
Nice work Akira! Appreciate the help with trunk development. On Mon, Apr 3, 2017 at 1:56 AM, Akira Ajisaka wrote: > Hi folks, > > I've created a repository to build and push the Apache Hadoop documentation (trunk) > via Travis CI. > https://github.com/aajisaka/hadoop-document > > The documentation is updated daily by a Travis CI cron job. > https://aajisaka.github.io/hadoop-document/hadoop-project/ > > Hope it helps! > > Regards, > Akira
Re: [DISCUSS] Changing the default class path for clients
Thanks for digging that up. I agree with your analysis of our public documentation, though we still need a transition path. Officially, our classpath is not covered by compatibility, though we know that in reality, classpath changes are quite impactful to users. While we were having a related discussion on YARN container classpath isolation, the plan was to still provide the existing set of JARs by default, with applications having to explicitly opt in to a clean classpath. This feels similar. How do you feel about providing e.g. `hadoop userclasspath` and `hadoop daemonclasspath`, and having `hadoop classpath` continue to default to `daemonclasspath` for now? We could then deprecate+remove `hadoop classpath` in a future release. On Mon, Apr 3, 2017 at 11:08 AM, Allen Wittenauer wrote: > > 1.0.4: > > "Prints the class path needed to get the Hadoop jar and the > required libraries.” > > 2.8.0 and 3.0.0: > > "Prints the class path needed to get the Hadoop jar and the > required libraries. If called without arguments, then prints the classpath > set up by the command scripts, which is likely to contain wildcards in the > classpath entries.” > > I would take that to mean “what gives me all the public APIs?” > Which, by definition, should all be in hadoop-client-runtime (with the > possible exception of the DistributedFileSystem Quota APIs, since for some > reason those are marked public.) > > Let me ask it a different way: > > Why should ‘yarn jar’, ‘mapred jar’, ‘hadoop distcp’, ‘hadoop fs’, > etc, etc, etc, have anything but hadoop-client-runtime as the provided jar? > Yes, some things might break, but given this is 3.0, some changes should be > expected anyway. Given the definition above "needed to get the Hadoop jar > and the required libraries” switching this over seems correct. > > > > On Apr 3, 2017, at 10:37 AM, Esteban Gutierrez > wrote: > > > > > > I agreed with Andrew too.
Users have relied for years on `hadoop classpath` in their scripts to launch jobs or other tools; it's perhaps not the best idea to change the behavior without providing a proper deprecation path. > > thanks! > > esteban. > > -- > > Cloudera, Inc. > > On Mon, Apr 3, 2017 at 10:26 AM, Andrew Wang > wrote: > > What's the current contract for `hadoop classpath`? Would it be safer to > > introduce `hadoop userclasspath` or similar for this behavior? > > > > I'm betting that changing `hadoop classpath` will lead to some breakages, > > so I'd prefer to make this new behavior opt-in. > > > > Best, > > Andrew > > > > On Mon, Apr 3, 2017 at 9:04 AM, Allen Wittenauer < > a...@effectivemachines.com> > > wrote: > > > > This morning I had a bit of a shower thought: > > > > With the new shaded hadoop client in 3.0, is there any reason the > > > default classpath should remain the full blown jar list? e.g., shouldn’t > > > ‘hadoop classpath’ just return configuration, user supplied bits (e.g., > > > HADOOP_USER_CLASSPATH, etc), HADOOP_OPTIONAL_TOOLS, and > > > hadoop-client-runtime? We’d obviously have to add some plumbing for daemons > > > and the capability for the user to get the full list, but that should be > > > trivial. > > > > Thoughts?
Apache Hadoop 2.8.1 Release Plan
Hi all,

We just released Apache Hadoop 2.8.0 recently [1], but it is not ready for production yet due to some identified issues. Now we should work towards the 2.8.1 release, which aims at production deployment. The focus obviously is to fix blocker/critical issues [2] and bug fixes, with *no* features or improvements. I plan to cut an RC in a month, targeting a release in mid-May, to give enough time for outstanding blocker/critical issues. I will start moving out any tickets that are not blockers and/or won't fit the timeline; there are 2 blockers and 9 critical tickets outstanding as of now. For the progress of the release effort, please refer to our release wiki [3]. Please share your thoughts if you have any.

Thanks, Junping

[1] 2.8.0 release announcement: http://www.mail-archive.com/general@hadoop.apache.org/msg07443.html [2] 2.8.1 release Blockers/Criticals: https://s.apache.org/KGxC [3] 2.8 Release wiki: https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+2.8+Release
Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/365/ [Apr 3, 2017 4:06:54 AM] (aajisaka) MAPREDUCE-6824. TaskAttemptImpl#createCommonContainerLaunchContext is -1 overall The following subsystems voted -1: asflicense unit The following subsystems voted -1 but were configured to be filtered/ignored: cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace The following subsystems are considered long running: (runtime bigger than 1h 0m 0s) unit Specific tests: Failed junit tests : hadoop.security.TestShellBasedUnixGroupsMapping hadoop.security.TestRaceWhenRelogin hadoop.fs.sftp.TestSFTPFileSystem hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000 hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure hadoop.hdfs.server.datanode.checker.TestThrottledAsyncCheckerTimeout hadoop.hdfs.server.namenode.ha.TestHAAppend hadoop.hdfs.TestReadStripedFileWithMissingBlocks hadoop.hdfs.server.datanode.checker.TestThrottledAsyncChecker hadoop.hdfs.server.datanode.TestDataNodeUUID hadoop.yarn.server.nodemanager.containermanager.TestContainerManager hadoop.yarn.server.resourcemanager.TestResourceTrackerService hadoop.yarn.server.resourcemanager.security.TestDelegationTokenRenewer hadoop.yarn.server.resourcemanager.TestRMAdminService hadoop.yarn.server.TestContainerManagerSecurity hadoop.yarn.server.TestMiniYarnClusterNodeUtilization hadoop.yarn.client.api.impl.TestAMRMClient hadoop.mapred.TestMRTimelineEventHandling hadoop.mapreduce.TestMRJobClient hadoop.tools.TestDistCpSystem Timed out junit tests : org.apache.hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStorePerf cc: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/365/artifact/out/diff-compile-cc-root.txt [4.0K] javac: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/365/artifact/out/diff-compile-javac-root.txt [184K] checkstyle: 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/365/artifact/out/diff-checkstyle-root.txt [17M] pylint: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/365/artifact/out/diff-patch-pylint.txt [20K] shellcheck: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/365/artifact/out/diff-patch-shellcheck.txt [24K] shelldocs: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/365/artifact/out/diff-patch-shelldocs.txt [12K] whitespace: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/365/artifact/out/whitespace-eol.txt [12M] https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/365/artifact/out/whitespace-tabs.txt [1.2M] javadoc: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/365/artifact/out/diff-javadoc-javadoc-root.txt [2.2M] unit: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/365/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt [148K] https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/365/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [444K] https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/365/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt [36K] https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/365/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [60K] https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/365/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt [324K] https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/365/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt [12K] https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/365/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt [88K] 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/365/artifact/out/patch-unit-hadoop-tools_hadoop-distcp.txt [16K] asflicense: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/365/artifact/out/patch-asflicense-problems.txt [4.0K] Powered by Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org
Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le
For more details, see https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/277/ [Apr 3, 2017 4:06:54 AM] (aajisaka) MAPREDUCE-6824. TaskAttemptImpl#createCommonContainerLaunchContext is -1 overall The following subsystems voted -1: compile unit The following subsystems voted -1 but were configured to be filtered/ignored: cc javac The following subsystems are considered long running: (runtime bigger than 1h 0m 0s) unit Specific tests: Failed junit tests : hadoop.hdfs.TestEncryptedTransfer hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer hadoop.hdfs.server.mover.TestMover hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting hadoop.hdfs.web.TestWebHdfsTimeouts hadoop.yarn.server.timeline.TestRollingLevelDB hadoop.yarn.server.timeline.TestTimelineDataManager hadoop.yarn.server.timeline.TestLeveldbTimelineStore hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore hadoop.yarn.server.resourcemanager.TestRMRestart hadoop.yarn.server.TestMiniYarnClusterNodeUtilization hadoop.yarn.server.TestContainerManagerSecurity hadoop.yarn.client.api.impl.TestAMRMProxy hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore hadoop.yarn.applications.distributedshell.TestDistributedShell hadoop.mapred.TestShuffleHandler hadoop.mapreduce.v2.app.TestRuntimeEstimators hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService hadoop.mapreduce.TestMRJobClient Timed out junit tests : org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache compile: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/277/artifact/out/patch-compile-root.txt [136K] 
cc: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/277/artifact/out/patch-compile-root.txt [136K] javac: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/277/artifact/out/patch-compile-root.txt [136K] unit: https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/277/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt [232K] https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/277/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt [16K] https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/277/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt [52K] https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/277/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt [72K] https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/277/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt [324K] https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/277/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt [16K] https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/277/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timeline-pluginstorage.txt [28K] https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/277/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt [12K] https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/277/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-ui.txt [8.0K] https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/277/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-shuffle.txt [8.0K] 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/277/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt [20K] https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/277/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-hs.txt [16K] https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/277/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt [88K]
[jira] [Created] (YARN-6431) Make DELETE/STOP/CONVERT queues work in the reservation system
Xuan Gong created YARN-6431: --- Summary: Make DELETE/STOP/CONVERT queues work in the reservation system Key: YARN-6431 URL: https://issues.apache.org/jira/browse/YARN-6431 Project: Hadoop YARN Issue Type: Sub-task Components: yarn Reporter: Xuan Gong Assignee: Xuan Gong Previously, we made some enhancements to DELETE/STOP/CONVERT queues. We need to make sure that those enhancements work for the reservation system as well.
[jira] [Created] (YARN-6430) Better unit test coverage required for SLS
Wangda Tan created YARN-6430: Summary: Better unit test coverage required for SLS Key: YARN-6430 URL: https://issues.apache.org/jira/browse/YARN-6430 Project: Hadoop YARN Issue Type: Sub-task Components: scheduler-load-simulator Reporter: Wangda Tan Assignee: Wangda Tan SLS currently has very limited unit test coverage. We need to add more tests to make sure new changes do not cause regressions.
Re: [DISCUSS] Changing the default class path for clients
1.0.4: "Prints the class path needed to get the Hadoop jar and the required libraries.” 2.8.0 and 3.0.0: "Prints the class path needed to get the Hadoop jar and the required libraries. If called without arguments, then prints the classpath set up by the command scripts, which is likely to contain wildcards in the classpath entries.” I would take that to mean “what gives me all the public APIs?” Which, by definition, should all be in hadoop-client-runtime (with the possible exception of the DistributedFileSystem Quota APIs, since for some reason those are marked public.) Let me ask it a different way: Why should ‘yarn jar’, ‘mapred jar’, ‘hadoop distcp’, ‘hadoop fs’, etc, etc, etc, have anything but hadoop-client-runtime as the provided jar? Yes, some things might break, but given this is 3.0, some changes should be expected anyway. Given the definition above "needed to get the Hadoop jar and the required libraries” switching this over seems correct. > On Apr 3, 2017, at 10:37 AM, Esteban Gutierrez wrote: > > > I agreed with Andrew too. Users have relied for years on `hadoop classpath` > in their scripts to launch jobs or other tools, perhaps not the best idea to > change the behavior without providing a proper deprecation path. > > thanks! > esteban. > > -- > Cloudera, Inc. > > > On Mon, Apr 3, 2017 at 10:26 AM, Andrew Wang wrote: > What's the current contract for `hadoop classpath`? Would it be safer to > introduce `hadoop userclasspath` or similar for this behavior? > > I'm betting that changing `hadoop classpath` will lead to some breakages, > so I'd prefer to make this new behavior opt-in. > > Best, > Andrew > > On Mon, Apr 3, 2017 at 9:04 AM, Allen Wittenauer > > wrote: > > > > > This morning I had a bit of a shower thought: > > > > With the new shaded hadoop client in 3.0, is there any reason the > > default classpath should remain the full blown jar list?
e.g., shouldn’t > > ‘hadoop classpath’ just return configuration, user supplied bits (e.g., > > HADOOP_USER_CLASSPATH, etc), HADOOP_OPTIONAL_TOOLS, and > > hadoop-client-runtime? We’d obviously have to add some plumbing for daemons > > and the capability for the user to get the full list, but that should be > > trivial. > > > > Thoughts?
[jira] [Resolved] (YARN-4900) SLS MRAMSimulator should include scheduledMappers/Reducers when re-request failed tasks
[ https://issues.apache.org/jira/browse/YARN-4900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wangda Tan resolved YARN-4900. -- Resolution: Duplicate This one is resolved by YARN-4779; we don't need to do anything here. > SLS MRAMSimulator should include scheduledMappers/Reducers when re-request > failed tasks > --- > > Key: YARN-4900 > URL: https://issues.apache.org/jira/browse/YARN-4900 > Project: Hadoop YARN > Issue Type: Sub-task > Components: scheduler-load-simulator >Reporter: Wangda Tan >Assignee: Wangda Tan > Labels: oct16-medium > Attachments: YARN-4900.1.patch >
[jira] [Created] (YARN-6429) Revisit implementation of LocalitySchedulingPlacementSet to avoid invoking methods of AppSchedulingInfo
Wangda Tan created YARN-6429: Summary: Revisit implementation of LocalitySchedulingPlacementSet to avoid invoking methods of AppSchedulingInfo Key: YARN-6429 URL: https://issues.apache.org/jira/browse/YARN-6429 Project: Hadoop YARN Issue Type: Bug Components: scheduler Reporter: Wangda Tan Assignee: Wangda Tan An example is LocalitySchedulingPlacementSet#decrementOutstanding: it calls appSchedulingInfo directly, which could potentially cause trouble since it modifies the parent from the child. Is it possible to move this logic to AppSchedulingInfo#allocate? We need to check other methods as well.
Re: [DISCUSS] Changing the default class path for clients
I agreed with Andrew too. Users have relied for years on `hadoop classpath` in their scripts to launch jobs or other tools; it's perhaps not the best idea to change the behavior without providing a proper deprecation path. thanks! esteban. -- Cloudera, Inc. On Mon, Apr 3, 2017 at 10:26 AM, Andrew Wang wrote: > What's the current contract for `hadoop classpath`? Would it be safer to > introduce `hadoop userclasspath` or similar for this behavior? > > I'm betting that changing `hadoop classpath` will lead to some breakages, > so I'd prefer to make this new behavior opt-in. > > Best, > Andrew > > On Mon, Apr 3, 2017 at 9:04 AM, Allen Wittenauer > > wrote: > > > > > This morning I had a bit of a shower thought: > > > > With the new shaded hadoop client in 3.0, is there any reason the > > default classpath should remain the full blown jar list? e.g., shouldn’t > > ‘hadoop classpath’ just return configuration, user supplied bits (e.g., > > HADOOP_USER_CLASSPATH, etc), HADOOP_OPTIONAL_TOOLS, and > > hadoop-client-runtime? We’d obviously have to add some plumbing for > daemons > > and the capability for the user to get the full list, but that should be > > trivial. > > > > Thoughts?
Re: [DISCUSS] Changing the default class path for clients
What's the current contract for `hadoop classpath`? Would it be safer to introduce `hadoop userclasspath` or similar for this behavior? I'm betting that changing `hadoop classpath` will lead to some breakages, so I'd prefer to make this new behavior opt-in. Best, Andrew On Mon, Apr 3, 2017 at 9:04 AM, Allen Wittenauer wrote: > > This morning I had a bit of a shower thought: > > With the new shaded hadoop client in 3.0, is there any reason the > default classpath should remain the full blown jar list? e.g., shouldn’t > ‘hadoop classpath’ just return configuration, user supplied bits (e.g., > HADOOP_USER_CLASSPATH, etc), HADOOP_OPTIONAL_TOOLS, and > hadoop-client-runtime? We’d obviously have to add some plumbing for daemons > and the capability for the user to get the full list, but that should be > trivial. > > Thoughts?
[DISCUSS] Changing the default class path for clients
This morning I had a bit of a shower thought: With the new shaded hadoop client in 3.0, is there any reason the default classpath should remain the full blown jar list? e.g., shouldn’t ‘hadoop classpath’ just return configuration, user supplied bits (e.g., HADOOP_USER_CLASSPATH, etc), HADOOP_OPTIONAL_TOOLS, and hadoop-client-runtime? We’d obviously have to add some plumbing for daemons and the capability for the user to get the full list, but that should be trivial. Thoughts?
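The proposal above boils down to assembling the classpath from just four pieces instead of the full jar list. A toy sketch of that assembly (names and paths are illustrative; the real logic lives in the Hadoop shell scripts):

```python
def build_client_classpath(conf_dir, user_entries, optional_tools, client_runtime_jar):
    """Join the minimal set of classpath components -- configuration,
    user-supplied entries, optional tools, and the single shaded
    client-runtime jar -- with the POSIX path separator, skipping
    any empty pieces."""
    entries = [conf_dir, *user_entries, *optional_tools, client_runtime_jar]
    return ':'.join(e for e in entries if e)
```

The point of the sketch is how short the default output becomes: one configuration directory, whatever the user added, and a single jar, rather than dozens of wildcard entries.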
Upgrading minimum version of Maven to 3.1 from 3.0
Hi Folks,

Recently we were upgrading the build framework for the YARN UI. In order to compile yarn-ui on various architectures, we were using frontend-maven-plugin version 0.0.22. However, the build is failing on *ppc64le*. If we could use a later version of frontend-maven-plugin (such as 1.1), we could resolve this error, but that requires Maven 3.1 at minimum. YARN-6421 is tracking this issue, and we would like to propose upgrading to Maven 3.1.

Kindly share your thoughts.

Thanks
+ Sunil
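The 3.1 requirement surfaces as a minimum-version check: dotted versions are compared numerically, component by component, so every 3.0.x release fails a "3.1 at minimum" check while 3.1+ passes. A small sketch of that comparison (illustrative only; the real check is performed by Maven's own prerequisite/enforcer machinery, not by code like this):

```python
def meets_minimum(version, minimum='3.1'):
    """Compare dotted version strings numerically, so '3.0.5' < '3.1' < '3.3.9'.
    Assumes purely numeric components (no '-SNAPSHOT' qualifiers)."""
    parse = lambda v: tuple(int(part) for part in v.split('.'))
    return parse(version) >= parse(minimum)
```

For example, the latest 3.0.x release fails the check while any 3.1.x or later release passes it.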
Automated documentation build for Apache Hadoop
Hi folks,

I've created a repository to build and push the Apache Hadoop documentation (trunk) via Travis CI: https://github.com/aajisaka/hadoop-document

The documentation is updated daily by a Travis CI cron job: https://aajisaka.github.io/hadoop-document/hadoop-project/

Hope it helps!

Regards,
Akira
[jira] [Created] (YARN-6428) Queue AM limit is not honoured in CapacityScheduler
Bibin A Chundatt created YARN-6428: -- Summary: Queue AM limit is not honoured in CapacityScheduler Key: YARN-6428 URL: https://issues.apache.org/jira/browse/YARN-6428 Project: Hadoop YARN Issue Type: Bug Reporter: Bibin A Chundatt Assignee: Bibin A Chundatt

Steps to reproduce:
1. Set up a cluster with 40 GB and 40 vcores (4 NodeManagers with 10 GB each).
2. Configure the default queue with 100% capacity and a max AM limit of 10%.
3. Set the minimum scheduler memory and vcores to 512 and 1.

*Expected:* AM limit of 4096 MB and 4 vcores
*Actual:* AM limit of 4096+512 MB and 4+1 vcores
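The arithmetic in the report can be written out directly: a 10% AM limit on a 40 GB / 40 vcore queue should be 4096 MB and 4 vcores, while the observed limit is exactly one minimum allocation higher. A sketch of both calculations (numbers taken from the report; this is an illustration of the discrepancy, not scheduler code):

```python
def expected_am_limit(cap_mb, cap_vcores, max_am_pct):
    """AM resource limit as a plain percentage of queue capacity."""
    return int(cap_mb * max_am_pct), int(cap_vcores * max_am_pct)

def observed_am_limit(cap_mb, cap_vcores, max_am_pct, min_mb, min_vcores):
    """What the report observes: one extra minimum allocation
    (512 MB / 1 vcore here) is included in the limit."""
    mb, vc = expected_am_limit(cap_mb, cap_vcores, max_am_pct)
    return mb + min_mb, vc + min_vcores
```

With a 40960 MB / 40 vcore queue, 10% AM limit, and 512 MB / 1 vcore minimum allocation, the expected limit is (4096, 4) but the observed one is (4608, 5), matching the numbers in the report.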