[jira] [Commented] (MAPREDUCE-4168) Support multiple network interfaces
[ https://issues.apache.org/jira/browse/MAPREDUCE-4168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14260746#comment-14260746 ]

Allen Wittenauer commented on MAPREDUCE-4168:
---------------------------------------------

bq. I was under the impression that it was common practice to have a client configuration to submit jobs.

If users are creating special *-site.xml files for clients for what are effectively server-side configurations, we need to find out why and fix it.

> Support multiple network interfaces
> -----------------------------------
>
>          Key: MAPREDUCE-4168
>          URL: https://issues.apache.org/jira/browse/MAPREDUCE-4168
>      Project: Hadoop Map/Reduce
>   Issue Type: New Feature
>     Reporter: Tom White
>
> Umbrella jira to track the MapReduce side of HADOOP-8198.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (MAPREDUCE-6204) TestJobCounters should use new properties instead JobConf.MAPRED_TASK_JAVA_OPTS
[ https://issues.apache.org/jira/browse/MAPREDUCE-6204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14260694#comment-14260694 ]

sam liu commented on MAPREDUCE-6204:
------------------------------------

Gera, thanks for your comments. I will update the patch for MAPREDUCE-6205 according to your suggestion later.

> TestJobCounters should use new properties instead JobConf.MAPRED_TASK_JAVA_OPTS
> -------------------------------------------------------------------------------
>
>             Key: MAPREDUCE-6204
>             URL: https://issues.apache.org/jira/browse/MAPREDUCE-6204
>         Project: Hadoop Map/Reduce
>      Issue Type: Test
>      Components: test
> Affects Versions: 2.6.0
>        Reporter: sam liu
>        Assignee: sam liu
>        Priority: Minor
>     Attachments: MAPREDUCE-6204-1.patch, MAPREDUCE-6204.patch
[jira] [Resolved] (MAPREDUCE-5721) RM occur exception while unregistering
[ https://issues.apache.org/jira/browse/MAPREDUCE-5721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack resolved MAPREDUCE-5721.
------------------------------
    Resolution: Not a Problem

Configuration issue. Resolving as 'not a problem'.

> RM occur exception while unregistering
> --------------------------------------
>
>             Key: MAPREDUCE-5721
>             URL: https://issues.apache.org/jira/browse/MAPREDUCE-5721
>         Project: Hadoop Map/Reduce
>      Issue Type: Bug
>      Components: resourcemanager
> Affects Versions: 2.2.0
>    Environment: rhel 5.8_x86; jrockit 1.6.0_31-R28.2.3-4.1.0; hadoop 2.2.0
>       Reporter: chillon_m
>
> When I run the wordcount example, this occurs:
> [hadoop@namenode0 ~]$ hadoop jar hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar wordcount /wordcount-input /wordcount-output
> 14/01/10 14:42:58 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> 14/01/10 14:43:01 INFO client.RMProxy: Connecting to ResourceManager at namenode0/192.168.0.133:8032
> 14/01/10 14:43:03 INFO input.FileInputFormat: Total input paths to process : 2
> 14/01/10 14:43:03 INFO mapreduce.JobSubmitter: number of splits:2
> 14/01/10 14:43:03 INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
> 14/01/10 14:43:03 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
> 14/01/10 14:43:03 INFO Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
> 14/01/10 14:43:03 INFO Configuration.deprecation: mapreduce.combine.class is deprecated. Instead, use mapreduce.job.combine.class
> 14/01/10 14:43:03 INFO Configuration.deprecation: mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
> 14/01/10 14:43:03 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
> 14/01/10 14:43:03 INFO Configuration.deprecation: mapreduce.reduce.class is deprecated. Instead, use mapreduce.job.reduce.class
> 14/01/10 14:43:03 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
> 14/01/10 14:43:03 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
> 14/01/10 14:43:03 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
> 14/01/10 14:43:03 INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
> 14/01/10 14:43:03 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
> 14/01/10 14:43:03 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1389336130986_0001
> 14/01/10 14:43:05 INFO impl.YarnClientImpl: Submitted application application_1389336130986_0001 to ResourceManager at namenode0/192.168.0.133:8032
> 14/01/10 14:43:05 INFO mapreduce.Job: The url to track the job: http://namenode0:8088/proxy/application_1389336130986_0001/
> 14/01/10 14:43:05 INFO mapreduce.Job: Running job: job_1389336130986_0001
> 14/01/10 14:43:16 INFO mapreduce.Job: Job job_1389336130986_0001 running in uber mode : false
> 14/01/10 14:43:16 INFO mapreduce.Job: map 0% reduce 0%
> 14/01/10 14:44:20 INFO mapreduce.Job: map 50% reduce 0%
> 14/01/10 14:44:34 INFO mapreduce.Job: map 100% reduce 0%
> 14/01/10 14:44:51 INFO mapreduce.Job: map 100% reduce 100%
> 14/01/10 14:44:58 INFO ipc.Client: Retrying connect to server: datanode0.hadoop/192.168.0.134:43052. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1 SECONDS)
> 14/01/10 14:44:59 INFO ipc.Client: Retrying connect to server: datanode0.hadoop/192.168.0.134:43052. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1 SECONDS)
> 14/01/10 14:45:00 INFO ipc.Client: Retrying connect to server: datanode0.hadoop/192.168.0.134:43052. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1 SECONDS)
> 14/01/10 14:45:01 INFO ipc.Client: Retrying connect to server: datanode0.hadoop/192.168.0.134:43052. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1 SECONDS)
> 14/01/10 14:45:02 INFO ipc.Client: Retrying connect to server: datanode0.hadoop/192.168.0.134:43052. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1 SECONDS)
> 14/01/10 14:45:03 INFO ipc.Client: Retrying connect to server: datanode0.hadoop/192.168.0.134:43052. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1 SECONDS)
[jira] [Commented] (MAPREDUCE-5799) add default value of MR_AM_ADMIN_USER_ENV
[ https://issues.apache.org/jira/browse/MAPREDUCE-5799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14260359#comment-14260359 ]

Hadoop QA commented on MAPREDUCE-5799:
--------------------------------------

{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12635312/MAPREDUCE-5799.diff
against trunk revision 241d3b3.

{color:red}-1 patch{color}. The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/5096//console

This message is automatically generated.

> add default value of MR_AM_ADMIN_USER_ENV
> -----------------------------------------
>
>             Key: MAPREDUCE-5799
>             URL: https://issues.apache.org/jira/browse/MAPREDUCE-5799
>         Project: Hadoop Map/Reduce
>      Issue Type: Improvement
> Affects Versions: 2.3.0
>        Reporter: Liyin Liang
>        Assignee: Liyin Liang
>     Attachments: MAPREDUCE-5799.diff
>
> Submit a 1 map + 1 reduce sleep job with the following config:
> {code}
> <property>
>   <name>mapreduce.map.output.compress</name>
>   <value>true</value>
> </property>
> <property>
>   <name>mapreduce.map.output.compress.codec</name>
>   <value>org.apache.hadoop.io.compress.SnappyCodec</value>
> </property>
> <property>
>   <name>mapreduce.job.ubertask.enable</name>
>   <value>true</value>
> </property>
> {code}
> And the LinuxContainerExecutor is enabled on the NodeManager.
> This job will fail with the following error:
> {code}
> 2014-03-18 21:28:20,153 FATAL [uber-SubtaskRunner] org.apache.hadoop.mapred.LocalContainerLauncher: Error running local (uberized) 'child' : java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
>         at org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy(Native Method)
>         at org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:63)
>         at org.apache.hadoop.io.compress.SnappyCodec.getCompressorType(SnappyCodec.java:132)
>         at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:148)
>         at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:163)
>         at org.apache.hadoop.mapred.IFile$Writer.<init>(IFile.java:115)
>         at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1583)
>         at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1462)
>         at org.apache.hadoop.mapred.MapTask$NewOutputCollector.close(MapTask.java:700)
>         at org.apache.hadoop.mapred.MapTask.closeQuietly(MapTask.java:1990)
>         at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:774)
>         at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
>         at org.apache.hadoop.mapred.LocalContainerLauncher$SubtaskRunner.runSubtask(LocalContainerLauncher.java:317)
>         at org.apache.hadoop.mapred.LocalContainerLauncher$SubtaskRunner.run(LocalContainerLauncher.java:232)
>         at java.lang.Thread.run(Thread.java:662)
> {code}
> When creating a ContainerLaunchContext for a task in TaskAttemptImpl.createCommonContainerLaunchContext(), DEFAULT_MAPRED_ADMIN_USER_ENV, which is "LD_LIBRARY_PATH=$HADOOP_COMMON_HOME/lib/native", is added to the environment. Whereas when creating a ContainerLaunchContext for the MR AppMaster in YARNRunner.createApplicationSubmissionContext(), there is no default environment, so the uber-mode job fails to find the native lib.
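The asymmetry described above can be sketched in plain Java. This is a hypothetical simplification, not Hadoop's actual code: the real logic lives in TaskAttemptImpl and YARNRunner, and the method names here are illustrative. The point is that only the task path falls back to a default admin environment.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the two launch-context code paths described above.
public class AdminEnvSketch {
    static final String DEFAULT_MAPRED_ADMIN_USER_ENV =
            "LD_LIBRARY_PATH=$HADOOP_COMMON_HOME/lib/native";

    // Task path: falls back to the default when no admin env is configured.
    static Map<String, String> taskEnv(String configuredAdminEnv) {
        Map<String, String> env = new HashMap<>();
        String adminEnv = (configuredAdminEnv != null)
                ? configuredAdminEnv : DEFAULT_MAPRED_ADMIN_USER_ENV;
        String[] kv = adminEnv.split("=", 2);
        env.put(kv[0], kv[1]);
        return env;
    }

    // AM path: no fallback, so an uber AM never sees LD_LIBRARY_PATH
    // unless the user sets it explicitly.
    static Map<String, String> amEnv(String configuredAdminEnv) {
        Map<String, String> env = new HashMap<>();
        if (configuredAdminEnv != null) {
            String[] kv = configuredAdminEnv.split("=", 2);
            env.put(kv[0], kv[1]);
        }
        return env;
    }
}
```

With no configuration, `taskEnv(null)` contains LD_LIBRARY_PATH while `amEnv(null)` is empty, which is exactly why the uberized child (running inside the AM container) cannot load the Snappy native library.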
[jira] [Commented] (MAPREDUCE-5799) add default value of MR_AM_ADMIN_USER_ENV
[ https://issues.apache.org/jira/browse/MAPREDUCE-5799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14260354#comment-14260354 ]

Rajat Jain commented on MAPREDUCE-5799:
---------------------------------------

+1 on this issue. This issue is actually critical when a user submits an uber MapReduce job with Snappy compression: the job fails.

> add default value of MR_AM_ADMIN_USER_ENV
> -----------------------------------------
>
>             Key: MAPREDUCE-5799
>             URL: https://issues.apache.org/jira/browse/MAPREDUCE-5799
>         Project: Hadoop Map/Reduce
>      Issue Type: Improvement
> Affects Versions: 2.3.0
>        Reporter: Liyin Liang
>        Assignee: Liyin Liang
>        Priority: Minor
>     Attachments: MAPREDUCE-5799.diff
>
> (Full description and stack trace as quoted in the first MAPREDUCE-5799 message above.)
[jira] [Updated] (MAPREDUCE-5799) add default value of MR_AM_ADMIN_USER_ENV
[ https://issues.apache.org/jira/browse/MAPREDUCE-5799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rajat Jain updated MAPREDUCE-5799:
----------------------------------
    Priority: Major  (was: Minor)

> add default value of MR_AM_ADMIN_USER_ENV
> -----------------------------------------
>
>             Key: MAPREDUCE-5799
>             URL: https://issues.apache.org/jira/browse/MAPREDUCE-5799
>         Project: Hadoop Map/Reduce
>      Issue Type: Improvement
> Affects Versions: 2.3.0
>        Reporter: Liyin Liang
>        Assignee: Liyin Liang
>     Attachments: MAPREDUCE-5799.diff
>
> (Full description and stack trace as quoted in the first MAPREDUCE-5799 message above.)
[jira] [Assigned] (MAPREDUCE-4168) Support multiple network interfaces
[ https://issues.apache.org/jira/browse/MAPREDUCE-4168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Karthik Kambatla reassigned MAPREDUCE-4168:
-------------------------------------------
    Assignee: (was: Karthik Kambatla)

Thanks for following up, Allen and Avner. I was under the impression that it was common practice to have a client configuration to submit jobs. Marking this unassigned so someone else can work on it.

[~avnerb] - not sure if it is straightforward to let the AMs talk to the RM on all interfaces. IIUC, the AM will talk to either the primary interface (hostname) or a specified interface.

> Support multiple network interfaces
> -----------------------------------
>
>          Key: MAPREDUCE-4168
>          URL: https://issues.apache.org/jira/browse/MAPREDUCE-4168
>      Project: Hadoop Map/Reduce
>   Issue Type: New Feature
>     Reporter: Tom White
>
> Umbrella jira to track the MapReduce side of HADOOP-8198.
[jira] [Assigned] (MAPREDUCE-4633) history server doesn't set permissions on all subdirs
[ https://issues.apache.org/jira/browse/MAPREDUCE-4633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

oss.wakayama reassigned MAPREDUCE-4633:
---------------------------------------
    Assignee: oss.wakayama  (was: Thomas Graves)

> history server doesn't set permissions on all subdirs
> -----------------------------------------------------
>
>             Key: MAPREDUCE-4633
>             URL: https://issues.apache.org/jira/browse/MAPREDUCE-4633
>         Project: Hadoop Map/Reduce
>      Issue Type: Bug
>      Components: jobhistoryserver
> Affects Versions: 0.23.3, 3.0.0, 2.0.2-alpha
>        Reporter: Thomas Graves
>        Assignee: oss.wakayama
>        Priority: Critical
>        Fix For: 0.23.3, 2.0.2-alpha
>
>     Attachments: MAPREDUCE-4633.patch
>
> The job history server creates a bunch of subdirectories under the "done" directory, like 2012/09/03/00. It only sets the permissions on the last one, i.e. 00, to 770. The 2012/09/03 levels aren't explicitly set, so if the umask is more restrictive they won't have the permissions it expects.
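The bug above can be reproduced with a minimal, POSIX-only Java sketch (not the history server's actual code): creating all levels at once gives the intermediate directories umask-derived modes, and an explicit chmod on only the leaf leaves 2012/09/03 untouched.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

// Illustrative reproduction of the "chmod only the leaf" pattern.
public class DonePermsSketch {
    public static void main(String[] args) throws IOException {
        Path base = Files.createTempDirectory("done");
        Path leaf = base.resolve("2012/09/03/00");

        // Intermediate dirs 2012, 09, 03 get modes derived from the umask.
        Files.createDirectories(leaf);

        // Only the leaf gets an explicit 770.
        Set<PosixFilePermission> perms =
                PosixFilePermissions.fromString("rwxrwx---");
        Files.setPosixFilePermissions(leaf, perms);

        System.out.println("leaf:   " + PosixFilePermissions.toString(
                Files.getPosixFilePermissions(leaf)));
        System.out.println("parent: " + PosixFilePermissions.toString(
                Files.getPosixFilePermissions(leaf.getParent())));
    }
}
```

On a typical system the leaf prints rwxrwx--- while the parent prints whatever the umask produced (e.g. rwxr-xr-x), which is why a restrictive umask can make the intermediate levels unreadable to the history server's group.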
[jira] [Commented] (MAPREDUCE-6205) Update the value of the new version properties of the deprecated property "mapred.child.java.opts"
[ https://issues.apache.org/jira/browse/MAPREDUCE-6205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14260081#comment-14260081 ]

Gera Shegalov commented on MAPREDUCE-6205:
------------------------------------------

[~sam liu], thanks for the patch.
# Please follow the guideline for including a version in patch names under https://wiki.apache.org/hadoop/HowToContribute#Naming_your_patch.
# The deprecation delta should be "mapred.child.java.opts" -> { "mapred.map.child.java.opts", "mapred.reduce.child.java.opts" }. Otherwise you introduce non-determinism due to the existing deprecation deltas "mapred.map|reduce.child.java.opts" -> "mapreduce.map|reduce.java.opts".
# mapred-default.xml should be updated. Currently mapreduce.map|reduce.java.opts is commented out. Instead both should have the substitution value {{$\{mapred.child.java.opts\}}}.

> Update the value of the new version properties of the deprecated property "mapred.child.java.opts"
> --------------------------------------------------------------------------------------------------
>
>             Key: MAPREDUCE-6205
>             URL: https://issues.apache.org/jira/browse/MAPREDUCE-6205
>         Project: Hadoop Map/Reduce
>      Issue Type: Bug
>      Components: mrv2
> Affects Versions: trunk
>        Reporter: sam liu
>        Assignee: sam liu
>        Priority: Minor
>     Attachments: MAPREDUCE-6205.patch, MAPREDUCE-6205.patch
>
> In the current Hadoop code, the old property "mapred.child.java.opts" is deprecated and its new versions are MRJobConfig.MAP_JAVA_OPTS and MRJobConfig.REDUCE_JAVA_OPTS. However, when a user sets a value for the deprecated property "mapred.child.java.opts", Hadoop won't automatically update the new-version properties MRJobConfig.MAP_JAVA_OPTS ("mapreduce.map.java.opts") and MRJobConfig.REDUCE_JAVA_OPTS ("mapreduce.reduce.java.opts"). As Hadoop updates the new-version properties for many other deprecated properties, we should also support this for "mapred.child.java.opts"; otherwise it might cause incompatibility issues.
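The one-to-many deprecation delta suggested in the comment above can be sketched with a plain-Java stand-in for Hadoop's Configuration deprecation machinery (the resolution logic here is hypothetical and simplified): the old generic key fans out to both per-task keys, which the existing deltas then map onward, and an explicitly set new key is never overwritten.

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Collections;
import java.util.Deque;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Simplified stand-in for Hadoop's deprecation deltas.
public class DeprecationDeltaSketch {
    static final Map<String, List<String>> DELTAS = new LinkedHashMap<>();
    static {
        // New delta proposed in the comment: one old key -> two newer keys.
        DELTAS.put("mapred.child.java.opts",
                Arrays.asList("mapred.map.child.java.opts",
                              "mapred.reduce.child.java.opts"));
        // Existing deltas then map those onward to the current names.
        DELTAS.put("mapred.map.child.java.opts",
                Arrays.asList("mapreduce.map.java.opts"));
        DELTAS.put("mapred.reduce.child.java.opts",
                Arrays.asList("mapreduce.reduce.java.opts"));
    }

    // Propagate deprecated settings transitively through the deltas;
    // keys the user set explicitly always win.
    static Map<String, String> resolve(Map<String, String> conf) {
        Map<String, String> out = new LinkedHashMap<>(conf);
        Deque<String> work = new ArrayDeque<>(conf.keySet());
        while (!work.isEmpty()) {
            String key = work.poll();
            String val = out.get(key);
            for (String newKey :
                    DELTAS.getOrDefault(key, Collections.emptyList())) {
                if (!out.containsKey(newKey)) {
                    out.put(newKey, val);
                    work.add(newKey);
                }
            }
        }
        return out;
    }
}
```

Because the fan-out is declared once and resolved transitively, setting only mapred.child.java.opts reaches both mapreduce.map.java.opts and mapreduce.reduce.java.opts deterministically, regardless of the order the deltas are consulted.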
[jira] [Commented] (MAPREDUCE-6204) TestJobCounters should use new properties instead JobConf.MAPRED_TASK_JAVA_OPTS
[ https://issues.apache.org/jira/browse/MAPREDUCE-6204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14260061#comment-14260061 ]

Gera Shegalov commented on MAPREDUCE-6204:
------------------------------------------

bq. I added following code to get the property value during map/reduce task execution in TestJobCounters#MemoryLoaderMapper#configure() ...

These properties are actually meant to be used by the MRAppMaster. So although they are accessible in task attempt JVMs, it's not really correct to verify them there when they play no role.

> TestJobCounters should use new properties instead JobConf.MAPRED_TASK_JAVA_OPTS
> -------------------------------------------------------------------------------
>
>             Key: MAPREDUCE-6204
>             URL: https://issues.apache.org/jira/browse/MAPREDUCE-6204
>         Project: Hadoop Map/Reduce
>      Issue Type: Test
>      Components: test
> Affects Versions: 2.6.0
>        Reporter: sam liu
>        Assignee: sam liu
>        Priority: Minor
>     Attachments: MAPREDUCE-6204-1.patch, MAPREDUCE-6204.patch
[jira] [Commented] (MAPREDUCE-6204) TestJobCounters should use new properties instead JobConf.MAPRED_TASK_JAVA_OPTS
[ https://issues.apache.org/jira/browse/MAPREDUCE-6204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14260057#comment-14260057 ]

Gera Shegalov commented on MAPREDUCE-6204:
------------------------------------------

You are correct that mapred.child.java.opts should be added to the deprecation deltas. Let us continue working on this in MAPREDUCE-6205. However, this is not the root cause of the failure you are seeing on your ppc64. You are just working around the issue that somewhere you have this config, which definitely does not belong in a clean build:

{code}
-Xmx1000m -Xms1000m -Xmn100m -Xtune:virtualized -Xshareclasses:name=mrscc_%g,groupAccess,cacheDir=/var/hadoop/tmp,nonFatal -Xscmx20m -Xdump:java:file=/var/hadoop/tmp/javacore.%Y%m%d.%H%M%S.%pid.%seq.txt -Xdump:heap:file=/var/hadoop/tmp/heapdump.%Y%m%d.%H%M%S.%pid.%seq.phd
{code}

> TestJobCounters should use new properties instead JobConf.MAPRED_TASK_JAVA_OPTS
> -------------------------------------------------------------------------------
>
>             Key: MAPREDUCE-6204
>             URL: https://issues.apache.org/jira/browse/MAPREDUCE-6204
>         Project: Hadoop Map/Reduce
>      Issue Type: Test
>      Components: test
> Affects Versions: 2.6.0
>        Reporter: sam liu
>        Assignee: sam liu
>        Priority: Minor
>     Attachments: MAPREDUCE-6204-1.patch, MAPREDUCE-6204.patch
[jira] [Commented] (MAPREDUCE-4168) Support multiple network interfaces
[ https://issues.apache.org/jira/browse/MAPREDUCE-4168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14259934#comment-14259934 ]

Avner BenHanoch commented on MAPREDUCE-4168:
--------------------------------------------

Thanks for re-opening. I also suffered from Hadoop's bias towards the main interface (the one that resolves the hostname), regardless of the interface I configured.

> Support multiple network interfaces
> -----------------------------------
>
>          Key: MAPREDUCE-4168
>          URL: https://issues.apache.org/jira/browse/MAPREDUCE-4168
>      Project: Hadoop Map/Reduce
>   Issue Type: New Feature
>     Reporter: Tom White
>     Assignee: Karthik Kambatla
>
> Umbrella jira to track the MapReduce side of HADOOP-8198.