Re: Beginner Hadoop Code

2015-04-28 Thread Rudra Tripathy
Please check max temperature format
 On Apr 28, 2015 12:14 PM, "Anand Murali"  wrote:

> Dear All:
>
> I slightly modified the Hadoop 2.2 example code from the textbook (Hadoop:
> The Definitive Guide) to suit 2.6. When I run it, though, I get a runtime
> exception.
>
> anand_vihar@Latitude-E5540:~/hadoop-2.6.0/input$ hadoop jar max.jar
> MaxTemperature /user/anand_vihar/max/input output
> 15/04/28 11:46:57 INFO Configuration.deprecation: session.id is
> deprecated. Instead, use dfs.metrics.session-id
> 15/04/28 11:46:57 INFO jvm.JvmMetrics: Initializing JVM Metrics with
> processName=JobTracker, sessionId=
> 15/04/28 11:46:57 WARN mapreduce.JobSubmitter: Hadoop command-line option
> parsing not performed. Implement the Tool interface and execute your
> application with ToolRunner to remedy this.
> 15/04/28 11:46:57 INFO input.FileInputFormat: Total input paths to process
> : 1
> 15/04/28 11:46:57 INFO mapreduce.JobSubmitter: number of splits:1
> 15/04/28 11:46:57 INFO mapreduce.JobSubmitter: Submitting tokens for job:
> job_local117348325_0001
> 15/04/28 11:46:58 INFO mapreduce.Job: The url to track the job:
> http://localhost:8080/
> 15/04/28 11:46:58 INFO mapreduce.Job: Running job: job_local117348325_0001
> 15/04/28 11:46:58 INFO mapred.LocalJobRunner: OutputCommitter set in
> config null
> 15/04/28 11:46:58 INFO mapred.LocalJobRunner: OutputCommitter is
> org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
> 15/04/28 11:46:58 INFO mapred.LocalJobRunner: Waiting for map tasks
> 15/04/28 11:46:58 INFO mapred.LocalJobRunner: Starting task:
> attempt_local117348325_0001_m_00_0
> 15/04/28 11:46:58 INFO mapred.Task:  Using ResourceCalculatorProcessTree :
> [ ]
> 15/04/28 11:46:58 INFO mapred.LocalJobRunner: map task executor complete.
> 15/04/28 11:46:58 WARN mapred.LocalJobRunner: job_local117348325_0001
> java.lang.Exception: java.lang.RuntimeException:
> java.lang.NoSuchMethodException:
> MaxTemperature$MaxTemperatureMapper.<init>()
> at
> org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
> at
> org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
> Caused by: java.lang.RuntimeException: java.lang.NoSuchMethodException:
> MaxTemperature$MaxTemperatureMapper.<init>()
> at
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:131)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:742)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at
> org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
> at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NoSuchMethodException:
> MaxTemperature$MaxTemperatureMapper.<init>()
> at java.lang.Class.getConstructor0(Class.java:2892)
> at java.lang.Class.getDeclaredConstructor(Class.java:2058)
> at
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:125)
> ... 8 more
> 15/04/28 11:46:59 INFO mapreduce.Job: Job job_local117348325_0001 running
> in uber mode : false
> 15/04/28 11:46:59 INFO mapreduce.Job:  map 0% reduce 0%
> 15/04/28 11:46:59 INFO mapreduce.Job: Job job_local117348325_0001 failed
> with state FAILED due to: NA
> 15/04/28 11:46:59 INFO mapreduce.Job: Counters: 0
>
> The job, mapper, and reducer classes are packaged in the jar before
> deployment. Can somebody here explain?
>
> Thanks
>
> Anand Murali
> 11/7, 'Anand Vihar', Kandasamy St, Mylapore
> Chennai - 600 004, India
> Ph: (044)- 28474593/ 43526162 (voicemail)
>


Re: Beginner Hadoop Code

2015-04-28 Thread Anand Murali
Rudra:
Request you to be more specific. Thanks
Anand Murali
11/7, 'Anand Vihar', Kandasamy St, Mylapore
Chennai - 600 004, India
Ph: (044)- 28474593/ 43526162 (voicemail)



Re: Beginner Hadoop Code

2015-04-28 Thread Chris Mawata
Looks like the framework is having difficulty instantiating your Mapper.
The problem is probably that you made it an instance inner class. Make it
a static nested class:
public static class MaxTemperatureMapper ...
and do the same for your reducer.
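Chris's point can be reproduced with plain reflection, independent of Hadoop: Hadoop's ReflectionUtils.newInstance looks up a no-arg constructor, and an instance inner class has none (its constructor takes the enclosing instance as a hidden parameter). A minimal sketch:

```java
// Demonstrates why Hadoop's ReflectionUtils.newInstance fails on an
// instance inner class but succeeds on a static nested class.
public class ReflectionDemo {
    class Inner { }          // instance inner class: ctor is Inner(ReflectionDemo)
    static class Nested { }  // static nested class: has a true no-arg ctor

    static boolean hasNoArgCtor(Class<?> c) {
        try {
            c.getDeclaredConstructor();  // the same lookup ReflectionUtils performs
            return true;
        } catch (NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("Inner  has no-arg ctor: " + hasNoArgCtor(Inner.class));
        System.out.println("Nested has no-arg ctor: " + hasNoArgCtor(Nested.class));
    }
}
```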



Re: Beginner Hadoop Code

2015-04-28 Thread Anand Murali
Many thanks Chris. It works.
Anand Murali
11/7, 'Anand Vihar', Kandasamy St, Mylapore
Chennai - 600 004, India
Ph: (044)- 28474593/ 43526162 (voicemail)



ApplicationMaster's logs are not available

2015-04-28 Thread Zoltán Zvara
Hi,

I'm writing my own AM, but YARN collects and shows no logs for its
container whatsoever. My AM runs forever and I would like to debug it, but
without logs I just can't do it. My AM uses org.slf4j.Logger for logging.
Other applications, such as MR and Spark, log fine. Is there any option that I
need to set explicitly to enable logging? I've tried to fetch logs with and
without log-aggregation-enable, but without success.

I would appreciate any tips regarding this.

Thanks!

Zoltán Zvara
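One possible cause, offered as an assumption rather than a confirmed diagnosis: if the AM's container launch command does not redirect stdout/stderr into the container log directory, YARN has nothing to collect or aggregate. When building the ContainerLaunchContext, the launch command is typically of the form (the class name here is hypothetical):

```
$JAVA_HOME/bin/java -Xmx256m com.example.MyAppMaster \
    1><LOG_DIR>/AppMaster.stdout 2><LOG_DIR>/AppMaster.stderr
```

`<LOG_DIR>` is the literal placeholder (ApplicationConstants.LOG_DIR_EXPANSION_VAR) that the NodeManager expands at launch. Note also that slf4j is only a facade: a logging backend and its configuration must be on the container classpath for anything to be written at all.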


unsubscribe

2015-04-28 Thread Ram
unsubscribe


Re: unsubscribe

2015-04-28 Thread Ted Yu
Send email to user-unsubscr...@hadoop.apache.org

On Tue, Apr 28, 2015 at 7:30 AM, Ram  wrote:

> unsubscribe
>


Re: Beginner Hadoop Code

2015-04-28 Thread Chris Mawata
Great! Good luck with your exploration.
Chris


Re: in YARN/MR2, can I still submit multiple jobs to one MR application master?

2015-04-28 Thread Yang
Vinod:

thanks.

The queue is the correct way to go, but a small technical issue is that in
our (as in most) environments, most users share one "default" queue, and,
more importantly, even if a user has a dedicated queue, the queue limit is
defined by the ops team, not the user himself. That is, he cannot
"self-police" his queue usage: if he has a queue with a maximum of 100 tasks
running, he cannot limit that down to 50.


Yang

On Mon, Apr 27, 2015 at 5:07 PM, Vinod Kumar Vavilapalli <
vino...@hortonworks.com> wrote:

> The MapReduce ApplicationMaster supports only one job. You can say that
> (YARN ResourceManager + a bunch of MR ApplicationMasters (one per job) =
> JobTracker).
>
> Tez does have a notion of multiple DAGs per YARN app.
>
> For your specific use-case, you can force that user to a queue and limit
> how much he/she can access.
>
> Thanks
> +Vinod
>
> On Apr 27, 2015, at 3:30 PM, Yang  wrote:
>
> > conceptually, the MR application master is similar to the old job
> tracker.
> >
> > if so, can I submit multiple jobs to the same MR application master?  it
> looks like an odd use case, the context is that we have users generating
> lots of MR jobs, and he currently has a little crude scheduler that
> periodically launches jobs to the RM by just "hadoop jar ..."
> >
> > instead I was thinking to "carve out" a MR2 allocation in RM first, then
> periodically submit to the "job tracker"/application master, so that all
> the jobs are localized to this allocation.
> >
> >
> > I was also thinking about using Tez instead of MR application master.
> Tez replaces MR2 application master, not on top of it, right?
> >
> > Thanks
> > Yang
>
>
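For reference, the queue-based cap suggested above can be sketched in capacity-scheduler.xml; the queue name `analytics` and the percentages are hypothetical, and the properties assume the Capacity Scheduler is in use:

```xml
<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>default,analytics</value>
</property>
<property>
  <!-- guaranteed share; sibling capacities under a parent must sum to 100 -->
  <name>yarn.scheduler.capacity.root.default.capacity</name>
  <value>80</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.analytics.capacity</name>
  <value>20</value>
</property>
<property>
  <!-- hard ceiling for the queue, the "max" limit discussed above -->
  <name>yarn.scheduler.capacity.root.analytics.maximum-capacity</name>
  <value>50</value>
</property>
```

Note this caps resource share rather than task counts, and only whoever controls the scheduler config can change it, which is exactly the self-policing limitation raised above.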


Re: default no of reducers

2015-04-28 Thread Nick Dimiduk
Please take this to user@hadoop.apache.org

On Tue, Apr 28, 2015 at 10:31 AM, Shushant Arora 
wrote:

> In a normal MR job, can I configure (cluster-wide) the default number of
> reducers if I don't specify any reducers in my job?
>


Lifetime of jhist files

2015-04-28 Thread Kevin
Hi,

I am running CDH 5.1.3 with YARN. The one issue I am having is that the
jhist files are being deleted too quickly. I set
yarn.nodemanager.delete.debug-delay-sec to seven days (in seconds, of
course), but I am noticing that the job history server is expiring any jhist
file that is seven days old. I thought
yarn.nodemanager.delete.debug-delay-sec was just to clean up local logs on
the NodeManager; I didn't think it also affected the job history files.

Am I right in my assumptions? Is there any way to extend the lifetime of the
job history files?

BTW, my history files are stored in /user/history/done///

Any feedback would be great.

Thanks,
Kevin


Re: Failed to run distcp against ftp server installed on Windows.

2015-04-28 Thread sam liu
For the IIS FTP server on Windows, it seems the distcp tool always fails on
the line 'client.setFileTransferMode(FTP.BLOCK_TRANSFER_MODE)' in
hadoop/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ftp/FTPFileSystem.java#connect()

Opened a jira for this issue: HADOOP-11886
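A minimal probe that isolates the failing call (a sketch only: it requires the commons-net jar on the classpath, and the host, port, and credentials below are placeholders, not taken from the report):

```java
import org.apache.commons.net.ftp.FTP;
import org.apache.commons.net.ftp.FTPClient;

public class FtpModeProbe {
    public static void main(String[] args) throws Exception {
        FTPClient client = new FTPClient();
        client.connect("ftp.example.com", 21);  // placeholder host/port
        client.login("Viewer", "secret");       // placeholder credentials
        // The call FTPFileSystem#connect() makes; IIS may reject BLOCK mode.
        boolean accepted = client.setFileTransferMode(FTP.BLOCK_TRANSFER_MODE);
        System.out.println("BLOCK_TRANSFER_MODE accepted: " + accepted);
        if (!accepted) {
            // STREAM is the FTP default and is widely supported, including by IIS.
            client.setFileTransferMode(FTP.STREAM_TRANSFER_MODE);
        }
        client.logout();
        client.disconnect();
    }
}
```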

2015-04-27 16:36 GMT+08:00 sam liu :

> Hi Experts,
>
> It is really weird that DistCp could successfully get the file from
> FileZilla ftp server on Windows7, but failed from the IIS ftp server on the
> same Windows7 OS (but I can get the file using wget directly: 'wget
> ftp://Viewer:passw...@hostname1.com:21/ftp_file1.txt' ). I tried several
> times, but all failed and encountered different error messages as below.
>
> Any comments?
>
> *[Success on FileZilla ftp server on Windows7]:*
> [h...@hostname2.com ~]$ hadoop distcp
> ftp://ftp:f...@hostname1.com:121/ftp_test.txt /tmp/
> 15/04/26 22:56:20 INFO tools.DistCp: Input Options:
> DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false,
> ignoreFailures=false, maxMaps=20, sslConfigurationFile='null',
> copyStrategy='uniformsize', sourceFileListing=null, sourcePaths=[
> ftp://ftp:f...@hostname1.com:121/ftp_test.txt], targetPath=/tmp,
> targetPathExists=true, preserveRawXattrs=false}
> 15/04/26 22:56:21 INFO impl.TimelineClientImpl: Timeline service address:
> http://hostname2.com:8188/ws/v1/timeline/
> 15/04/26 22:56:21 INFO client.RMProxy: Connecting to ResourceManager at
> hostname2.com/9.32.249.181:8050
> 15/04/26 22:56:43 INFO impl.TimelineClientImpl: Timeline service address:
> http://hostname2.com:8188/ws/v1/timeline/
> 15/04/26 22:56:43 INFO client.RMProxy: Connecting to ResourceManager at
> hostname2.com/9.32.249.181:8050
> 15/04/26 22:56:43 INFO mapreduce.JobSubmitter: number of splits:1
> 15/04/26 22:56:44 INFO mapreduce.JobSubmitter: Submitting tokens for job:
> job_1429858372957_0002
> 15/04/26 22:56:44 INFO impl.YarnClientImpl: Submitted application
> application_1429858372957_0002
> 15/04/26 22:56:44 INFO mapreduce.Job: The url to track the job:
> http://hostname2.com:8088/proxy/application_1429858372957_0002/
> 15/04/26 22:56:44 INFO tools.DistCp: DistCp job-id: job_1429858372957_0002
> 15/04/26 22:56:44 INFO mapreduce.Job: Running job: job_1429858372957_0002
> 15/04/26 22:56:51 INFO mapreduce.Job: Job job_1429858372957_0002 running
> in uber mode : false
> 15/04/26 22:56:51 INFO mapreduce.Job:  map 0% reduce 0%
>
> *[Failure 1 on  IIS ftp server on the same Windows7 OS] :*
> [h...@hostname2.com ~]$ hadoop distcp
> ftp://Viewer:passw...@hostname1.com:21/ftp_file1.txt /tmp/
> 15/04/27 00:02:45 INFO tools.DistCp: Input Options:
> DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false,
> ignoreFailures=false, maxMaps=20, sslConfigurationFile='null',
> copyStrategy='uniformsize', sourceFileListing=null, sourcePaths=[
> ftp://Viewer:passw...@hostname1.com:21/ftp_file1.txt], targetPath=/tmp,
> targetPathExists=true, preserveRawXattrs=false}
> 15/04/27 00:02:47 INFO impl.TimelineClientImpl: Timeline service address:
> http://hostname2.com:8188/ws/v1/timeline/
> 15/04/27 00:02:47 INFO client.RMProxy: Connecting to ResourceManager at
> hostname2.com/9.32.249.181:8050
> 15/04/27 00:03:50 ERROR tools.DistCp: Invalid input:
> org.apache.hadoop.tools.CopyListing$InvalidInputException:
> ftp://Viewer:passw...@hostname1.com:21/ftp_file1.txt doesn't exist
> at
> org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:84)
> at
> org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:84)
> at
> org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:353)
> at org.apache.hadoop.tools.DistCp.execute(DistCp.java:160)
> at org.apache.hadoop.tools.DistCp.run(DistCp.java:121)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.tools.DistCp.main(DistCp.java:401)
>
> *[Failure 2 on  IIS ftp server on the same Windows7 OS] :*
> [biad...@hostname2.com ~]$ hadoop distcp
> ftp://Viewer:passw0rd@9.126.146.71/ftp-win.txt /tmp/
> 15/02/01 23:03:37 INFO tools.DistCp: Input Options:
> DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false,
> ignoreFailures=false, maxMaps=20, sslConfigurationFile='null',
> copyStrategy='uniformsize', sourceFileListing=null, sourcePaths=[
> ftp://Viewer:passw0rd@9.126.146.71/ftp-win.txt], targetPath=/tmp,
> targetPathExists=true}
> 15/02/01 23:03:38 INFO client.RMProxy: Connecting to ResourceManager at
> hostname2.com/9.32.249.181:8032
> 15/02/01 23:05:50 ERROR tools.DistCp: Exception encountered
> org.apache.commons.net.ftp.FTPConnectionClosedException: Connection closed
> without indication.
> at org.apache.commons.net.ftp.FTP.__getReply(FTP.java:313)
> at org.apache.commons.net.ftp.FTP.__getReply(FTP.java:290)
> at org.apache.commons.net.ftp.FTP.sendCommand(FTP.java:479)
> at org.apache.commons.net.ftp.FTP.sendCommand(FTP.java:552)
> 

RE: ApplicationMaster's logs are not available

2015-04-28 Thread Naganarasimha G R (Naga)
Hi Zoltán Zvara,
 which version of yarn are you using ?

+ Naga



RE: Lifetime of jhist files

2015-04-28 Thread Naganarasimha G R (Naga)
Hi Kevin,

Could you check the below configuration for the job history server:

mapreduce.jobhistory.max-age-ms (default 604800000): Job history files older
than this many milliseconds will be deleted when the history cleaner runs.
Defaults to 604800000 ms (1 week).

You can refer to the other job history configs at
http://hadoop.apache.org/docs/r2.7.0/hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml
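Kevin's retention question maps onto the history server's cleaner settings in mapred-site.xml; a sketch with the default values shown (raise max-age-ms to retain jhist files longer):

```xml
<property>
  <!-- delete .jhist files older than this; 604800000 ms = 7 days -->
  <name>mapreduce.jobhistory.max-age-ms</name>
  <value>604800000</value>
</property>
<property>
  <!-- how often the history cleaner runs; 86400000 ms = 1 day -->
  <name>mapreduce.jobhistory.cleaner.interval-ms</name>
  <value>86400000</value>
</property>
```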

+Naga

