RE: Output Directory not getting created

2013-07-03 Thread Devaraj k
Hi Kasi, I think the MapR mailing list is a better place to ask this question. Thanks, Devaraj k From: Kasi Subrahmanyam [mailto:kasisubbu...@gmail.com] Sent: 04 July 2013 08:49 To: common-u...@hadoop.apache.org; mapreduce-user@hadoop.apache.org Subject: Output Directory not getting created Hi

RE: launch aggregatewordcount and sudoku in Yarn

2013-06-24 Thread Devaraj k
as argument to this. Thanks, Devaraj K From: Pedro Sá da Costa [mailto:psdc1...@gmail.com] Sent: 21 June 2013 22:24 To: mapreduce-user Subject: launch aggregatewordcount and sudoku in Yarn How do I run aggregatewordcount and sudoku in Yarn? Do I need any input files, more exactly in Sudoku

RE: How run Aggregator wordcount?

2013-06-24 Thread Devaraj k
It doesn't accept multiple folders as input. You can have multiple files in a directory and give that same directory as input. Thanks, Devaraj K From: Pedro Sá da Costa [mailto:psdc1...@gmail.com] Sent: 22 June 2013 16:25 To: mapreduce-user Subject: How run Aggregator wordcount? Aggregator wordcount

RE: I just want the last 4 jobs in the job history in Yarn?

2013-06-18 Thread Devaraj k
Devaraj K From: Pedro Sá da Costa [mailto:psdc1...@gmail.com] Sent: 18 June 2013 12:35 To: mapreduce-user Subject: I just want the last 4 jobs in the job history in Yarn? Is it possible to say that I just want the last 4 jobs in the job history in Yarn? -- Best regards,

RE: Get the history info in Yarn

2013-06-12 Thread Devaraj K
Hi, You can get all the details for a job using this mapred command: mapred job -status <Job-ID>. For this you need to have the Job History Server running, with the same job history server address configured on the client side. Thanks & Regards, Devaraj K From: Pedro Sá da Costa
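As a quick sketch, the invocation looks like this; note the plain ASCII hyphen in `-status`, and that the job ID below is a made-up placeholder:

```shell
# Query job details from the Job History Server. Substitute a real job ID,
# for example one listed by "mapred job -list all"; the ID here is hypothetical.
mapred job -status job_201306120001_0001
```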

RE: How To Distribute One Map Data To All Reduce Tasks?

2012-07-05 Thread Devaraj k
! But what I really want to know is how I can distribute one map's data to every reduce task, not just one of the reduce tasks. Do you have some ideas? From: Devaraj k [mailto:devara...@huawei.com] Sent: 5 July 2012 12:12 To: mapreduce-user@hadoop.apache.org Subject: RE: How To Distribute One Map Data To All Reduce Tasks

RE: java.lang.NoClassDefFoundError: org/apache/hadoop/mapreduce/v2/app/MRAppMaste

2012-06-05 Thread Devaraj k
Can you check whether all the Hadoop environment variables are set properly in the environment where the app master is getting launched? If you are submitting from Windows, this might be the issue: https://issues.apache.org/jira/browse/MAPREDUCE-4052. Thanks Devaraj From:

RE: java.lang.NoClassDefFoundError: org/apache/hadoop/mapreduce/v2/app/MRAppMaste

2012-06-05 Thread Devaraj k
Is it expected to have these variables in the profile file of the Linux user? I am not using a Windows client. My client is running on Mac and the cluster is running on Linux. Cheers, Subroto Sanyal On Jun 5, 2012, at 10:50 AM, Devaraj k wrote: Can you check all the hadoop environment

RE: java.lang.NoClassDefFoundError: org/apache/hadoop/mapreduce/v2/app/MRAppMaste

2012-06-05 Thread Devaraj k
:07 PM, Devaraj k wrote: Hi Subroto, It will not use yarn-env.sh for launching the application master. The NM uses the environment set by the client to launch the application master. Can you set the environment variables in /etc/profile or update the yarn application classpath

RE: cleanup of data when restarting Tasktracker of Hadoop

2012-05-29 Thread Devaraj k
What is the local directory you are using to store the data? Thanks Devaraj From: hadoop anis [hadoop.a...@gmail.com] Sent: Tuesday, May 29, 2012 12:29 PM To: mapreduce-user@hadoop.apache.org; mapreduce-...@hadoop.apache.org Subject: Re: cleanup of data

RE: cleanup of data when restarting Tasktracker of Hadoop

2012-05-29 Thread Devaraj k
[hadoop.a...@gmail.com] Sent: Tuesday, May 29, 2012 4:00 PM To: mapreduce-user@hadoop.apache.org Subject: Re: cleanup of data when restarting Tasktracker of Hadoop Thanks for Replying, I am using shared directory to store the data On 5/29/12, Devaraj k devara...@huawei.com wrote: What

RE: Getting filename in case of MultipleInputs

2012-05-03 Thread Devaraj k
Hi Subbu, I am not sure which input format you are using. If you are using FileInputFormat, you can get the file name this way in the map function: import org.apache.hadoop.mapred.FileSplit; import org.apache.hadoop.mapreduce.InputSplit; import org.apache.hadoop.mapreduce.Mapper; public
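A minimal sketch of this approach against the new (mapreduce) API; the class and key/value types below are illustrative, and it assumes the job uses FileInputFormat so that the split handed to the mapper is a FileSplit:

```java
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

// Hypothetical mapper that emits each record keyed by the name of the
// input file the current split came from.
public class FileNameMapper extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // With FileInputFormat the InputSplit is a FileSplit, which
        // carries the path of the file backing this split.
        FileSplit split = (FileSplit) context.getInputSplit();
        String fileName = split.getPath().getName();
        context.write(new Text(fileName), value);
    }
}
```

One caveat for this thread's case: with MultipleInputs the split is typically wrapped in an internal TaggedInputSplit, so the direct cast to FileSplit can throw a ClassCastException and the wrapper may need to be unwrapped (e.g. via reflection) first.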

RE: What happens to blacklisted TaskTrackers?

2012-04-26 Thread Devaraj k
Hi Pedro, 1. If the Task Tracker doesn't send a heartbeat for some time (i.e. the expiry interval), the task tracker will be declared a lost tracker, not a blacklisted task tracker. If many tasks are failing on the same Task Tracker for a job, then the TT will be blacklisted for that job, if it

RE: How to get the HDFS I/O information

2012-04-25 Thread Devaraj k
Hi Qu, You can access the HDFS read/write bytes at the task or job level using the counters below. FileSystemCounters: HDFS_BYTES_READ, FILE_BYTES_WRITTEN. These can be accessed using the UI or the API. Thanks Devaraj
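One way to read such a counter from client code, sketched against the classic mapred API of that era; the job ID is a made-up placeholder and the counter names are the ones quoted above:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapred.Counters;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobID;
import org.apache.hadoop.mapred.RunningJob;

public class HdfsIoCounters {
    public static void main(String[] args) throws Exception {
        JobClient client = new JobClient(new JobConf(new Configuration()));
        // "job_201204250000_0001" is a hypothetical ID; use a real one
        // from your cluster (e.g. from the JobTracker UI).
        RunningJob job = client.getJob(JobID.forName("job_201204250000_0001"));
        Counters counters = job.getCounters();
        // Look the counter up by group and name, as shown in the UI.
        long hdfsRead = counters.findCounter(
                "FileSystemCounters", "HDFS_BYTES_READ").getValue();
        System.out.println("HDFS bytes read: " + hdfsRead);
    }
}
```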

RE: Reducer not firing

2012-04-17 Thread Devaraj k
, Devaraj k devara...@huawei.com wrote: Hi Arko, What is the value of 'no_of_reduce_tasks'? If the number of reduce tasks is 0, then the map tasks will write the map output directly into the job output path. Thanks Devaraj From: Arko Provo Mukherjee

RE: Reducer not firing

2012-04-16 Thread Devaraj k
Hi Arko, What is the value of 'no_of_reduce_tasks'? If the number of reduce tasks is 0, then the map tasks will write the map output directly into the job output path. Thanks Devaraj From: Arko Provo Mukherjee [arkoprovomukher...@gmail.com] Sent: Tuesday, April
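For contrast, a map-only job is requested explicitly like this; a minimal sketch against the Hadoop 1.x-era new API (job name is illustrative):

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class MapOnlyJobSetup {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        // On Hadoop 2.x, Job.getInstance(conf, name) is preferred.
        Job job = new Job(conf, "map-only-example");
        // With zero reducers, each map task writes part-m-NNNNN files
        // directly to the job output path; no shuffle or sort runs.
        job.setNumReduceTasks(0);
    }
}
```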

RE: CompressionCodec in MapReduce

2012-04-11 Thread Devaraj k
Hi Grzegorz, You can find below the properties for job input and output compression. The property below is used by the codec factory; the codec will be chosen based on the type (i.e. suffix) of the file. By default, the LineRecordReader used by FileInputFormat uses this. If you want the
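As a sketch of what such a configuration might look like, using the Hadoop 1.x-era property names (later versions renamed them, e.g. mapreduce.output.fileoutputformat.compress); the codec choices below are illustrative:

```xml
<!-- Hypothetical core-site.xml / mapred-site.xml fragment. -->
<property>
  <name>io.compression.codecs</name>
  <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec</value>
  <description>Codecs the CompressionCodecFactory can pick from by
  file suffix when reading compressed input.</description>
</property>
<property>
  <name>mapred.output.compress</name>
  <value>true</value>
</property>
<property>
  <name>mapred.output.compression.codec</name>
  <value>org.apache.hadoop.io.compress.GzipCodec</value>
</property>
```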

RE: Including third party jar files in Map Reduce job

2012-04-04 Thread Devaraj k
Message- From: Devaraj k [mailto:devara...@huawei.com] Sent: Wednesday, April 04, 2012 12:35 PM To: mapreduce-user@hadoop.apache.org Subject: RE: Including third party jar files in Map Reduce job Hi Utkarsh, The usage of the jar

RE: Execution directory for child process within mapper

2011-09-26 Thread Devaraj k
Hi Joris, You cannot configure the work directory directly. You can configure the local directory with the property 'mapred.local.dir'; it will then be used to create the work directory as '${mapred.local.dir}/taskTracker/jobcache/$jobid/$taskid/work'. Based on this, you can relatively
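A sketch of how that property is set in mapred-site.xml; the paths below are made-up examples:

```xml
<!-- mapred-site.xml: the directory paths are hypothetical. -->
<property>
  <name>mapred.local.dir</name>
  <value>/data/1/mapred/local,/data/2/mapred/local</value>
  <description>Comma-separated local directories; the TaskTracker creates
  per-task work directories beneath them, e.g.
  ${mapred.local.dir}/taskTracker/jobcache/$jobid/$taskid/work.</description>
</property>
```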

RE: A question about `mvn eclipse:eclipse`

2011-09-25 Thread Devaraj K
Hi Zhoujie, hadoop-yarn-common is failing to resolve the hadoop-yarn-api jar file. Can you try executing install (mvn install -X) on hadoop-yarn-api and then continue with mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs=true -e? Devaraj K _ From: 周杰 [mailto:zhoujie

RE: Using HADOOP for Processing Videos

2011-09-19 Thread Devaraj K
can go through these links for more info on input format and output format. http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/mapred/InputFormat.html http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/mapred/OutputFormat.html Devaraj K

RE: No Mapper but Reducer

2011-09-07 Thread Devaraj K
Hi Bejoy, It is possible to execute a job with no mappers, with the reducers alone. You can try this by giving an empty directory as input for the job. Devaraj K _ From: Bejoy KS [mailto:bejoy.had...@gmail.com] Sent: Wednesday, September 07, 2011 1:30 PM To: mapreduce
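A related, commonly used alternative (not the exact approach in this reply) is to rely on the new API's default identity Mapper so that all real work happens in the reducer; the class names below are illustrative:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Reducer;

public class ReduceOnlySetup {

    // Hypothetical reducer: forwards every (key, value) pair it receives.
    public static class PassThroughReducer
            extends Reducer<Text, Text, Text, Text> {
        @Override
        protected void reduce(Text key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            for (Text value : values) {
                context.write(key, value);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // No setMapperClass() call: the new API's default Mapper is an
        // identity mapper, so records pass straight through the map phase
        // and all real processing happens in the reducer.
        Job job = new Job(new Configuration(), "reduce-side-only");
        job.setReducerClass(PassThroughReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
    }
}
```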

RE: RE: Can I use the cores of each CPU to be the datanodes instead of CPU?

2011-08-08 Thread Devaraj K
by doing some changes in the script and configuration files. You can have a look at this for what changes need to be made to start multiple data nodes on a single machine. http://www.mail-archive.com/hdfs-user@hadoop.apache.org/msg01353.html Devaraj K _ From: 谭军 [mailto:tanjun_2

RE: hadoop job is run slow in multicluster configuration

2011-07-01 Thread Devaraj K
Can you check the logs on the task tracker machine to see what is happening with the task execution and the status of the task? Devaraj K

RE: Map job hangs indefinitely

2011-06-22 Thread Devaraj K
With this info it is difficult to find out where the problem is coming from. Can you check the job tracker and task tracker logs related to these jobs? Devaraj K _ From: Sudharsan Sampath [mailto:sudha...@gmail.com] Sent: Wednesday, June 22, 2011 11:51 AM To: mapreduce-user

RE: Tasktracker denied communication with jobtracker

2011-06-21 Thread Devaraj K
for the exclude list. mapred.hosts and mapred.hosts.exclude are for Hadoop 0.20.x versions. For the later versions, you need to update mapreduce.jobtracker.hosts.filename and mapreduce.jobtracker.hosts.exclude.filename instead. Devaraj K _ From: Virajith Jalaparti [mailto:virajit...@gmail.com
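On 0.20.x, the corresponding mapred-site.xml fragment might look like this; the file paths are illustrative:

```xml
<!-- Hadoop 0.20.x property names; later versions use
     mapreduce.jobtracker.hosts.filename and
     mapreduce.jobtracker.hosts.exclude.filename. -->
<property>
  <name>mapred.hosts</name>
  <value>/etc/hadoop/conf/allowed-hosts</value>
</property>
<property>
  <name>mapred.hosts.exclude</name>
  <value>/etc/hadoop/conf/excluded-hosts</value>
</property>
```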