Hi Kasi,
I think the MapR mailing list is the better place to ask this question.
Thanks
Devaraj k
From: Kasi Subrahmanyam [mailto:kasisubbu...@gmail.com]
Sent: 04 July 2013 08:49
To: common-u...@hadoop.apache.org; mapreduce-user@hadoop.apache.org
Subject: Output Directory not getting created
Hi
as argument to this.
Thanks
Devaraj K
From: Pedro Sá da Costa [mailto:psdc1...@gmail.com]
Sent: 21 June 2013 22:24
To: mapreduce-user
Subject: launch aggregatewordcount and sudoku in Yarn
How do I run aggregatewordcount and sudoku in Yarn? Do I need any input files,
more exactly for Sudoku
It doesn't accept multiple folders as input. You can have multiple files in a
directory and give that directory as input.
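For illustration, a minimal sketch of pointing a job at a directory (the class name and path below are examples, not from the thread):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class DirInputExample {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "dir-input");
        // Giving a directory as the input path makes every file in it
        // part of the job input.
        FileInputFormat.addInputPath(job, new Path("/user/pedro/input-dir"));
    }
}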
Thanks
Devaraj K
From: Pedro Sá da Costa [mailto:psdc1...@gmail.com]
Sent: 22 June 2013 16:25
To: mapreduce-user
Subject: How to run Aggregator wordcount?
Aggregator wordcount
Devaraj K
From: Pedro Sá da Costa [mailto:psdc1...@gmail.com]
Sent: 18 June 2013 12:35
To: mapreduce-user
Subject: I just want the last 4 jobs in the job history in Yarn?
Is it possible to specify that I just want the last 4 jobs in the job history in
Yarn?
--
Best regards,
Hi,
You can get all the details for a job using this mapred command:
mapred job -status <Job-ID>
For this you need to have the Job History Server running and the same job
history server address configured on the client side.
Thanks & Regards
Devaraj K
From: Pedro Sá da Costa
!
But what I really want to know is how I can distribute one map's output to every
reduce task, not to just one of the reduce tasks.
Do you have any ideas?
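(The answer in this thread is truncated in the archive; one commonly suggested approach, sketched below as an assumption rather than the thread's actual solution, is to emit each map record once per reducer, keyed by the target partition number, paired with a Partitioner that simply returns that number.)

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class BroadcastMapper extends Mapper<Object, Text, IntWritable, Text> {
    @Override
    protected void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        // Emit one copy of the record per reducer; a partitioner that
        // returns key.get() % numPartitions routes a copy to each one.
        for (int r = 0; r < context.getNumReduceTasks(); r++) {
            context.write(new IntWritable(r), value);
        }
    }
}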
From: Devaraj k [mailto:devara...@huawei.com]
Sent: 5 July 2012 12:12
To: mapreduce-user@hadoop.apache.org
Subject: RE: How To Distribute One Map Data To All Reduce Tasks
Can you check whether all the Hadoop environment variables are set properly in
the environment in which the app master is getting launched?
If you are submitting from Windows, this might be the issue:
https://issues.apache.org/jira/browse/MAPREDUCE-4052.
Thanks
Devaraj
From:
Is it expected to have these variables in the profile file of the Linux user?
I am not using Windows client. My client is running on Mac and the cluster is
running on Linux versions.
Cheers,
Subroto Sanyal
On Jun 5, 2012, at 10:50 AM, Devaraj k wrote:
Can you check all the hadoop environment
:07 PM, Devaraj k wrote:
Hi Subroto,
It will not use yarn-env.sh for launching the application master. The NM uses
the environment set by the client for launching the application master. Can you
set the environment variables in /etc/profile or update the yarn application
classpath
What is the local directory you are using to store the data?
Thanks
Devaraj
From: hadoop anis [hadoop.a...@gmail.com]
Sent: Tuesday, May 29, 2012 12:29 PM
To: mapreduce-user@hadoop.apache.org; mapreduce-...@hadoop.apache.org
Subject: Re: cleanup of data
[hadoop.a...@gmail.com]
Sent: Tuesday, May 29, 2012 4:00 PM
To: mapreduce-user@hadoop.apache.org
Subject: Re: cleanup of data when restarting Tasktracker of Hadoop
Thanks for replying.
I am using a shared directory to store the data.
On 5/29/12, Devaraj k devara...@huawei.com wrote:
What
Hi Subbu,
I am not sure which input format you are using. If you are using
FileInputFormat, you can get the file name this way in the map function:
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

public class FileNameMapper extends Mapper<Object, Text, Text, Text> {
    protected void map(Object key, Text value, Context context) {
        // With the new API, cast to mapreduce.lib.input.FileSplit (not the old mapred one)
        String fileName = ((FileSplit) context.getInputSplit()).getPath().getName();
    }
}
Hi Pedro,
1. If the TaskTracker doesn't send a heartbeat for some time (i.e. the expiry
interval), then the task tracker will be declared a lost tracker, not a
blacklisted task tracker. If many tasks are failing on the same TaskTracker for
a job, then the TT will be blacklisted for the job, if it
Hi Qu,
You can access the HDFS read/write bytes at the task or job level using
the counters below:
FileSystemCounters: HDFS_BYTES_READ, HDFS_BYTES_WRITTEN
(and FILE_BYTES_READ, FILE_BYTES_WRITTEN for the local file system)
These can be accessed through the web UI or the API.
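For example, via the API (a minimal sketch; the group name "FileSystemCounters" is the one used by 1.x-era releases and may differ in later versions):

import org.apache.hadoop.mapreduce.Counters;
import org.apache.hadoop.mapreduce.Job;

public class HdfsBytesReport {
    // Print HDFS bytes read/written for a completed job.
    public static void report(Job job) throws Exception {
        Counters counters = job.getCounters();
        long read = counters.findCounter("FileSystemCounters", "HDFS_BYTES_READ").getValue();
        long written = counters.findCounter("FileSystemCounters", "HDFS_BYTES_WRITTEN").getValue();
        System.out.println("HDFS read: " + read + " bytes, written: " + written + " bytes");
    }
}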
Thanks
Devaraj
Hi Arko,
What is the value of 'no_of_reduce_tasks'?
If the number of reduce tasks is 0, then the map tasks will write their output
directly into the job output path.
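For illustration, a hedged sketch of such a map-only job (class name is an example):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MapOnlyJob {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "map-only");
        job.setJarByClass(MapOnlyJob.class);
        // With zero reduce tasks, each map task writes its output
        // directly to the job output path (as part-m-* files).
        job.setNumReduceTasks(0);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}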
Thanks
Devaraj
From: Arko Provo Mukherjee [arkoprovomukher...@gmail.com]
Sent: Tuesday, April
Hi Grzegorz,
You can find the below properties for job input and output compression:
The below property is used by the codec factory. The codec will be chosen based
on the type (i.e. suffix) of the file. By default the LineRecordReader, which is
used by FileInputFormat, uses this. If you want the
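(The message is truncated here. For output compression specifically, a minimal new-API sketch, assuming gzip as the codec:)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class CompressedOutputJob {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "compressed-output");
        // Compress the job output with gzip; on the input side, files with
        // a known suffix (e.g. .gz) are decoded by the codec factory.
        FileOutputFormat.setCompressOutput(job, true);
        FileOutputFormat.setOutputCompressorClass(job, GzipCodec.class);
    }
}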
-----Original Message-----
From: Devaraj k [mailto:devara...@huawei.com]
Sent: Wednesday, April 04, 2012 12:35 PM
To: mapreduce-user@hadoop.apache.org
Subject: RE: Including third party jar files in Map Reduce job
Hi Utkarsh,
The usage of the jar
Hi Joris,
You cannot configure the work directory directly. You can configure the local
directory with the property 'mapred.local.dir', and it will then be used to
create the work directory, e.g.
'${mapred.local.dir}/taskTracker/jobcache/$jobid/$taskid/work'. Based on this,
you can relatively
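(Message truncated here. For reference, a sketch of how 'mapred.local.dir' might look in mapred-site.xml; the paths are examples only:)

<property>
  <name>mapred.local.dir</name>
  <value>/data/1/mapred/local,/data/2/mapred/local</value>
</property>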
Hi Zhoujie,
hadoop-yarn-common is failing to resolve the hadoop-yarn-api jar file.
Can you try executing install (mvn install -X) on hadoop-yarn-api and then
continue with mvn eclipse:eclipse -DdownloadSources=true
-DdownloadJavadocs=true -e.
Devaraj K
_
From: 周杰 [mailto:zhoujie
You can go through these links for more info on input format and output format:
http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/mapred/InputFormat.html
http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/mapred/OutputFormat.html
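A minimal sketch of wiring formats into a job (the links above document the old mapred API; the classes below are the standard new-API equivalents):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class FormatExample {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "formats");
        // TextInputFormat/TextOutputFormat are the defaults; set explicitly here.
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
    }
}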
Devaraj K
Hi Bejoy,
It is possible to execute a job with no mappers and only reducers.
You can try this by giving an empty directory as input for the job.
Devaraj K
_
From: Bejoy KS [mailto:bejoy.had...@gmail.com]
Sent: Wednesday, September 07, 2011 1:30 PM
To: mapreduce
by doing some changes in the script and configuration
files.
You can have a look at this link for the changes needed to start
multiple datanodes on a single machine:
http://www.mail-archive.com/hdfs-user@hadoop.apache.org/msg01353.html
Devaraj K
_
From: 谭军 [mailto:tanjun_2
Can you check the logs on the task tracker machine to see what is happening with
the task execution and the status of the task?
Devaraj K
With this info it is difficult to find out where the problem is coming from. Can
you check the job tracker and task tracker logs related to these jobs?
Devaraj K
_
From: Sudharsan Sampath [mailto:sudha...@gmail.com]
Sent: Wednesday, June 22, 2011 11:51 AM
To: mapreduce-user
for the exclude list.
mapred.hosts and mapred.hosts.exclude are for Hadoop 0.20.x versions.
For later versions you need to update these instead:
mapreduce.jobtracker.hosts.filename and
mapreduce.jobtracker.hosts.exclude.filename.
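A sketch of the corresponding mapred-site.xml entries (file paths are examples only):

<property>
  <name>mapreduce.jobtracker.hosts.filename</name>
  <value>/etc/hadoop/conf/hosts.include</value>
</property>
<property>
  <name>mapreduce.jobtracker.hosts.exclude.filename</name>
  <value>/etc/hadoop/conf/hosts.exclude</value>
</property>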
Devaraj K
_
From: Virajith Jalaparti [mailto:virajit...@gmail.com