I think this is tricky in the sense that if the application returns the
same exit code, then I am not sure we can differentiate between the two.
But they can be used as defaults.
On Thu, Dec 26, 2013 at 11:24 AM, Jian He wrote:
> Not true, you can look at ContainerExitStatus.java which includes all the
> poss
If it is not present in the CLI, then writing a simple wrapper around the
YARN web services would be an option. I think all of your listed
requirements will be covered by the web services, though.
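For reference, a minimal sketch of such a wrapper, assuming the ResourceManager's `/ws/v1/cluster/metrics` REST endpoint; the `localhost:8088` address and the sample payload values are assumptions, so check them against your own RM:

```python
import json
from urllib.request import urlopen  # only needed for the live call

# Sample payload shaped like the RM's /ws/v1/cluster/metrics response
# (field names per the ResourceManager REST API; the values are made up).
SAMPLE = '{"clusterMetrics": {"activeNodes": 5, "lostNodes": 1, "unhealthyNodes": 0}}'

def node_counts(payload):
    """Extract node counts from a cluster-metrics JSON payload."""
    m = json.loads(payload)["clusterMetrics"]
    return {k: m[k] for k in ("activeNodes", "lostNodes", "unhealthyNodes")}

def fetch_metrics(rm="http://localhost:8088"):
    """Query a live ResourceManager (the address is an assumption)."""
    with urlopen(rm + "/ws/v1/cluster/metrics") as resp:
        return node_counts(resp.read())

if __name__ == "__main__":
    print(node_counts(SAMPLE))  # {'activeNodes': 5, 'lostNodes': 1, 'unhealthyNodes': 0}
```

The same endpoint family (`/ws/v1/cluster/nodes`, `/ws/v1/cluster/apps`) covers the node-listing and application questions as well.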
On Thu, Dec 26, 2013 at 11:28 AM, Jian He wrote:
> 1) checking how many nodes are in my cluster
> 3) what are the n
Well, I couldn't find any property in
http://hadoop.apache.org/docs/r1.2.1/hdfs-default.html that sets the time
interval after which a node is considered dead.
I saw there is a property dfs.namenode.heartbeat.recheck-interval or
heartbeat.recheck.interval, but I couldn't find it there. Is it removed?
Or am I missing something?
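For what it's worth, the timeout is not set by a single property but derived from two; a rough sketch of the usual calculation, assuming the common 2.x defaults of dfs.heartbeat.interval = 3 s and dfs.namenode.heartbeat.recheck-interval = 300000 ms (verify these against your hdfs-site.xml):

```python
# Assumed defaults; override with the values from your own hdfs-site.xml.
HEARTBEAT_INTERVAL_S = 3             # dfs.heartbeat.interval, in seconds
RECHECK_INTERVAL_MS = 5 * 60 * 1000  # dfs.namenode.heartbeat.recheck-interval, in ms

def dead_node_timeout_s(heartbeat_s=HEARTBEAT_INTERVAL_S, recheck_ms=RECHECK_INTERVAL_MS):
    """The NameNode marks a DataNode dead after 2 * recheck + 10 * heartbeat."""
    return 2 * recheck_ms / 1000 + 10 * heartbeat_s

print(dead_node_timeout_s())  # 630.0 seconds, i.e. 10.5 minutes
```

With the defaults above, a node is declared dead after about 10.5 minutes of missed heartbeats.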
Maybe I'm just grouchy tonight... it seems all of these questions can be
answered by RTFM.
http://hadoop.apache.org/docs/current2/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html
What's the balance between encouraging learning by new-to-Hadoop users and
OMG!?
On Fri, Dec 27, 2013 at 8:58 PM,
Hi all,
Can someone tell me these:
1) Which property in the Hadoop conf sets the time limit to consider a
node as dead?
2) After detecting a node as dead, after how much time does Hadoop
replicate its blocks to another node?
3) If the dead node comes alive again, in how much time does Hadoop
identify it?
Hi,
It depends on which property you are changing. To be safe, you need to
change it on all nodes.
Sent from my iPhone
> On Dec 28, 2013, at 7:33, Pushparaj Motamari wrote:
>
> Hi,
>
> I am working on a Hadoop cluster with 5 datanodes, and 1(NameNode+Job
> Tracker) . If I would like to add a property to mapred-si
Hi,
I am working on a Hadoop cluster with 5 DataNodes and 1 (NameNode +
JobTracker). If I would like to add a property to mapred-site.xml, do I
need to do that on all DataNodes, or does doing it on the JobTracker
suffice?
I am using the hadoop-0.20.203.0rc1 version of Hadoop.
Thanks
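For illustration, adding a property to mapred-site.xml looks like the fragment below (mapred.reduce.tasks and its value are just an example). Roughly speaking, properties read at job-submission time take effect from the submitting node's conf, while properties read by the TaskTracker daemons must be present on every node, which is why copying the file everywhere is the safe option:

```xml
<!-- mapred-site.xml: example property (name and value are illustrative) -->
<property>
  <name>mapred.reduce.tasks</name>
  <value>4</value>
</property>
```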
I recently blogged about it - hope it helps
http://letsdobigdata.wordpress.com/2013/12/07/running-hadoop-mapreduce-application-from-eclipse-kepler/
Regards,
Hardik
On Fri, Dec 27, 2013 at 6:53 AM, Sitaraman Vilayannur <
vrsitaramanietfli...@gmail.com> wrote:
> Hi,
> Would much appreciate a poi
Hi,
I suggest reading Chapter 2 of the book "Hadoop: The Definitive Guide".
In that chapter MapReduce is explained with a nice example, and there is
code for the map and reduce functions. Every phase of MapReduce is
explained in detail, and there are examples of how MapReduce can be run.
You can use Maven to compile and package Hadoop and deploy it to a
cluster, then run it with the scripts supplied by Hadoop.
See this tutorial for reference:
http://svn.apache.org/repos/asf/hadoop/common/trunk/BUILDING.txt
2013/12/25 Karim Awara
> Hi,
>
> I managed to build hadoop 2.2 from sou
Will get hold of it, thanks for the pointer yanbo.
Sitaraman
On Fri, Dec 27, 2013 at 8:24 PM, Yanbo Liang wrote:
> Maybe you can reference <>
>
>
> 2013/12/27 Sitaraman Vilayannur
>
>> Hi,
>> Would much appreciate a pointer to a mapreduce tutorial which explains
>> how i can run a simulated
Maybe you can reference <>
2013/12/27 Sitaraman Vilayannur
> Hi,
> Would much appreciate a pointer to a mapreduce tutorial which explains
> how i can run a simulated cluster of mapreduce nodes on a single PC and
> write a Java program with the MapReduce Paradigm.
> Thanks very much.
> Sitara
Did you install Hive on your Hadoop cluster?
If yes, using Hive SQL may be simpler and more efficient.
Otherwise, you can write a MapReduce program with
org.apache.hadoop.mapred.lib.MultipleOutputFormat, so that the output from
the Reducer can be written to more than one file.
2013/12/27 Nitin Pawar
> 1)if
1) If you have a CSV file and do this often without wanting to write a
lot of code, then create a Hive table with the "," delimiter, then select
the columns you want from the table and write them to a file.
2) If you are good at scripting, then look at Pig scripting, and then
write to files.
3) If you want to do it through MapReduce
Hi,
I have a file with 16 fields, such as
id,name,sa,dept,exp,address,company,phone,mobile,project,redk, and so on.
My scenario is to split the first eight attributes into one file and the
other eight attributes into another file using a MapReduce program.
So the first eight attributes and their values go in one file.
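The splitting logic itself is simple; here is a plain-Python sketch of it (in the real job this would live in a Mapper writing to two outputs, e.g. via MultipleOutputFormat; the trailing field names f12–f16 are placeholders of my own):

```python
def split_record(line, n=8):
    """Split a comma-separated record into the first n fields and the rest."""
    fields = line.strip().split(",")
    return ",".join(fields[:n]), ",".join(fields[n:])

# Field names after "redk" are invented placeholders for the 16-field example.
record = "id,name,sa,dept,exp,address,company,phone,mobile,project,redk,f12,f13,f14,f15,f16"
first, second = split_record(record)
print(first)   # id,name,sa,dept,exp,address,company,phone
print(second)  # mobile,project,redk,f12,f13,f14,f15,f16
```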
Hi,
Would much appreciate a pointer to a MapReduce tutorial which explains how
I can run a simulated cluster of MapReduce nodes on a single PC and write
a Java program with the MapReduce paradigm.
Thanks very much.
Sitaraman
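As a toy illustration of the paradigm itself (not Hadoop's API, and all names here are my own choosing), the map → shuffle → reduce flow can be simulated in a single process; a word count:

```python
from collections import defaultdict
from itertools import chain

def map_phase(doc):
    """Map: emit a (word, 1) pair for every word in the document."""
    return [(w, 1) for w in doc.split()]

def shuffle(pairs):
    """Shuffle: group values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return groups

def reduce_phase(groups):
    """Reduce: sum the counts for each word."""
    return {k: sum(vs) for k, vs in groups.items()}

docs = ["hello hadoop", "hello world"]
pairs = list(chain.from_iterable(map_phase(d) for d in docs))
counts = reduce_phase(shuffle(pairs))
print(counts)  # {'hello': 2, 'hadoop': 1, 'world': 1}
```

For actually running Hadoop on one PC, local (standalone) or pseudo-distributed mode serves the same purpose.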
The following site has a resolution, thanks.
http://raseshmori.wordpress.com/2012/09/23/install-hadoop-2-0-1-yarn-nextgen/
Sitaraman
On Fri, Dec 27, 2013 at 2:46 PM, Sitaraman Vilayannur <
vrsitaramanietfli...@gmail.com> wrote:
> Hi when i try to start nodemanager with
>
> ./yarn-daemon.sh start
Hi, when I try to start the NodeManager with
./yarn-daemon.sh start nodemanager
I am getting the error appended below in the logs, and the NodeManager
is not starting. This is in version 2.2.
Any help on how to resolve this error is appreciated. Thanks in
anticipation.
Sitaraman
./yarn-daemon.sh s
You can find the subscribe mail IDs on this page:
http://hadoop.apache.org/mailing_lists.html
On Dec 24, 2013, at 6:55 AM, 李立伟 wrote:
> · Subscribe to List
What specific info are you looking for?
On Monday, December 23, 2013, Manoj Babu wrote:
> Hi All,
>
> Can anybody share their experience on Rewriting Ab-Initio scripts using
> Hadoop MapReduce?
>
>
> Cheers!
> Manoj.
>