My guess is to put these two sets of properties
dfs.ha.namenodes.clusterA=nn1,nn2
dfs.namenode.rpc-address.clusterA.nn1=
dfs.namenode.http-address.clusterA.nn1=
dfs.namenode.rpc-address.clusterA.nn2=
dfs.namenode.http-address.clusterA.nn2=
into the client configuration, and then access it like hdfs://clusterA/tmp ...
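For completeness, a sketch of the remaining client-side properties that usually accompany these (the rpc/http addresses above would be filled in with the actual host:port values); fs.defaultFS goes in core-site.xml, the rest in hdfs-site.xml:
dfs.nameservices=clusterA
dfs.client.failover.proxy.provider.clusterA=org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider
fs.defaultFS=hdfs://clusterA
With that in place, hdfs://clusterA/tmp should resolve to whichever NameNode is currently active.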
Thanks, Zhijie!
I had a few more questions:
1. I played around with the Timeline Server UI today, which showed the
generic application history details, but I couldn't find any page for
application-specific data. Is the expectation that every application
needs to build its own UI using the exposed
Hi everyone,
I subscribed to the Hadoop mailing list this morning. How do I get started
with Hadoop on my Windows 7 PC?
Thanks!
Hi,
I'm new to Hadoop. Can I get some useful links about Hadoop so I can get
started with it step by step?
Thank you very much!
NoRouteToHost: please check your network settings.
Regards,
Stanley Shi
On Fri, Apr 18, 2014 at 3:42 PM, td...@126.com wrote:
Hi,
No errors in hdfsConnect().
But if I call hdfsCreateDirectory() after hdfsConnect(), I got errors as
follows:
Hi, Stanley Shi
What I want to say is that hdfsConnect should return NULL or an error, but it doesn't.
That is not the same as the declaration in hdfs.h.
Thanks.
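For what it's worth, here is a minimal sketch of the error-handling pattern hdfs.h describes, where hdfsConnect returns NULL and sets errno on failure; the NameNode host and port below are placeholders:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include "hdfs.h"

int main(void) {
    /* Placeholder NameNode host/port; replace with your cluster's values. */
    hdfsFS fs = hdfsConnect("namenode.example.com", 8020);
    if (fs == NULL) {
        /* Per hdfs.h, hdfsConnect returns NULL on error and sets errno. */
        fprintf(stderr, "hdfsConnect failed: %s\n", strerror(errno));
        return 1;
    }
    /* hdfsCreateDirectory returns 0 on success and -1 on error. */
    if (hdfsCreateDirectory(fs, "/tmp/test-dir") != 0) {
        fprintf(stderr, "hdfsCreateDirectory failed: %s\n", strerror(errno));
    }
    hdfsDisconnect(fs);
    return 0;
}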
From: user-return-15164-tdhkx=126@hadoop.apache.org
[mailto:user-return-15164-tdhkx=126@hadoop.apache.org] on behalf of Stanley Shi
Sent:
Without looking at the code it's hard to say. Perhaps looking at some working
code will put you in the right direction.
For example, here is the DistributedShell from Hadoop (only a few classes)
Assuming you are talking about basic stuff...
Michael Noll has some good Hadoop (pre-Yarn) tutorials
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
Then definitely go through the book Hadoop: The Definitive Guide by Tom
White.
Hi, can anyone help me with this?
From: John Lilley [mailto:john.lil...@redpoint.net]
Sent: Sunday, April 20, 2014 3:40 PM
To: user@hadoop.apache.org
Subject: HDFS and YARN security and interface impacts
We have an application that interfaces directly to HDFS and YARN (no
MapReduce). It does
My version is 2.1.0 and the cluster uses DefaultContainerExecutor. Is it
possible that DefaultContainerExecutor changes the permissions of an existing
NodeManager log-dir to 755?
2014-04-25 0:54 GMT+08:00 Vinod Kumar Vavilapalli vino...@apache.org:
Which version of Hadoop are you using? This part
Assume I have a machine on the same network as a Hadoop 2 cluster but
separate from it.
My understanding is that by setting certain elements of the config file, or
local XML files, to point to the cluster, I can launch a job without having
to log into the cluster, move my jar to HDFS, and start the
What version of Hadoop are you using? (YARN or no YARN)
To answer your question: yes, it's possible and simple. All you need to do is
have the Hadoop JARs on the classpath, with the relevant configuration files
on the same classpath pointing to the Hadoop cluster. Most often people
simply copy
Thank you for your answer.
1) I am using YARN.
2) So presumably dropping core-site.xml and yarn-site.xml into user.dir works;
do I need mapred-site.xml as well?
Yes, if you are running MR
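If the copied files do not end up on the classpath for some reason, they can also be loaded explicitly; a minimal sketch, where the local paths are assumptions:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class ShowClusterConf {
    public static void main(String[] args) {
        // Load the config files copied from the cluster; paths are assumptions.
        Configuration conf = new Configuration();
        conf.addResource(new Path("/home/me/conf/core-site.xml"));   // fs.defaultFS
        conf.addResource(new Path("/home/me/conf/yarn-site.xml"));   // ResourceManager address
        conf.addResource(new Path("/home/me/conf/mapred-site.xml")); // mapreduce.framework.name=yarn
        // Quick sanity check that the client now points at the cluster.
        System.out.println("fs.defaultFS = " + conf.get("fs.defaultFS"));
    }
}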
So if I create a Hadoop jar file with referenced libraries in the lib
directory, do I need to move it to HDFS or can it sit on my local machine?
If I move it to HDFS, where does it live? Which is to say, how do I specify
the path?
On Fri, Apr 25, 2014 at 9:52 AM, Oleg Zhurakousky
I am using MR and know the job.setJar command. I can add all dependencies
to the jar in the lib directory, but I was wondering if Hadoop would copy a
jar from my local machine to the cluster, and also, if I ran multiple jobs with
the same jar, whether the jar would be copied N times (I typically chain
Yes, it will be copied, since it goes to each job's namespace.
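To make that concrete, a minimal sketch of a remote submission; the jar path and the input/output paths are hypothetical, and the job itself is just an identity map-only job so it stays short:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class RemoteSubmit {
    public static void main(String[] args) throws Exception {
        // Picks up core-site.xml, yarn-site.xml and mapred-site.xml from the
        // classpath, so the client talks to the remote cluster.
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "remote-submit-example");

        // Point at the local jar (with its dependencies under lib/). The client
        // copies it into the job's staging directory on HDFS at submit time,
        // once per submitted job.
        job.setJar("/local/path/my-app-with-libs.jar"); // hypothetical path

        // Identity map-only job over hypothetical HDFS paths, just to show submission.
        FileInputFormat.addInputPath(job, new Path("/tmp/remote-submit/input"));
        FileOutputFormat.setOutputPath(job, new Path("/tmp/remote-submit/output"));
        job.setNumReduceTasks(0);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}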
1 and 4:
We have thought that, in addition to serving application-specific data, the
timeline server should accept a web UI plugin from the application, install
it, and render the data on the web page according to the application's
design, but we still need to figure out the plan. Before that, the