Hi Abhishek,
Easy on the caps, mate. Can you paste your NM logs and RM logs on pastebin.com?
On Sat, Jul 28, 2012 at 8:45 AM, abhiTowson cal
wrote:
> Hi all,
>
> I am trying to start the NodeManager but it does not start. I have
> installed CDH4 and YARN.
>
> All DataNodes are running.
>
> Resource manager
I think it's alright to fail the app if it requests the impossible,
rather than log it or wait for an admin to come along and fix it at
runtime. Please do file a JIRA.
The max-allocation value could perhaps also be set dynamically, to the
maximum RAM offered across the NMs that are live,
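For reference, the static knob in question is the scheduler's max allocation;
a minimal yarn-site.xml sketch (the value here is only an illustration):

  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>8192</value> <!-- cap requests at the largest NM's offered RAM -->
  </property>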
Hi Harsh,
Thanks a lot for your response. I am going to try your suggestions and let
you know the outcome.
I am running the cluster on a VMware hypervisor. I have 3 physical machines
with 16 GB of RAM and 4 TB of disk (2 HDs of 2 TB each). On every machine I am
running 4 VMs. Each VM has 3.2 GB of memory
Sean,
Most of the time I've found this to be related to two issues:
1. The NN and JT have been bound to localhost or a 127.0.0.1-resolving
hostname, so the other nodes can never connect to their ports, as the
daemons never listen on the true network interface.
2. Failure in turning off,
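A quick way to check for issue 1 (port 9000 is just an example; use whatever
fs.default.name says):

  netstat -tlnp | grep 9000
  # If this shows 127.0.0.1:9000, rebind by pointing fs.default.name in
  # core-site.xml at the machine's real hostname, e.g.
  # hdfs://namenode-host:9000 (hostname hypothetical).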
Hi,
The 'root' doesn't matter. You may run jobs as any username on an
unsecured cluster; it should be just the same.
The config yarn.nodemanager.resource.memory-mb = 1200 is your issue.
By default, the tasks will execute with a resource demand of 1 GB, and
the AM itself demands, by default, 1.5 GB to run
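In other words, a 1200 MB NM can never place the AM's default 1.5 GB
container. A minimal sketch of the two knobs involved (the first goes in
yarn-site.xml, the second in mapred-site.xml; values illustrative):

  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>2048</value> <!-- must exceed the AM's demand; 1200 is too small -->
  </property>

  <property>
    <name>yarn.app.mapreduce.am.resource.mb</name>
    <value>1024</value> <!-- alternatively, shrink the AM's 1536 MB default ask -->
  </property>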
Hi Bertrand,
I believe he is talking about MapFile's index files, explained here:
http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/io/MapFile.html
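Concretely, each reducer writes its own MapFile directory holding a data file
and an index file, so a listing would look roughly like this (paths
hypothetical, output abbreviated):

  $ hadoop fs -ls out/part-00000
  out/part-00000/data
  out/part-00000/index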
On Fri, Jul 27, 2012 at 11:24 AM, Bertrand Dechoux wrote:
> Your use of 'index' is indeed not clear. Are you talking about Hive or
Hey Mike,
Inline.
On Tue, Jul 24, 2012 at 1:39 AM, Mike S wrote:
> If I set my reducer output to MapFile output format and the job has,
> say, 100 reducers, will the output generate 100 different index
> files (one for each reducer) or one index file for all the reducers
> (basically one in
Hi Harsh,
I have set yarn.nodemanager.resource.memory-mb to 1200 MB. Also, does
it matter if I run the jobs as "root" while the RM service and NM service
are running as the "yarn" user? Note that I have created the /user/root
directory for the root user in HDFS.
Here is the yarn-site.xml:
Can you share your yarn-site.xml contents? Have you tweaked memory
sizes in there?
On Fri, Jul 27, 2012 at 11:53 PM, anil gupta wrote:
> Hi All,
>
> I have a Hadoop 2.0 alpha (CDH4) Hadoop/HBase cluster running on
> CentOS 6.0. The cluster has 4 admin nodes and 8 data nodes. I have the RM
> and H
Hello,
Do any management tools or scripts exist that I can use to dynamically
commission and decommission Hadoop nodes on Amazon EC2?
I've written some scripts to spin up nodes, and I have a clear path to
bring them back down safely while minimizing data loss (thanks to
dfs.hosts.exclude). However, if a
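For context, the safe-removal flow my scripts wrap is roughly this (paths and
hostname hypothetical):

  # 1. List the node in the file referenced by dfs.hosts.exclude:
  echo "dn-ec2-host.example.com" >> /etc/hadoop/conf/dfs.exclude
  # 2. Tell the NameNode to re-read the include/exclude lists:
  hadoop dfsadmin -refreshNodes
  # 3. Wait for the node to show as Decommissioned in the NN web UI,
  #    then shut the instance down.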
I got it. The Hadoop installation had been done by root (I can't claim credit
for that, thankfully), and when I chowned everything over to my account, I
missed a few directories. Filling in those blanks made it start working.
On Jul 27, 2012, at 11:30 , anil gupta wrote:
> Hi Keith,
>
> Does ping
Hi Keith,
Your NameNode is still not up. What do the NN logs say?
Regards
Bejoy KS
Sent from handheld, please excuse typos.
-Original Message-
From: anil gupta
Date: Fri, 27 Jul 2012 11:30:57
To:
Reply-To: common-user@hadoop.apache.org
Subject: Re: Retrying connect to server: localh
Hi Keith,
Does a ping to localhost return a reply? Try telnetting to localhost 9000.
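That is (9000 assumed from your fs.default.name):

  ping -c 3 localhost
  telnet localhost 9000
  # "Connection refused" from telnet means nothing is listening on 9000 yet.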
Thanks,
Anil
On Fri, Jul 27, 2012 at 11:22 AM, Keith Wiley wrote:
> I'm plagued with this error:
> Retrying connect to server: localhost/127.0.0.1:9000.
>
> I'm trying to set up hadoop on a new machine, just a b
I'm plagued with this error:
Retrying connect to server: localhost/127.0.0.1:9000.
I'm trying to set up Hadoop on a new machine, just a basic pseudo-distributed
setup. I've done this quite a few times on other machines, but this time I'm
kinda stuck. I formatted the namenode without obvious errors
Hi Abhay,
As Alok mentioned, that's a perfect choice to override at runtime. Make
sure the properties are not set as final in the configuration file.
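For example, assuming the job's driver goes through Tool/GenericOptionsParser
(jar, class, and property here are hypothetical):

  hadoop jar myjob.jar com.example.MyJob -Dmapred.reduce.tasks=10 in out
  # A property marked <final>true</final> in the config file cannot be
  # overridden this way.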
Regards
Syed
On Jul 26, 2012 12:16 AM, "Alok Kumar [via Lucene]" <
ml-node+s472066n3997300...@n3.nabble.com> wrote:
> Hi Abhay,
>
> On Wed, Ju
Yes, you are right. If it is true by default, we probably want to update
the web services documentation to indicate this. Could you file a
JIRA for improving that documentation?
Thanks,
Bobby
On 7/27/12 3:11 AM, "Prajakta Kalmegh" wrote:
>:) Yes, you are right. The yarn.acl.enable prop
Hi,
Apologies if this is a repeat; I don't know whether my earlier posting went
out, due to the timing of my subscription being accepted.
I'm a new subscriber to this list with some questions about Hadoop JMX metrics.
We are running version 0.20.2.
hadoop-metrics.properties is configured for GangliaContext
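The relevant stanza looks roughly like this (host and port are placeholders):

  dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext
  dfs.period=10
  dfs.servers=gmond-host.example.com:8649
  # similar mapred.*, jvm.*, and rpc.* stanzas follow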
Thank you very much, I got it done. In the 1.0.3 release, Hadoop needs the
HADOOP_SECURE_DN_USER param set before it can
start the DataNode in secure mode.
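That is, something like this in hadoop-env.sh (the username is an example):

  # Needed so jsvc can start the secure DataNode in 1.0.3:
  export HADOOP_SECURE_DN_USER=hdfs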
2012/7/26 T. A. Smooth
> I *think* it had something to do with the jsvc binary. It wasn't compiled
> properly.
>
> But I'm not 100%
Hi Abhinav,
MapFileOutputFormat is currently not available for the new mapreduce
API in Hadoop 1.x. However, a JIRA is in place to accommodate it in
a future release.
https://issues.apache.org/jira/browse/MAPREDUCE-4158
Regards
Bejoy KS
I had written a script to set up a single-node cluster.
It is just a dummy script to get things into a working state on a single node;
just download the files and run it:
https://github.com/nitinpawar/hadoop/
On Fri, Jul 27, 2012 at 5:46 PM, Bejoy Ks wrote:
> Hi Dinesh
>
> Try using $HADOOP_HOME
Hi Dinesh,
Try using $HADOOP_HOME/bin/start-all.sh. It starts all the Hadoop
daemons, including the TT and DN.
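Once it completes, jps should list all five daemons, roughly like this (PIDs
will differ):

  $ jps
  12001 NameNode
  12102 DataNode
  12203 SecondaryNameNode
  12304 JobTracker
  12405 TaskTracker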
Regards
Bejoy KS
On 07/26/2012 09:20 PM, Steve Armstrong wrote:
Do you mean I need to deploy the Mahout jars to the lib directory of
the master node? Or all the data nodes? Or is there a way to simply
tell the Hadoop job launcher to upload the jars itself?
Every node that runs a Task (mapper or reducer) needs a
Hi Dinesh Joshi,
Can you please paste your XML files (core-site, hdfs-site, mapred-site)?
Also, did you find any errors in your log dir?
Thanks and Regards,
S SYED ABDUL KATHER
On Fri, Jul 27, 2012 at 2:54 PM, Dinesh Joshi [via Lucene] <
ml-node+s472066n3997685...@n3.nabble.com> wro
Hi Steve,
But if I try and run it from my dev PC in Eclipse (where all the
same dependencies are still on the classpath), and add the 3 Hadoop
XML files to the classpath, it triggers Hadoop jobs, but they fail
with error
There is a problem in the Eclipse build path. I had faced the same problem
when I was t
Hi all,
I installed Hadoop 1.0.3 and am running it as a single-node cluster. I
noticed that start-daemon.sh only starts the NameNode, Secondary NameNode,
and JobTracker daemons.
The DataNode and TaskTracker daemons are not started. However, when I
start them individually, they start up without any issue
:) Yes, you are right. The yarn.acl.enable property in yarn-default.xml is
set to true. If the property is true by default, then this makes it mandatory
for users either to specify a value for the hadoop.http.staticuser.user
property explicitly or to change the ACLs to false. Am I right to assume
this?
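In other words, one of these two settings would be needed (values
illustrative; the first goes in core-site.xml, the second in yarn-site.xml):

  <property>
    <name>hadoop.http.staticuser.user</name>
    <value>yarn</value> <!-- name the web UI's static user explicitly -->
  </property>

  <property>
    <name>yarn.acl.enable</name>
    <value>false</value> <!-- or disable the ACL check altogether -->
  </property>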