Why did HA switching suddenly occur?
My Hadoop cluster's HA active namenode (host1) suddenly switched to the standby
namenode (host2).
I could not find any error in the Hadoop logs (on any server) to identify the root
cause.
The following error appeared frequently in the NameNodes' HDFS logs, and none of the
From the log, it looks like the connections between the NameNodes and the ZK
quorum are not stable, and the ZK session timed out. You can check the logs of
the ZooKeeper servers; you may find some errors about the connection
failures.
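One quick way to check quorum health is ZooKeeper's standard `stat` four-letter command; the hostnames and port below are placeholders for your quorum:

```shell
# Query each ZooKeeper server; "stat" reports its mode (leader/follower),
# latency, and current client connections.
for host in zk1 zk2 zk3; do
  echo "== $host =="
  echo stat | nc "$host" 2181
done
```

If a server is unreachable or shows high latency around the time of the failover, that points at the session timeout.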
On Fri, Aug 1, 2014 at 2:06 PM, cho ju il tjst...@kgrid.co.kr wrote:
I will run through the procedure again tomorrow. It was late in the day
before I had a chance to test the procedure.
If I recall correctly, I had an issue formatting the new standby before
bootstrapping. I think either at that point, or during the ZooKeeper
format command, I was queried to
Look at the log times.
The namenode had already switched.
The ZKFC log is written after the namenode is switched.
Timeline...
1. Something? What happened? The log does not record it.
2. 2014-08-01 04:21:03,608 HA switching
3. 2014-08-01 04:21:03,601 ZooKeeper session timeout
4. 2014-08-01
Hi,
I'd really appreciate it if someone could let me know the currently
preferred specification for a cluster setup.
On average how many nodes
Disk space
Memory
Switch size
A link to a paper or discussion would be much appreciated.
Thanks in advance
Regards,
Chris MacKenzie
telephone: 0131
Hello,
I'm currently using HDP 2.0 so it's Hadoop 2.2.0.
My cluster consists of 4 nodes, each with 16 cores, 16 GB RAM, and 4×3 TB of disk.
Recently we went from 2 users to 8, so we now need a more appropriate
scheduler.
We began with the Capacity Scheduler. There were some issues with the different
queues, particularly
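For context, a minimal capacity-scheduler.xml sketch with two queues; the queue names and capacities here are assumptions for illustration, not from this thread:

```xml
<!-- Hypothetical two-queue layout; capacities are percentages of the cluster -->
<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>prod,dev</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.prod.capacity</name>
  <value>70</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.dev.capacity</name>
  <value>30</value>
</property>
```

Per-queue capacities must sum to 100 under each parent queue, which is a common source of the configuration errors mentioned above.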
Thanks a ton for your help, Harsh. I am a newbie in Hadoop.
If I have set
mapred.tasktracker.map.tasks.maximum = 4
mapred.tasktracker.reduce.tasks.maximum = 4
should I also set the values below:
mapred.map.tasks and mapred.reduce.tasks?
If yes, then what is the ideal value?
On Fri, Aug
The mapred.tasktracker.* settings control the maximum number of map or reduce
tasks a TaskTracker can run. This can vary across machines: if you have
multiple nodes, you can decide these values per machine depending on its
configuration. If you set it to 4, it will basically mean that at
You should first replace the namenode, then when that is completely
finished move on to replacing any journal nodes. That part is easy:
1) bootstrap new JN (rsync from an existing)
2) Start new JN
3) push hdfs-site.xml to both namenodes
4) restart standby namenode
5) verify logs and admin ui show
Also you shouldn't format the new standby. You only format a namenode for a
brand new cluster. Once a cluster is live you should just use the bootstrap
on the new namenodes and never format again. Bootstrap is basically a
special format that just creates the dirs and copies an active fsimage to
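The "bootstrap, never format" advice above can be sketched as the following commands on the new standby host; the namenode ID `nn2` is a placeholder for your hdfs-site.xml configuration:

```shell
# On the NEW standby host: never run `hdfs namenode -format` on a live cluster.
hdfs namenode -bootstrapStandby      # creates the dirs and copies the active's fsimage
hadoop-daemon.sh start namenode      # start it as the standby
hdfs haadmin -getServiceState nn2    # should report "standby"
```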
I realize that this was a foolish error made late in the day. I am no
hadoop expert, and have much to learn. This is why I setup a test
environment.
On Aug 1, 2014 6:47 AM, Bryan Beaudreault bbeaudrea...@hubspot.com
wrote:
Also you shouldn't format the new standby. You only format a namenode
No worries! Glad you had a test environment to play with this in. Also,
above I meant "If bootstrap fails...", not "format", of course :)
On Fri, Aug 1, 2014 at 10:24 AM, Colin Kincaid Williams disc...@uw.edu
wrote:
I realize that this was a foolish error made late in the day. I am no
hadoop
The test environment is a 6-node VirtualBox cluster run on 2 desktops :] 7
nodes with the extra namenode.
On Fri, Aug 1, 2014 at 7:26 AM, Bryan Beaudreault bbeaudrea...@hubspot.com
wrote:
No worries! Glad you had a test environment to play with this in. Also,
above I meant If bootstrap fails...,
Hi All,
Is it possible to do a rolling upgrade from Hadoop 2.2 to 2.4?
Thanks,
Pradeep
The book Hadoop Operations by Eric Sammer helped answer a lot of these
questions for me.
Adaryl Bob Wakefield, MBA
Principal
Mass Street Analytics
913.938.6685
www.linkedin.com/in/bobwakefieldmba
-Original Message-
From: Chris MacKenzie
Sent: Friday, August 01, 2014 4:35 AM
To:
HDFS rolling upgrade is supported only from 2.4 to 2.4+, so it's not possible.
Regards,
Akira
(2014/08/02 1:05), Pradeep Gollakota wrote:
Hi All,
Is it possible to do a rolling upgrade from Hadoop 2.2 to 2.4?
Thanks,
Pradeep
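For reference, once a cluster is already on 2.4+, the HDFS rolling-upgrade flow is roughly the following sketch; see the HDFS rolling upgrade documentation for the downgrade and rollback options:

```shell
hdfs dfsadmin -rollingUpgrade prepare   # create a rollback fsimage first
hdfs dfsadmin -rollingUpgrade query     # repeat until it says proceed
# ...upgrade and restart the NameNodes and DataNodes one at a time...
hdfs dfsadmin -rollingUpgrade finalize  # after all daemons run the new version
```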
I found what was causing trouble (which it looks like others have seen as
well):
Ubuntu (and maybe other distros?) maps the node's hostname to the loopback
address 127.0.1.1 in /etc/hosts. Removing this line resolved some
connection issues my nodes were having.
~Houston King
On Thu, Jul 31,
Hi,
I am wondering how to remotely debug YARN's RM using Eclipse. I tried
adding the debugging options -Xdebug
-Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=1337 to YARN_OPTS,
but it did not work. Any suggestions?
Thanks