Something caused my NameNode data to become corrupt, and the corrupt data also
overwrote the second NameNode copy, as well as the NFS backup. I want to recover
the NameNode data from a day ago, or even a week ago, but I can't. Do I have to
back up the NameNode data manually, or write a bash script to back it up? Why does Hadoop
Hello Andy,
NN stores all the metadata in a file called fsimage. The
fsimage file contains a snapshot of the HDFS metadata. Along with the fsimage,
the NN also holds edit log files. Whenever there is a change to HDFS, it
gets appended to the edits file. When these log files grow big, they
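The snapshot-plus-log idea described above can be sketched as a toy model. This is illustrative pseudocode of the concept only, not Hadoop's actual implementation; all names here are made up:

```python
# Toy model of the fsimage/edits idea: fsimage is a full snapshot of the
# namespace, every mutation is appended to an edit log, and a checkpoint
# replays the edits into a new snapshot so the log can be truncated.

fsimage = {"/": "dir"}   # snapshot of namespace metadata
edits = []               # append-only edit log

def mkdir(path):
    edits.append(("mkdir", path))   # mutations go to the log first

def checkpoint():
    """Replay the edit log onto the snapshot and truncate the log."""
    global edits
    for op, path in edits:
        if op == "mkdir":
            fsimage[path] = "dir"
    edits = []                      # log starts fresh after the checkpoint

mkdir("/user")
mkdir("/user/andy")
checkpoint()
print(sorted(fsimage))   # ['/', '/user', '/user/andy']
print(len(edits))        # 0
```

The point of the merge is exactly what the checkpoint step shows: without it, the edit log grows without bound and restart time (replaying every edit) grows with it.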
I am sorry Andy, I forgot one important point.
The Secondary NameNode has been deprecated now, so consider using the
Checkpoint Node or Backup Node. Checkpoint Node is the process which is
actually responsible for creating periodic checkpoints. It downloads the
fsimage and edit logs from the active
This is the benefit of sharing space with you. Thank you so
much for
keeping my knowledge base updated. It's high time I did a proper re-scan
of everything.
@Andy: Now I'm truly sorry for passing on the wrong info.
Best Regards,
Tariq
+91-9741563634
https://mtariq.jux.com/
Hello group,
What could be the approximate size of the metadata if I have 1 TB of
data in my HDFS? I am not doing anything additional, just a simple put.
Will it be ((1*1024*1024)/64)*200 bytes?
*Keeping 64 MB as the block size.
Is my understanding right? Please correct me if I'm wrong.
Many
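The arithmetic in the question works out as follows (keeping the poster's assumption of ~200 bytes of NameNode heap per block object; real usage also depends on the number of files and directories):

```python
# Rough NameNode metadata estimate for 1 TB of HDFS data with 64 MB blocks,
# using an assumed ~200 bytes of metadata per block object.

data_tb = 1
block_mb = 64
bytes_per_block = 200   # assumed per-block metadata cost

blocks = (data_tb * 1024 * 1024) // block_mb   # 1 TB in MB, divided by block size
metadata_bytes = blocks * bytes_per_block

print(blocks)           # 16384
print(metadata_bytes)   # 3276800, i.e. roughly 3 MiB
```

So for a single 1 TB dataset in large blocks the block metadata is tiny; NameNode memory pressure usually comes from many small files, since each file and directory object costs heap too.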
Robert J Berger rberger@... writes:
Just want to follow up to first thank QwertyM aka Harsh Chouraria for helping
me out on the IRC channel. Well
beyond the call of duty! It's people like Harsh that make the HBase/Hadoop
community what it is and one of the
joys of working with this
We have a situation here. Some of our servers went away for a while. As
we attached them back to the cluster it appeared that as a result we have
multiple Missing/Corrupt blocks and some Mis-replicated blocks.
Can't figure out how to solve the issue of restoring the system to a
normal working state.
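A first step in this situation is usually `hdfs fsck /` to get the damage summary. As a sketch, the summary lines can be pulled out programmatically; the report text below is made up for illustration, but real fsck output contains the same "Missing blocks" / "Corrupt blocks" summary lines:

```python
# Triage sketch: extract block-health counts from fsck-style report text.
import re

fsck_report = """\
 Total blocks (validated):      1024 (avg. block size 671088 B)
  Missing blocks:               3
  Corrupt blocks:               2
  Mis-replicated blocks:        17
"""

def summary(report):
    """Return the counts for the block-health summary lines."""
    counts = {}
    for key in ("Missing blocks", "Corrupt blocks", "Mis-replicated blocks"):
        m = re.search(re.escape(key) + r":\s*(\d+)", report)
        counts[key] = int(m.group(1)) if m else 0
    return counts

print(summary(fsck_report))
# {'Missing blocks': 3, 'Corrupt blocks': 2, 'Mis-replicated blocks': 17}
```

Mis-replicated blocks generally heal on their own once the NameNode re-replicates; truly missing blocks (all replicas gone) are the ones that need the source data restored or the files removed.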
Thank you so much for the valuable response Stephen. But I have a few
questions to ask here. Could you please elaborate a bit, if possible?
Each of the specified objects is totally different from the others. A file
will be smaller than a directory in size, and a directory might be smaller
than a
While Ted ignores that the world is going to end before X-Mas, he does hit the
crux of the matter head on.
If you don't have a place to put it, the cost of setting it up would kill you,
not to mention that you can get newer hardware which is better suited for less.
Having said that... if you
On Thu, Dec 20, 2012 at 7:38 AM, Michael Segel michael_se...@hotmail.com wrote:
While Ted ignores that the world is going to end before X-Mas, he does hit
the crux of the matter head on.
If you don't have a place to put it, the cost of setting it up would kill
you, not to mention that you can
Hi All,
I'm going to test a Hadoop cluster and I have a question about HA and
Federation.
With federation I Have a NameNode per namespace and with HA I have an
Active NameNode and a standby NameNode.
So, as I have several namespaces, do I need an Active NameNode and a
standby NameNode per
Thanks Harsh,
It makes sense now.
The problem was that I was trying to use the hadoop-yarn artifact and Maven
couldn't find the artifact.
Thank you.
On Dec 19, 2012, at 10:01 PM, Harsh J ha...@cloudera.com wrote:
Hi Anil,
Usage oriented questions should be directed at user@hadoop.apache.org.
Tadas,
One time I remember disconnecting a bunch of DataNodes from my dev cluster
instead of using the required, more elegant exclude procedure.
The next thing I learned was that my FS was corrupted. I did not care about my data
(I could re-import it again), but my NN metadata was messed up, so what worked
for me was to
Hi Harsh,
First thank you very much for your answer,
following your example:
You have:
1 Active NameNode + 1 Passive NameNode (it does the work of the old
Secondary NameNode) for the NS1 namespace (these are 2 different machines)
1 NameNode for NS2
1 NameNode for NS3
but what about the Secondary
Hi,
To put it simply: if you use a NameNode, you need a SecondaryNameNode.
In HA mode, a StandbyNameNode acts as the SecondaryNameNode (so you
don't need to run an extra one).
Either way, you definitely need the checkpoint operation happening and
being monitored for.
On Thu, Dec 20, 2012 at 11:09 PM,
Hi again,
So finally the number of nodes is this:
1 Active NameNode + 1 Passive NameNode (it does the work of the
old Secondary NameNode) for the NS1 namespace (these are 2 different machines)
1 NameNode for NS2 + 1 Secondary NameNode
1 NameNode for NS3 + 1 Secondary NameNode
We can say that we
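The machine count in that layout can be tallied with a small sketch (the function name and breakdown are mine, but the rule follows the thread: an HA namespace needs an active/standby pair, and a non-HA namespace needs a NameNode plus a SecondaryNameNode for checkpointing):

```python
# Count NameNode-role machines for a mixed HA + Federation deployment.

def namenode_count(ha_namespaces, non_ha_namespaces):
    # HA namespace: active NN + standby NN (the standby also checkpoints).
    # Non-HA namespace: one NN + one SecondaryNameNode for checkpoints.
    return 2 * ha_namespaces + 2 * non_ha_namespaces

# The thread's example: NS1 is HA, NS2 and NS3 are not.
print(namenode_count(ha_namespaces=1, non_ha_namespaces=2))  # 6
```

Either way each namespace ends up with two NameNode-role daemons, because the checkpoint operation must happen somewhere regardless of HA.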
Yes, I think it's safe to say that - sorry that I missed out the SNNs in my
first response (I counted only the regular serving NameNodes) :)
On Thu, Dec 20, 2012 at 11:25 PM, ESGLinux esggru...@gmail.com wrote:
Hi again,
So finally the number of nodes are these:
1 Active NameNode + 1 Passive
Thank you very much,
your answer has clarified these concepts for me.
I didn't understand how I could mix HA and Federation, or how many nodes I
would need.
Kind Regards,
ESGLinux,
2012/12/20 Harsh J ha...@cloudera.com
Yes, I think it's safe to say that - sorry that I missed out the SNNs in
Btw, you can co-locate NameNodes (unique namespace ones) onto the same
machine if you need to - the configs easily allow this via rpc/http
port specifiers.
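As a sketch of what that co-location looks like in hdfs-site.xml (the nameservice IDs, hostname, and port values here are examples, not recommendations):

```xml
<!-- Two federated NameNodes on one machine, distinguished by ports. -->
<property><name>dfs.nameservices</name><value>ns2,ns3</value></property>
<property><name>dfs.namenode.rpc-address.ns2</name><value>host1:8020</value></property>
<property><name>dfs.namenode.http-address.ns2</name><value>host1:50070</value></property>
<property><name>dfs.namenode.rpc-address.ns3</name><value>host1:8021</value></property>
<property><name>dfs.namenode.http-address.ns3</name><value>host1:50071</value></property>
```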
On Thu, Dec 20, 2012 at 11:33 PM, ESGLinux esggru...@gmail.com wrote:
Thank you very much,
your answer has clarified these concepts
Hi Jon,
FYI, this issue in the fair scheduler was fixed by
https://issues.apache.org/jira/browse/MAPREDUCE-2905 for 1.1.0.
Though it is present again in MR2:
https://issues.apache.org/jira/browse/MAPREDUCE-3268
-Todd
On Wed, Nov 28, 2012 at 2:32 PM, Jon Allen jayaye...@gmail.com wrote:
Jie,
+1 to the way Jon elaborated it.
On Fri, Dec 21, 2012 at 6:36 AM, Todd Lipcon t...@cloudera.com wrote:
Hi Jon,
FYI, this issue in the fair scheduler was fixed by
https://issues.apache.org/jira/browse/MAPREDUCE-2905 for 1.1.0.
Though it is present again in MR2:
I had two servers at home; they sounded like a small airplane.
However, I have seen silent servers in a lab I am working in.
My price on that rack might be close to $0.
Then I could host a rack for $500/month. And if a machine breaks, I will
throw it away. So that makes for $0 maintenance cost.
You
It might be created in a different location under Cygwin.
On Dec 20, 2012, at 9:48 PM, Ramachandran Vilayannur
vparameswaranramachand...@gmail.com wrote:
Hi
The command,
bin/hadoop jar -v hadoop-examples-0.20.203.0.jar grep input output 'dfs[a-z]+'
returns without error in Cygwin. However the
Hi Serge,
Thanks for the response. I did a `find . -name output` on the hadoop home
directory and the cygwin directory, and also tried giving ./output in the
command, yet I can't find it...
Sitaraman
On Fri, Dec 21, 2012 at 11:28 AM, Serge Blazhiyevskyy
serge.blazhiyevs...@nice.com wrote:
It might be
Hi,
Finally I'm going to try this:
1 Machine: Active Name Node for NS1
1 Machine: Passive Name Node for NS1
1 Machine: NameNode for NS2 + NameNode for NS3
1 Machine: Secondary NameNode for NS2 + Secondary NameNode for NS3
Is this correct?
thanks,
ESGLinux
2012/12/20 Harsh J