AFAIK, the backup node was introduced from version 0.21 onwards.
From: praveenesh kumar [praveen...@gmail.com]
Sent: Wednesday, December 07, 2011 12:40 PM
To: common-user@hadoop.apache.org
Subject: HDFS Backup nodes
Does hadoop 0.20.205 support configuring HDFS
This means we are still relying on the Secondary NameNode approach for the
Namenode's backup.
Is OS-mirroring of the Namenode a good alternative to keep it alive all the
time?
Thanks,
Praveenesh
On Wed, Dec 7, 2011 at 1:35 PM, Uma Maheswara Rao G mahesw...@huawei.com wrote:
Hi,
I am trying to implement the following use case; is this possible in
hadoop? Would I have to use Hive or HBase?
Data comes in at hourly, 15 min, 10 min and 5 min intervals.
Goals are:
1. Compare incoming data with the data stored in the existing system
2. Identify incremental changes
Yes ... if you are looking for high uptime then keeping the Namenode OS-mirror
always running would be the best way to go.
We might need to explore further on the capabilities of HDFS backup node to see
how it can be utilized.
Thanks,
Sagar
-Original Message-
From: praveenesh kumar
Hi Shreya,
We had a similar question and some discussions, this mailing thread may help
you.
http://mail-archives.apache.org/mod_mbox/hadoop-common-user/201112.mbox/%3cCALH6cCNSXQye8F6geJiUDu+30Q4==EOd1pmU+rzpj50_evC5=w...@mail.gmail.com%3e
Regards,
Ravi Teja
Hi,
Can someone please send me the Hadoop comic.
Saw references about it in the mailing list.
Regards,
Shreya
Hi,
https://docs.google.com/viewer?a=v&pid=explorer&chrome=true&srcid=0B-zw6KHOtbT4MmRkZWJjYzEtYjI3Ni00NTFjLWE0OGItYTU5OGMxYjc0N2M1&hl=en_US
- alex
On Wed, Dec 7, 2011 at 10:47 AM, shreya@cognizant.com wrote:
Hi,
Can someone please send me the Hadoop comic.
Saw references about it in
Here you go
https://docs.google.com/viewer?a=v&pid=explorer&chrome=true&srcid=0B-zw6KHOtbT4MmRkZWJjYzEtYjI3Ni00NTFjLWE0OGItYTU5OGMxYjc0N2M1&hl=en_US&pli=1
Thanks,
Prashant
On Wed, Dec 7, 2011 at 1:47 AM, shreya@cognizant.com wrote:
Hi,
Can someone please send me the Hadoop comic.
Saw
Thanks guys for the link.
It's really nice ...
-Original Message-
From: alo alt [mailto:wget.n...@googlemail.com]
Sent: Wednesday, December 07, 2011 3:20 PM
To: common-user@hadoop.apache.org
Subject: Re: Hadoop Comic
Hi,
How to avoid "Warning: $HADOOP_HOME is deprecated" messages on hadoop
0.20.205 ?
I tried adding *export HADOOP_HOME_WARN_SUPPRESS=* in hadoop-env.sh on the
Namenode.
But it's still coming. Am I doing the right thing?
Thanks,
Praveenesh
Hi,
looks like a bug in .205:
https://issues.apache.org/jira/browse/HADOOP-7816
- Alex
On Wed, Dec 7, 2011 at 11:37 AM, praveenesh kumar praveen...@gmail.com wrote:
How to avoid Warning: $HADOOP_HOME is deprecated messages on hadoop
0.20.205 ?
I tried adding *export
Okay, I fixed it.
I had to add *export HADOOP_HOME_WARN_SUPPRESS=TRUE* in hadoop-env.sh
on all my hadoop nodes.
Thanks,
Praveenesh
On Wed, Dec 7, 2011 at 4:11 PM, alo alt wget.n...@googlemail.com wrote:
Hi,
looks like a bug in .205:
https://issues.apache.org/jira/browse/HADOOP-7816
-
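[For reference, a minimal sketch of the fix described above, as it would appear in hadoop-env.sh on each node; on 0.20.205 the bin scripts check this variable before printing the warning:]

```shell
# hadoop-env.sh: suppress the "$HADOOP_HOME is deprecated" warning.
# Needs to be set on every node, not just the Namenode.
export HADOOP_HOME_WARN_SUPPRESS=TRUE
```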
You should also configure the Namenode to use an NFS mount for one of
its storage directories. That will give you the most up-to-date backup of
the metadata in case of total node failure.
-Joey
On Wed, Dec 7, 2011 at 3:17 AM, praveenesh kumar praveen...@gmail.com wrote:
This means still we are
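[Joey's suggestion amounts to listing an NFS-backed directory alongside the local one in hdfs-site.xml; the Namenode writes its metadata to every directory in the list. A sketch, with hypothetical paths:]

```xml
<property>
  <name>dfs.name.dir</name>
  <!-- comma-separated list: metadata is written to all directories,
       so the NFS copy stays as current as the local one -->
  <value>/data/1/dfs/nn,/mnt/namenode-nfs/dfs/nn</value>
</property>
```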
Just to add to that note - we've run into an issue where the NFS share
was out of sync (the namenode storage failed even though the NFS share
was working), but the other local metadata was fine. At the restart of
the namenode it picked the NFS share's fsimage even though it was out of
sync. This had
What happens then if the NFS server fails or isn't reachable? Does HDFS lock
up? Does it gracefully ignore the NFS copy?
Thanks,
randy
- Original Message -
From: Joey Echeverria j...@cloudera.com
To: common-user@hadoop.apache.org
Sent: Wednesday, December 7, 2011 6:07:58 AM
Subject: Re:
On Mon, Dec 5, 2011 at 2:32 AM, praveenesh kumar praveen...@gmail.com wrote:
Hi all,
Can anyone guide me how to automate the hadoop installation/configuration
process?
We are rapidly making progress on Ambari. Ambari is an Apache project
that will deploy, configure, and administer Hadoop
All
I am encountering the following out-of-memory error during the reduce phase of
a large job.
Map output copy failure : java.lang.OutOfMemoryError: Java heap space
at
org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.shuffleInMemory(ReduceTask.java:1669)
at
Hi Jaganadh
I am the author of this comic strip. Please feel free to re-distribute it
as you see fit.. I assign the content under Creative Commons Attribution
Share-Alike.
Would also like to thank everybody for the nice feedback and encouragement!
This was a little experiment on my part to see
Hey Rand,
It will mark that storage directory as failed and ignore it from then
on. In order to do this correctly, you need a couple of options
enabled on the NFS mount to make sure that it doesn't retry
infinitely. I usually run with the tcp,soft,intr,timeo=10,retrans=10
options set.
-Joey
On
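[The mount options Joey lists map onto an /etc/fstab entry roughly like this; server name and paths are hypothetical:]

```shell
# /etc/fstab: soft,intr let NFS operations fail instead of hanging the
# Namenode; timeo/retrans bound how long a dead share is retried
nfsserver:/export/namenode  /mnt/namenode-nfs  nfs  tcp,soft,intr,timeo=10,retrans=10  0 0
```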
Thanks Joey. We've had enough problems with nfs (mainly under very high
load) that we thought it might be riskier to use it for the NN.
randy
On 12/07/2011 06:46 PM, Joey Echeverria wrote:
Hey Rand,
It will mark that storage directory as failed and ignore it from then
on. In order to do this
Randy,
On recent releases (CDH3u2 here for example), you also have
dfs.name.dir.restore, a boolean flag that will automatically try to
enable previously failed name directories upon every checkpoint if
possible. Hence if you have a SNN running, and your NFS failed at some
point and got marked as
Hi,
Does anyone have experience integrating Snappy into Hadoop? Please help me
with it.
I find Google doesn't provide hadoop-snappy now:
Hadoop-snappy is integrated into Hadoop Common(JUN 2011).
Hadoop-Snappy can be used as an add-on for recent (released) versions of Hadoop
that do
I had to struggle a bit while building Snappy for Hadoop 0.20.2 on Ubuntu.
However, I have now been able to install it on a 10 node cluster and it
works great for map output compression. Please check these notes; maybe they
will help in addition to the official Hadoop-Snappy notes.
Tom White (amongst
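[Once the native libraries are in place, map output compression with Snappy on the 0.20 line is switched on in mapred-site.xml roughly like this:]

```xml
<property>
  <name>mapred.compress.map.output</name>
  <value>true</value>
</property>
<property>
  <name>mapred.map.output.compression.codec</name>
  <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
```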
Hi all,
I have an 8-slave cluster on Amazon EC2. It was idle today but I found one
node died somehow. I couldn't figure out why. Below is some relevant log
info. Any input is appreciated.
jobtracker.log (only one entry)
2011-12-07 15:48:52,661 INFO org.apache.hadoop.mapred.JobTracker: Lost
Hi Prashant Kommireddi,
Last week, I read build-hadoop-from-source and followed it, but I failed to
compile hbase with mvn compile -Dsnappy. Did you install HBase 0.90.2?
According to build-hadoop-from-source, HBase 0.90.2 is incompatible with
the Hadoop 0.20.2 release.
-Original Message-
From:
I have not tried it with HBase, and yes 0.20.2 is not compatible with it.
What is the error you receive when you try compiling Snappy? I don't think
compiling Snappy would be dependent on HBase.
2011/12/7 Jinyan Xu jinyan...@exar.com
Hi Prashant Kommireddi,
Last week, I read
Jie,
When you say a node died, do you mean to say hadoop's services alone died or
the whole node itself went down and came back up (uptime can tell, perhaps)?
On 08-Dec-2011, at 8:09 AM, Jie Li wrote:
Hi all,
I have an 8-slave cluster on Amazon EC2. It was idle today but I found one
node
Hi Maneesh,
Great work! Without your comic strip, it took me 1 week of reading a couple of
articles on hadoop and going through the Hadoop Definitive Guide to understand
these concepts.
Cheers!
-Idris
On Thu, Dec 8, 2011 at 12:32 AM, maneesh varshney mvarsh...@gmail.com wrote:
Hi Jaganadh
I am the author of
Hi, I am reading the hadoop-0.23 source code, mainly focusing on hadoop yarn.
However, I have some problems reading the source code.
There is no debugging tool for hadoop, so I can't track the code execution
flow. Therefore I can't understand the code quickly, since there are lots of
overrides,
Hi Jing,
You can run ant eclipse-files at the command line, and then you can import
it into your eclipse.
I think you can read the code starting from JobTracker and TaskTracker; you
can find the main() function in these classes.
On 2011-12-8, at 1:50 PM, 陈竞 wrote:
hi, i am reading hadoop-0.23 source code,
My problem is that there are many definitions of one function; I can't tell
which function is really used, even in eclipse, since there are many
overrides, so I want to trace it. Are there tools like gdb in java for
running hadoop?
On December 8, 2011 at 1:59 PM, wang xin wangxin0072...@gmail.com wrote:
Hi guys !
I was trying to generate a job trace and topology trace.
I have hadoop set up for hduser at /usr/local/hadoop and ran the wordcount
program as hduser.
I have the mapreduce component set up in eclipse for user arun.
I set up a configuration:
Class: org.apache.hadoop.tools.rumen.TraceBuilder
You can run daemons from within Eclipse in debugging mode -- you only need to
launch the right main class (NameNode, DataNode, etc.). This is a
feature of Eclipse. But distributed programming is best debugged with proper
logging, if you can't afford running all the daemons and the
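[An alternative to launching a daemon inside Eclipse is to start it normally and attach a remote debugger. A hadoop-env.sh sketch; the port is arbitrary:]

```shell
# hadoop-env.sh: make the NameNode JVM listen for a debugger on port 8000
# (suspend=y blocks startup until Eclipse attaches via
# "Remote Java Application" in the debug configurations)
export HADOOP_NAMENODE_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=8000 $HADOOP_NAMENODE_OPTS"
```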
On Thu, Dec 8, 2011 at 12:32 AM, maneesh varshney mvarsh...@gmail.com wrote:
Hi Jaganadh
I am the author of this comic strip. Please feel free to re-distribute it
as you see fit.. I assign the content under Creative Commons Attribution
Share-Alike.
Hi Maneesh
Kudos for your wonderful
Hi Maneesh
I am sharing it on my blog and slideshare account.
Here is the link
http://jaganadhg.freeflux.net/blog/archive/2011/12/08/hadoop-comic-by-maneesh-varshney.html
http://www.slideshare.net/jaganadhg/hdfs-10509123
--
**
JAGANADH G
http://jaganadhg.in
*ILUGCBE*