I have not tried it yet, but you can use the old fsimage and edit logs to build a
new NameNode.
Raj K Singh
http://in.linkedin.com/in/rajkrrsingh
http://www.rajkrrsingh.blogspot.com
Mobile Tel: +91 (0)9899821370
On Fri, Jun 6, 2014 at 6:26 AM, ch huang
Try not to format the NN. Formatting will change the Cluster ID and Namespace ID.
The DataNodes would still have the old Cluster ID and Namespace ID, which would no
longer match the new IDs on the NN, and hence the communication will fail.
Locate the metadata folder of the old NN where the edit logs, fsimage files, and the
VERSION file are.
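For reference, the VERSION file in the NN metadata folder (under the `current/` subdirectory of `dfs.name.dir` / `dfs.namenode.name.dir`) is a plain properties file holding the IDs mentioned above. A hedged sketch of what it can look like on a Hadoop 2.x NameNode; every ID below is a made-up placeholder:

```properties
# Hypothetical contents of <dfs.name.dir>/current/VERSION (placeholder values)
namespaceID=1234567890
clusterID=CID-aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
cTime=0
storageType=NAME_NODE
layoutVersion=-47
```

The `clusterID` and `namespaceID` here are what must stay consistent with the values in the DataNodes' own VERSION files for them to register with the NN.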
Have you tried running it through hadoop jar instead of directly running
it through Eclipse? It seems like a problem with your path.
Try adding core-site.xml using conf.addResource(new Path(Vars.HADOOP_HOME
+ "/conf/core-site.xml")); before passing the conf reference to
FileSystem.get().
Also add hdfs-site.xml to the Configuration object. Then create the FileSystem
object from that Configuration before creating the Path.
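Putting the two suggestions together, a minimal sketch of a client that loads both site files before obtaining the FileSystem; the HADOOP_HOME value and the file path are assumptions you would adjust to your setup:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsClientSketch {
    public static void main(String[] args) throws Exception {
        String hadoopHome = "/usr/local/hadoop"; // assumption: adjust to your install

        Configuration conf = new Configuration();
        // Load the cluster's site files explicitly, so a client run outside
        // the cluster (e.g. from Eclipse) does not fall back to the default
        // local file system.
        conf.addResource(new Path(hadoopHome + "/conf/core-site.xml"));
        conf.addResource(new Path(hadoopHome + "/conf/hdfs-site.xml"));

        // Create the FileSystem from this Configuration before building Paths.
        FileSystem fs = FileSystem.get(conf);
        Path p = new Path("/tmp/example.txt"); // hypothetical path
        System.out.println("exists? " + fs.exists(p));
    }
}
```

If `fs.defaultFS` (or `fs.default.name` on older releases) is picked up correctly from core-site.xml, `FileSystem.get(conf)` returns an HDFS client rather than a LocalFileSystem, which is the usual symptom behind this kind of path problem.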
On Fri, Jun 6, 2014 at 4:21 PM, Raj K Singh rajkrrsi...@gmail.com wrote:
have you tried to run it through hadoop jar instead
You can extend Configured and implement the Tool interface (in your main
class), which can take the default parameters; put your code in the run method and use
FileSystem fs = FileSystem.get(this.getConf());
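As a sketch of that pattern, with a hypothetical class name; ToolRunner parses the generic options (-conf, -D, -fs, etc.) into the Configuration before run is called:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class MyJob extends Configured implements Tool {
    @Override
    public int run(String[] args) throws Exception {
        // getConf() returns the Configuration populated by ToolRunner,
        // including any -D key=value or -conf file.xml generic options.
        FileSystem fs = FileSystem.get(getConf());
        System.out.println("default FS: " + fs.getUri());
        return 0;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new MyJob(), args));
    }
}
```

Launched via `hadoop jar`, the job then picks up the cluster's site files from the classpath automatically, avoiding the hard-coded addResource calls.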
On Fri, Jun 6, 2014 at 4:41 PM, Vinod Patil vinod.mapa...@gmail.com wrote:
Also add hdfs-site.xml
Hi Hadoop Experts,
I have to transfer 5 years of historical billing data, to the tune of 25-30
TB, to HDFS, to be analyzed later by a MapReduce program. Sqoop is out of the
question as these files do not reside in an OLTP database, and so is Flume, as these
are not log files generated by an app server.
What tools are
Hi,
Flume is not limited to log data. It has a Flume source that observes
directories and can load existing files. The morphlines from the Kite SDK can even
help you transform data on the fly.
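The directory-observing source Mirko mentions is presumably Flume's spooling-directory source. A hedged agent config sketch; the agent, channel, and sink names and all paths are made up and would need adapting:

```properties
# Hypothetical Flume agent: watch a local directory and write its files to HDFS
agent1.sources = spool1
agent1.channels = ch1
agent1.sinks = hdfs1

# Spooling-directory source: ingests files already present in spoolDir
agent1.sources.spool1.type = spooldir
agent1.sources.spool1.spoolDir = /data/incoming
agent1.sources.spool1.channels = ch1

agent1.channels.ch1.type = memory

agent1.sinks.hdfs1.type = hdfs
agent1.sinks.hdfs1.hdfs.path = hdfs://namenode:8020/billing/raw
agent1.sinks.hdfs1.channel = ch1
```

For a one-off 25-30 TB bulk load, plain `hdfs dfs -put` or DistCp from a mounted source may be simpler; Flume earns its keep when new files keep arriving in the directory.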
Cheers,
Mirko
Sent from Samsung Mobile
Hi,
Can somebody provide me a link on how to install and configure Hadoop on
Amazon EC2 instances?
I am not a Hadoop admin, but I was asked to install and configure Hadoop.
If possible, a step-by-step guide (a few steps would suffice).
Thanks in advance.
Regards
Shashi
Shashi,
how does this look to you?
http://shmsoft.blogspot.com/2014/06/how-to-build-hadoop-cluster-on-aws.html?
Mark
On Fri, Jun 6, 2014 at 11:06 AM, Shashidhar Rao raoshashidhar...@gmail.com
wrote:
Hi,
Can somebody provide me a link on how to install and configure hadoop on
Amazon EC2
Also take a look at Apache Whirr.
On 6 Jun 2014 20:46, Mark Kerzner mark.kerz...@shmsoft.com wrote:
Shashi,
how does this look to you,
http://shmsoft.blogspot.com/2014/06/how-to-build-hadoop-cluster-on-aws.html
?
Mark
On Fri, Jun 6, 2014 at 11:06 AM, Shashidhar Rao
You can follow the provided link to set up a Hadoop cluster very easily:
http://java.dzone.com/articles/how-set-multi-node-hadoop
Raj K Singh
http://in.linkedin.com/in/rajkrrsingh
http://www.rajkrrsingh.blogspot.com
Mobile Tel: +91 (0)9899821370
On Fri,
The exception happens on Hadoop version 2.2. The whole error message is shown
below. Notice that the level is DEBUG; not sure if such an exception is serious.
2014-06-05 14:39:31,135 DEBUG [pool-1-thread-1] (DFSInputStream.java:1095) -
Error making BlockReader. Closing stale
Can you give more details on the data that you are storing in the release data
management? And also on how it is accessed - read, and modified?
+vinod
On Jun 2, 2014, at 5:33 AM, rahul.soa rahul@googlemail.com wrote:
Hello All,
I'm a newbie to Hadoop and interested to know if Hadoop