For a better understanding of the flow, I would recommend going through 
the doc below:
http://hadoop.apache.org/common/docs/r0.16.4/hdfs_design.html#The+File+System+Namespace

Regards,
Uma

----- Original Message -----
From: Uma Maheswara Rao G 72686 <mahesw...@huawei.com>
Date: Wednesday, September 21, 2011 2:36 pm
Subject: Re: Any other way to copy to HDFS ?
To: common-user@hadoop.apache.org

> 
> Hi,
> 
> You need not copy the files to the NameNode.
> 
> Hadoop provides client code as well to copy files.
> To copy files from another node (one outside the DFS cluster), you 
> need to put the hadoop**.jar files on the classpath and use the 
> code snippet below.
> 
> // requires java.net.URI, org.apache.hadoop.conf.Configuration,
> // org.apache.hadoop.fs.{FileSystem, Path} and
> // org.apache.hadoop.hdfs.DistributedFileSystem on the classpath
> FileSystem fs = new DistributedFileSystem();
> fs.initialize(URI.create("NAMENODE_URI"), configuration);
> 
> fs.copyFromLocalFile(srcPath, dstPath);
> 
> Using this API, you can copy files to HDFS from any machine.
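> 
> For reference, here is a minimal self-contained sketch of the same 
> idea; the NameNode URI and the paths are placeholders you would 
> replace with your own:
> 
> import java.net.URI;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> 
> public class HdfsCopy {
>     public static void main(String[] args) throws Exception {
>         Configuration conf = new Configuration();
>         // Connect to the NameNode for metadata only; the file data
>         // itself streams directly to the DataNodes, so it never
>         // passes through the NameNode's local disk.
>         FileSystem fs =
>             FileSystem.get(URI.create("hdfs://namenode-host:9000"), conf);
>         try {
>             fs.copyFromLocalFile(new Path("/local/path/bigfile"),
>                                  new Path("/user/hadoop/bigfile"));
>         } finally {
>             fs.close();
>         }
>     }
> }
> 
> You can run this from any machine that has the Hadoop jars on its 
> classpath and network access to the cluster.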
> 
> Regards,
> Uma
> 
> ----- Original Message -----
> From: praveenesh kumar <praveen...@gmail.com>
> Date: Wednesday, September 21, 2011 2:14 pm
> Subject: Any other way to copy to HDFS ?
> To: common-user@hadoop.apache.org
> 
> > Guys,
> > 
> > As far as I know Hadoop, I think that to copy files to HDFS, 
> > they first need to be copied to the NameNode's local filesystem. 
> > Is that right?
> > So does that mean that even if I have a Hadoop cluster of 10 
> > nodes with an overall capacity of 6 TB, if my NameNode's hard 
> > disk capacity is 500 GB, I cannot copy any file larger than 
> > 500 GB to HDFS?
> > 
> > Is there any other way to copy directly to HDFS without copying 
> > the file to the NameNode's local filesystem?
> > What other ways are there to copy files larger than the 
> > NameNode's disk capacity?
> > 
> > Thanks,
> > Praveenesh.
> > 
> 
