Fuse is definitely nice for many situations where you want to use regular Unix file paths to navigate HDFS (say, from automation programs). I would be cautious, though, in circumstances where you want to *immediately* access data that's been copied into HDFS, or where you're depending on a deleted file *actually* being gone from HDFS.
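For those cases, a minimal sketch of the direct HDFS shell equivalents, which bypass the FUSE mount entirely (paths are illustrative, and this assumes the hadoop client is on your PATH and configured for the cluster):

```shell
# Copy a local file straight into HDFS; it is visible to HDFS readers
# as soon as the command returns, with no FUSE cache in the way.
hadoop fs -put /tmp/stage/AdminLog.log /user/data/staging/

# Delete directly in HDFS; the file is gone immediately.
hadoop fs -rm /user/data/staging/AdminLog.log
```

These talk to the NameNode directly, so there is no FUSE-side synchronization delay to wait out.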
For such operations you're probably better off using HDFS shell commands, since they take effect immediately. There's a synchronization dance between FUSE and HDFS that can add a minute or more to the execution time when you do things like:

cp -p /tmp/stage/AdminLog.log /mnt/hdfs/user/data/staging

or

rm -f /mnt/hdfs/user/data/staging/*

Cheers,

Ken Barclay
Integration Engineer
Wells Fargo Bank - ISD | 45 Fremont Street, 10th Floor | San Francisco, CA 94105
MAC A0194-100

-----Original Message-----
From: ed [mailto:hadoopn...@gmail.com]
Sent: Thursday, September 30, 2010 5:44 AM
To: common-user@hadoop.apache.org
Subject: Re: Read/Writing into HDFS

I haven't tried it out yet, but you can theoretically mount HDFS as a standard file system in Linux using FUSE:
http://wiki.apache.org/hadoop/MountableHDFS

If you're using Cloudera's distro of Hadoop, it should come with FUSE prepackaged for you:
https://wiki.cloudera.com/display/DOC/Mountable+HDFS

~Ed

On Thu, Sep 30, 2010 at 7:59 AM, Adarsh Sharma <adarsh.sha...@orkash.com> wrote:

> Dear all,
> I have set up a Hadoop cluster of 10 nodes.
> I want to know how we can read/write a file from HDFS (simple).
> Yes, I know there are commands; I read all the HDFS commands.
> bin/hadoop -copyFromLocal says that the file should be in the local filesystem.
>
> But I want to know how we can read these files from the cluster.
> What are the different ways to read files from HDFS?
> Can an extra node (other than the cluster nodes) read a file from the
> cluster? If yes, how?
>
> Thanks in Advance
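On the last question in the quoted mail: yes, a machine outside the cluster can read from HDFS, as long as it has the same Hadoop release installed and its configuration points at the cluster's NameNode. A sketch of the setup (the hostname, port, and paths are hypothetical; fs.default.name is the config key used by Hadoop releases of this era):

```shell
# In core-site.xml on the extra node, point the client at the NameNode:
#   <property>
#     <name>fs.default.name</name>
#     <value>hdfs://namenode.example.com:8020</value>
#   </property>

# After that, ordinary HDFS shell commands work from the extra node:
hadoop fs -ls /user/data                          # list files in the cluster
hadoop fs -cat /user/data/part-00000              # stream a file to stdout
hadoop fs -get /user/data/part-00000 /tmp/copy    # copy it to the local disk
```

The node doesn't need to run a DataNode or TaskTracker; being a client is enough, provided it can reach the NameNode and DataNodes over the network.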