Hey, thanks Aastha and Kartheek,
It's working fine now. I am able to copy files and folders using fuse-dfs. There
was some issue with my HDFS setup because of which I was facing the problem.
Thanks :)
From: Aastha Mehta [mailto:aasth...@gmail.com]
Sent: Friday, December 16, 2011 1:03 PM
To:
How long did you wait after copying? I've seen this behavior before; it's
due to the semantics of close() in FUSE and not easily fixed in fuse-dfs. Within
a minute or so, though, the copy should show the right size.
-Joey
On Dec 16, 2011, at 1:55, Stuti Awasthi stutiawas...@hcl.com wrote:
Hi
Rather, the problem I have seen is that we need to wait for the
datanode to be properly set up and registered with the namenode. This
usually takes about 5 minutes. Only after that do I proceed with any
operations on fuse-dfs.
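One way to confirm that the datanodes have registered before touching the fuse-dfs mount is to poll the dfsadmin report (a sketch; it assumes the hadoop CLI is on the PATH and the user has permission to query the namenode):

```shell
# Print the live-datanode summary; proceed only once the expected
# number of datanodes shows up as available.
hadoop dfsadmin -report | grep "Datanodes available"
```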
Thanks.
On 16 December 2011 15:42, Joey Echeverria
Hi Joey,
Well, once I rectified the issue, it took no more than a few seconds (5 seconds or so)
for some KB of video file to be copied via fuse-dfs.
-Original Message-
From: Joey Echeverria [mailto:j...@cloudera.com]
Sent: Friday, December 16, 2011 3:43 PM
To: hdfs-user@hadoop.apache.org
Subject:
Hi Alo,
I copied the file as the hadoop user. I am running Hadoop as the same
user and used fuse with that user as well. I have mounted on Linux for now.
-Original Message-
From: alo alt [mailto:wget.n...@googlemail.com]
Sent: Friday, December 16, 2011 4:48 PM
To:
Yes, you can use the utility methods from IOUtils, e.g.:

FileOutputStream fo = new FileOutputStream(file);
IOUtils.copyBytes(fs.open(fileName), fo, 1024, true);

Here fs is the DFS FileSystem instance. The other option is to make use of the FileSystem APIs, e.g.:

FileSystem fs = new DistributedFileSystem();
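Putting those pieces together, a minimal sketch of copying a file from HDFS to the local file system might look like the following. The namenode URI and both paths are placeholders for illustration, not values from this thread; the usual pattern is to obtain the FileSystem via FileSystem.get() rather than constructing DistributedFileSystem directly:

```java
import java.io.FileOutputStream;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class HdfsToLocalCopy {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical namenode address; replace with your cluster's URI.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:9000"), conf);

        Path src = new Path("/user/hadoop/video.avi");       // example HDFS path
        FileOutputStream out = new FileOutputStream("/tmp/video.avi");

        // copyBytes closes both streams when the last argument is true.
        IOUtils.copyBytes(fs.open(src), out, 4096, true);
    }
}
```

This requires a running HDFS cluster and the Hadoop jars on the classpath.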
Hi, I created this page http://wiki.apache.org/hadoop/BuildFuseDfs023,
hopefully soon obsoleted by the associated patch
https://issues.apache.org/jira/browse/HDFS-2696.
Best,
Petru
On Dec 9, 2011, at 7:02 PM, Arun C Murthy wrote:
Petru,
This is incredibly useful, thanks!
Do you mind
Hi. Sorry for my ignorance, or if I missed the answer to
this question in the docs.
Does an HDFS NFS mount balance the load like it would for
regular usage from within a Hadoop cluster?
Does it split up the files and move them to the node nearest to
the NFS client?
Or does all the NFS network
HDFS doesn't natively support NFS. In order to export HDFS via NFS you'd have
to mount it to the local file system with fuse and then export that directory.
In that case, all traffic would go through the host acting as the NFS server.
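A rough sketch of that setup follows; the mount point, namenode address, and export options are illustrative assumptions, not commands from the thread:

```shell
# Mount HDFS on the local file system via fuse-dfs
# (namenode host/port are placeholders).
mkdir -p /mnt/hdfs
fuse_dfs_wrapper.sh dfs://namenode:9000 /mnt/hdfs

# Export the mount point over NFS from this host, then reload exports.
echo "/mnt/hdfs  *(ro,fsid=1,sync,no_subtree_check)" >> /etc/exports
exportfs -ra
```

Note that every byte read or written by NFS clients then passes through this single host.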
-Joey
On Dec 16, 2011, at 19:07, Mark Hedges
Joey is speaking precisely, but in an intentionally very limited way.
Apache HDFS, the file system that comes with Apache Hadoop, does not
support NFS.
On the other hand, maprfs, which is part of the commercial MapR
distribution based on Apache Hadoop, does support NFS natively and