Hi Zhang,
I haven't played much with fuse-dfs. In my opinion, the memory leak is
something solvable, and I can see Apache has made some fixes for this
issue in libhdfs.
If you encountered these problems with an older version of Hadoop, I
think you should give the latest stable version a try.
Since I haven't had much experience with fuse-dfs so far, I cannot say
whether it's the best option or not, but it's definitely better than
mixing davfs2 and WebDAV together.
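If you do go back to fuse-dfs, mounting usually looks something like
this (a sketch from my memory of the 0.20 contrib layout; the wrapper
script location, namenode host, and port are placeholders you'd adjust
for your setup):

```shell
# The wrapper script lives in the Hadoop source tree and sets up
# CLASSPATH and LD_LIBRARY_PATH for libhdfs before invoking fuse_dfs.
cd $HADOOP_HOME/src/contrib/fuse-dfs/src

# Mount HDFS at /mnt/hdfs; dfs://host:port should match fs.default.name.
sudo mkdir -p /mnt/hdfs
sudo ./fuse_dfs_wrapper.sh dfs://namenode:9000 /mnt/hdfs

# Unmount when done.
sudo umount /mnt/hdfs
```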
Best,
Huy Phan
Zhang Bingjun (Eddy) wrote:
Dear Huy Phan,
Thanks for your quick reply.
I was using fuse-dfs before, but I found a serious memory leak with
it: roughly 10MB leaked per 10k file reads/writes. When the occupied
memory reached about 150MB, read/write performance dropped
dramatically. Did you encounter these problems?
What I am trying to do is to mount HDFS as a local directory in
Ubuntu. Do you think fuse-dfs is the best option so far?
Thank you so much for your input!
Best regards,
Zhang Bingjun (Eddy)
E-mail: eddym...@gmail.com <mailto:eddym...@gmail.com>,
bing...@nus.edu.sg <mailto:bing...@nus.edu.sg>,
bing...@comp.nus.edu.sg <mailto:bing...@comp.nus.edu.sg>
Tel No: +65-96188110 (M)
On Tue, Oct 27, 2009 at 6:55 PM, Huy Phan <dac...@gmail.com
<mailto:dac...@gmail.com>> wrote:
Hi Zhang,
Here is the patch for davfs2 to solve "server does not support
WebDAV" issue:
diff --git a/src/webdav.c b/src/webdav.c
index 8ec7a2d..4bdaece 100644
--- a/src/webdav.c
+++ b/src/webdav.c
@@ -472,7 +472,7 @@ dav_init_connection(const char *path)
     if (!ret) {
         initialized = 1;
-        if (!caps.dav_class1 && !caps.dav_class2 && !ignore_dav_header) {
+        if (!caps.dav_class1 && !ignore_dav_header) {
             if (have_terminal) {
                 error(EXIT_FAILURE, 0,
                       _("mounting failed; the server does not support WebDAV"));
davfs2 and WebDAV are not a good mix, actually; I tried combining
them and the performance was really bad. With a load test of
10 requests/s, the load average on my namenode was always > 15, and it
took me about 5 minutes to `ls` the root directory of HDFS during the
test.
Since you're using Hadoop 0.20.1, it's better to use the fuse-dfs
library provided in the Hadoop package. You have to do some tricks to
compile fuse-dfs with Hadoop; otherwise it will take you a lot of
time compiling redundant things.
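For the record, the trick I mean is to build only libhdfs and the
fuse-dfs contrib module instead of the whole tree; from memory it is
roughly this (the ant targets and flags are from the 0.20-era build
and may differ in your checkout):

```shell
cd $HADOOP_HOME

# Build the native libhdfs library first (needs a JDK and gcc).
ant compile -Dcompile.c++=true -Dlibhdfs=true

# Then build just the fuse-dfs contrib module against it.
ant compile-contrib -Dlibhdfs=1 -Dfusedfs=1
```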
Best,
Huy Phan
Zhang Bingjun (Eddy) wrote:
Dear Huy Phan and others,
Thanks a lot for your efforts in customizing the WebDAV server
<http://github.com/huyphan/HDFS-over-Webdav> and making it work
for Hadoop-0.20.1.
After setting up the WebDAV server, I could access it using the
Cadaver client in Ubuntu without any username or password.
Operations like deleting files were working. The command is:
cadaver http://server:9800
However, when I was trying to mount the WebDav server using
davfs2 in Ubuntu, I always get the following error:
"mount.davfs: mounting failed; the server does not support
WebDAV".
I was prompted to input a username and password, like below:
had...@hdfs2:/mnt$ sudo mount.davfs
http://192.168.0.131:9800/test hdfs-webdav/
Please enter the username to authenticate with server
http://192.168.0.131:9800/test or hit enter for none.
Username: hadoop
Please enter the password to authenticate user hadoop with server
http://192.168.0.131:9800/test or hit enter for none.
Password:
mount.davfs: mounting failed; the server does not support WebDAV
Even though I have tried all possible usernames and passwords, either
from the WebDAV accounts.properties file or from the Ubuntu system of
the WebDAV server, I still got this error message.
Could you or anyone else give me some hints on this problem? How
could I solve it? I very much appreciate your help!
Best regards,
Zhang Bingjun (Eddy)
E-mail: eddym...@gmail.com <mailto:eddym...@gmail.com>,
bing...@nus.edu.sg <mailto:bing...@nus.edu.sg>,
bing...@comp.nus.edu.sg <mailto:bing...@comp.nus.edu.sg>
Tel No: +65-96188110 (M)