Thanks for the response, Craig.
I looked at the fuse-dfs C code, and it looks like it rejects anything other than "dfs://". Given that Hadoop can connect to the S3 file system, would allowing the s3 scheme solve my problem?

Roopa

On Jan 28, 2009, at 1:03 PM, Craig Macdonald wrote:

Hi Roopa,

I can't comment on the S3 specifics. However, fuse-dfs is based on a C interface called libhdfs, which allows C programs (such as fuse-dfs) to connect to the Hadoop file system Java API. This being the case, fuse-dfs should (theoretically) be able to connect to any file system that Hadoop can. Your mileage may vary, but if you find issues, please do report them through the normal channels.

Craig


Roopa Sudheendra wrote:
I am experimenting with Hadoop backed by the Amazon S3 filesystem as one of our backup storage solutions. So far, Hadoop with S3 (block-based, since it overcomes the 5 GB limit) seems to be fine. My problem is that I want to mount this filesystem using fuse-dfs (so I don't have to worry about how the file is written on the system). Since the namenode does not get started with an S3-backed Hadoop system, how can I connect fuse-dfs to this setup?
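For context, pointing Hadoop at the S3 block filesystem is done in the site configuration along these lines (a sketch based on the Hadoop docs of that era; the bucket name and credentials are placeholders):

```xml
<!-- hadoop-site.xml: use the S3 block filesystem as the default FS.
     BUCKET, ID, and SECRET are placeholders. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>s3://BUCKET</value>
  </property>
  <property>
    <name>fs.s3.awsAccessKeyId</name>
    <value>ID</value>
  </property>
  <property>
    <name>fs.s3.awsSecretAccessKey</name>
    <value>SECRET</value>
  </property>
</configuration>
```

With this configuration there is no namenode to start, which is why the usual dfs://host:port mount target for fuse-dfs does not apply.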

Appreciate your help.
Thanks,
Roopa

