Hi Wasim,
Here is what you could do.
1. Deploy KFS
2. Build a hadoop-site.xml config file and set fs.default.name and the other config
variables to point to KFS, as described at
http://hadoop.apache.org/core/docs/current/api/org/apache/hadoop/fs/kfs/package-summary.html
3. If you place this hadoop-site.xml in a directory, say ~/foo/myconf, then you
can use hadoop commands like ./bin/hadoop --config ~/foo/myconf fs -ls
/foo/bar.txt
4. If you want to use the Hadoop FileSystem API, just put this directory as the
first entry in your classpath, so that a new Configuration object loads this
hadoop-site.xml and your FileSystem API calls talk to KFS.
5. Alternatively, you could create an instance of KosmosFileSystem, which
extends FileSystem; see org.apache.hadoop.fs.kfs.KosmosFileSystem for an
example.
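The hadoop-site.xml in step 2 might look like the sketch below. The metaserver host and port are placeholders for your own KFS deployment; the property names follow the KFS package documentation linked above.

```xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <!-- placeholder: point this at your KFS metaserver -->
    <value>kfs://metaserver.example.com:20000</value>
  </property>
  <property>
    <name>fs.kfs.metaServerHost</name>
    <value>metaserver.example.com</value>
  </property>
  <property>
    <name>fs.kfs.metaServerPort</name>
    <value>20000</value>
  </property>
</configuration>
```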
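For step 4, a minimal sketch of going through the FileSystem API might look like this. It assumes hadoop-core and the KFS client jar are on the classpath, with the config directory from step 3 first, and the path /foo is hypothetical:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class KfsListing {
    public static void main(String[] args) throws IOException {
        // Loads hadoop-site.xml from the first classpath entry that has one;
        // with fs.default.name set to kfs://..., FileSystem.get() returns
        // a KosmosFileSystem instance under the hood.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Ordinary FileSystem calls now go to KFS.
        for (FileStatus status : fs.listStatus(new Path("/foo"))) {
            System.out.println(status.getPath());
        }
        fs.close();
    }
}
```

Note that nothing here requires a running JobTracker or TaskTracker; it is purely the storage-side API.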
Lohit
----- Original Message -----
From: Wasim Bari
To: core-user@hadoop.apache.org
Sent: Tuesday, February 3, 2009 3:03:51 AM
Subject: Hadoop-KFS-FileSystem API
Hi,
I am looking to use KFS as storage with Hadoop FileSystem API.
http://hadoop.apache.org/core/docs/current/api/org/apache/hadoop/fs/kfs/package-summary.html
This page describes how to use KFS with Hadoop, and the last step it lists is
to start the map/reduce trackers.
Is it necessary to turn them on?
How does storage-only use work with the FileSystem API?
Thanks
Wasim