Bryan Duxbury wrote:
Nobody has any ideas about this?

-Bryan

On May 13, 2008, at 11:27 AM, Bryan Duxbury wrote:

I'm trying to create a Java application that writes to HDFS. I have it set up such that hadoop-0.16.3 is on my machine, and the env variables HADOOP_HOME and HADOOP_CONF_DIR point to the correct respective directories. My app lives elsewhere, but generates its classpath by looking in those environment variables. Here's what my generated classpath looks like:

/Users/bryanduxbury/hadoop-0.16.3/conf:/Users/bryanduxbury/hadoop-0.16.3/hadoop-0.16.3-core.jar:/Users/bryanduxbury/hadoop-0.16.3/hadoop-0.16.3-test.jar:/Users/bryanduxbury/hadoop-0.16.3/lib/commons-cli-2.0-SNAPSHOT.jar:/Users/bryanduxbury/hadoop-0.16.3/lib/commons-codec-1.3.jar:/Users/bryanduxbury/hadoop-0.16.3/lib/commons-httpclient-3.0.1.jar:/Users/bryanduxbury/hadoop-0.16.3/lib/commons-logging-1.0.4.jar:/Users/bryanduxbury/hadoop-0.16.3/lib/commons-logging-api-1.0.4.jar:/Users/bryanduxbury/hadoop-0.16.3/lib/jets3t-0.5.0.jar:/Users/bryanduxbury/hadoop-0.16.3/lib/jetty-5.1.4.jar:/Users/bryanduxbury/hadoop-0.16.3/lib/jetty-ext/commons-el.jar:/Users/bryanduxbury/hadoop-0.16.3/lib/jetty-ext/jasper-compiler.jar:/Users/bryanduxbury/hadoop-0.16.3/lib/jetty-ext/jasper-runtime.jar:/Users/bryanduxbury/hadoop-0.16.3/lib/jetty-ext/jsp-api.jar:/Users/bryanduxbury/hadoop-0.16.3/lib/junit-3.8.1.jar:/Users/bryanduxbury/hadoop-0.16.3/lib/kfs-0.1.jar:/Users/bryanduxbury/hadoop-0.16.3/lib/log4j-1.2.13.jar:/Users/bryanduxbury/hadoop-0.16.3/lib/servlet-api.jar:/Users/bryanduxbury/hadoop-0.16.3/lib/xmlenc-0.52.jar:/Users/bryanduxbury/projects/hdfs_collector/lib/jtestr-0.2.jar:/Users/bryanduxbury/projects/hdfs_collector/lib/jvyaml.jar:/Users/bryanduxbury/projects/hdfs_collector/lib/libthrift.jar:/Users/bryanduxbury/projects/hdfs_collector/build/hdfs_collector.jar


The problem I have is that when I go to get a FileSystem object for my file:/// files (for testing locally), I'm getting errors like this:

   [jtestr] java.io.IOException: No FileSystem for scheme: file
   [jtestr]       org/apache/hadoop/fs/FileSystem.java:1179:in `createFileSystem'
   [jtestr]       org/apache/hadoop/fs/FileSystem.java:55:in `access$300'
   [jtestr]       org/apache/hadoop/fs/FileSystem.java:1193:in `get'
   [jtestr]       org/apache/hadoop/fs/FileSystem.java:150:in `get'
   [jtestr]       org/apache/hadoop/fs/FileSystem.java:124:in `getNamed'
   [jtestr]       org/apache/hadoop/fs/FileSystem.java:96:in `get'
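
For reference, the call that hits this is essentially the stock FileSystem.get path. A rough sketch of the kind of code involved (a plain Configuration and a hypothetical test path, not the actual collector code):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsCollectorSmokeTest {
        public static void main(String[] args) throws Exception {
            // Configuration picks up hadoop-default.xml and hadoop-site.xml
            // from whatever is on the classpath.
            Configuration conf = new Configuration();
            // With fs.default.name left at its default (file:///), this should
            // hand back a LocalFileSystem instance rather than throwing.
            FileSystem fs = FileSystem.get(conf);
            System.out.println("Got filesystem: " + fs.getClass().getName());
            // Hypothetical test path, just to exercise a local write.
            fs.mkdirs(new Path("/tmp/hdfs_collector_test"));
        }
    }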


Filesystems get located by looking up the mapping from URL scheme to class name in your conf files. If a filesystem isn't found, it means your hadoop-default.xml isn't being picked up (it's in the root of the hadoop-core jar), or a different version is being picked up which contains invalid data.
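
A quick way to check what actually got loaded is to ask the Configuration directly; fs.file.impl is the entry in hadoop-default.xml that maps the file:// scheme to LocalFileSystem. A rough sketch:

    import org.apache.hadoop.conf.Configuration;

    public class SchemeCheck {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // Where (if anywhere) hadoop-default.xml is visible on the classpath.
            System.out.println("hadoop-default.xml: "
                + Configuration.class.getClassLoader().getResource("hadoop-default.xml"));
            // The mappings the conf actually contains for the file:// scheme.
            System.out.println("fs.file.impl    = " + conf.get("fs.file.impl"));
            System.out.println("fs.default.name = " + conf.get("fs.default.name"));
        }
    }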

This could be a good opportunity for you to write some extra diagnostics for Hadoop: something to dump all the job conf information on the client, and even validate a conf by checking for important values.
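
A minimal sketch of what that diagnostic could look like, just dumping a hand-picked list of keys and flagging anything missing (the key list is illustrative, not exhaustive):

    import org.apache.hadoop.conf.Configuration;

    public class ConfDump {
        // A few values worth validating on the client; extend as needed.
        private static final String[] KEYS = {
            "fs.default.name", "fs.file.impl", "fs.hdfs.impl",
            "hadoop.tmp.dir", "dfs.replication"
        };

        public static void main(String[] args) {
            Configuration conf = new Configuration();
            for (String key : KEYS) {
                String value = conf.get(key);
                System.out.println(key + " = " + (value == null ? "<missing!>" : value));
            }
        }
    }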

steve
