[ https://issues.apache.org/jira/browse/CONNECTORS-858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13873424#comment-13873424 ]

Karl Wright edited comment on CONNECTORS-858 at 1/16/14 2:20 PM:
-----------------------------------------------------------------

Ok, some insight.

This problem has been seen by others.  See: 
http://stackoverflow.com/questions/20355176/no-filesystem-for-scheme-hdfs-ioexception-in-hadoop-2-2-0-wordcount-example

The code where it fails for the above case is here:

{code}
  public static Class<? extends FileSystem> getFileSystemClass(String scheme,
      Configuration conf) throws IOException {
    if (!FILE_SYSTEMS_LOADED) {
      loadFileSystems();
    }
    Class<? extends FileSystem> clazz = null;
    if (conf != null) {
      clazz = (Class<? extends FileSystem>) conf.getClass("fs." + scheme + ".impl", null);
    }
    if (clazz == null) {
      clazz = SERVICE_FILE_SYSTEMS.get(scheme);
    }
    if (clazz == null) {
      throw new IOException("No FileSystem for scheme: " + scheme);
    }
    return clazz;
  }
{code}
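As an aside, the first branch above suggests the usual workaround for this error: pin the implementation class explicitly in configuration, so that conf.getClass("fs.hdfs.impl", null) returns non-null and the ServiceLoader-backed registry is never consulted.  A sketch only, assuming Hadoop 2.x property names (this is the fix given in the Stack Overflow thread, not something our code does today):

{code:xml}
<!-- core-site.xml sketch: makes the conf.getClass("fs.hdfs.impl", null)
     branch succeed, bypassing SERVICE_FILE_SYSTEMS entirely. -->
<property>
  <name>fs.hdfs.impl</name>
  <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
</property>
{code}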

SERVICE_FILE_SYSTEMS is a static registry, but that's not the interesting part.  By default, this class uses java.util.ServiceLoader to locate all the FileSystem.class services.  ServiceLoader interacts with the classloader, though, as described here: http://docs.oracle.com/javase/6/docs/api/java/util/ServiceLoader.html .  So the provider-configuration file META-INF/services/org.apache.hadoop.fs.FileSystem in the Hadoop jars should point us at the implementation class for the hdfs scheme - and then we can see why it isn't being found.
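One way to check is to dump every copy of that provider-configuration file the classloader can actually see, and what each one declares.  A self-contained diagnostic sketch (the ServiceFileCheck class name and the idea of scanning META-INF/services resources by hand are mine, not Hadoop's; it mimics what ServiceLoader does under the hood):

{code}
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;

public class ServiceFileCheck {
    // Collect every implementation class declared for the given service
    // interface, across all copies of its provider-configuration file
    // visible to the classloader.
    static List<String> declaredImplementations(String serviceName, ClassLoader cl)
            throws IOException {
        List<String> impls = new ArrayList<>();
        Enumeration<URL> urls = cl.getResources("META-INF/services/" + serviceName);
        while (urls.hasMoreElements()) {
            URL url = urls.nextElement();
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(url.openStream(), StandardCharsets.UTF_8))) {
                String line;
                while ((line = r.readLine()) != null) {
                    line = line.trim();
                    // Skip blank lines and comments, as ServiceLoader does.
                    if (!line.isEmpty() && !line.startsWith("#")) {
                        impls.add(line);
                    }
                }
            }
        }
        return impls;
    }

    public static void main(String[] args) throws IOException {
        ClassLoader cl = Thread.currentThread().getContextClassLoader();
        for (String impl : declaredImplementations(
                "org.apache.hadoop.fs.FileSystem", cl)) {
            System.out.println(impl);
        }
    }
}
{code}

Run it with the connector's classpath: if org.apache.hadoop.hdfs.DistributedFileSystem never shows up in the output, ServiceLoader can't see it either, and that would explain the "No FileSystem for scheme: hdfs" failure.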




> IO exception: No FileSystem for scheme: hdfs
> --------------------------------------------
>
>                 Key: CONNECTORS-858
>                 URL: https://issues.apache.org/jira/browse/CONNECTORS-858
>             Project: ManifoldCF
>          Issue Type: Bug
>          Components: HDFS connector
>    Affects Versions: ManifoldCF 1.5
>            Reporter: Minoru Osuka
>            Assignee: Minoru Osuka
>             Fix For: ManifoldCF 1.5
>
>
> Exception occurs in HDFS Connector.
> Connection status:    Connection temporarily failed: IO exception: No 
> FileSystem for scheme: hdfs



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
