Sam,

Have you tried removing the trailing '/' in the value of fs.default.name?
Also, fs.default.name is deprecated in Hadoop 2.x and should be replaced by fs.defaultFS.
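
For example, something like this (just a sketch; keep your own namenode host and port):

  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode_hostname:9010</value>
  </property>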

-Abe


On Sun, Jul 21, 2013 at 10:42 PM, sam liu <[email protected]> wrote:

> Hi Jarek,
>
> The Sqoop import tool still failed on my Hadoop 2.x cluster; my
> core-site.xml is quoted below. Can you help take a look at it?
>
> Thanks!
>
>
> 2013/7/19 sam liu <[email protected]>
>
>> Hi Jarek,
>>
>> Yes, the hdfs-site.xml file is in ${HADOOP_HOME}/etc/hadoop. I also added
>> the Hadoop-related jars to the CLASSPATH:
>> '/home/hadoop-2.0.3-alpha/share/hadoop/common/hadoop-common-2.0.3-alpha.jar:/home/hadoop-2.0.3-alpha/share/hadoop/mapreduce/*.jar:/home/hadoop-2.0.3-alpha/share/hadoop/yarn/*.jar:/home/hadoop-2.0.3-alpha/share/hadoop/hdfs/hadoop-hdfs-2.0.3-alpha.jar:...',
>> but the Sqoop import tool still failed with the same exception.
>>
>> Below is my core-site.xml:
>> <configuration>
>>   <property>
>>     <name>fs.default.name</name>
>>     <value>hdfs://namenode_hostname:9010/</value>
>>   </property>
>>
>>   <property>
>>      <name>hadoop.tmp.dir</name>
>>      <value>/home/temp/hadoop/core_temp</value>
>>   </property>
>>
>> </configuration>
>>
>>
>> Thanks!
>>
>>
>> 2013/7/17 Jarek Jarcec Cecho <[email protected]>
>>
>>> Hi Sam,
>>> thank you for sharing the details. I'm assuming that the hdfs-site.xml
>>> file is in ${HADOOP_HOME}/etc/hadoop, is that correct? As hdfs-site.xml
>>> contains mostly server-side HDFS configuration, I would be more interested
>>> in the content of the core-site.xml file. I would also suggest exploring
>>> the classpath that Sqoop ends up with (for example using jinfo) and
>>> verifying whether the HDFS jars are available.
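>>>
>>> For example, something along these lines (a rough sketch; the grep
>>> pattern and process name will vary with your install):
>>>
>>>   # find the JVM that Sqoop launches, then dump its effective classpath
>>>   jps -lm | grep -i sqoop
>>>   jinfo <pid> | grep java.class.path | tr ':' '\n' | grep -i hdfs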
>>>
>>> Jarcec
>>>
>>> On Wed, Jul 17, 2013 at 10:54:05AM +0800, sam liu wrote:
>>> > Hi Jarek,
>>> >
>>> > Below are my configurations:
>>> >
>>> > 1) Env Parameters:
>>> > export HADOOP_HOME=/opt/hadoop-2.0.3-alpha
>>> > export PATH=$HADOOP_HOME/bin:$PATH
>>> > export PATH=$HADOOP_HOME/sbin:$PATH
>>> > export HADOOP_MAPRED_HOME=${HADOOP_HOME}
>>> > export HADOOP_COMMON_HOME=${HADOOP_HOME}
>>> > export HADOOP_HDFS_HOME=${HADOOP_HOME}
>>> > export YARN_HOME=${HADOOP_HOME}
>>> > export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
>>> > export HDFS_CONF_DIR=${HADOOP_HOME}/etc/hadoop
>>> > export YARN_CONF_DIR=${HADOOP_HOME}/etc/hadoop
>>> >
>>> > 2) hdfs-site.xml:
>>> > <?xml version="1.0"?>
>>> > <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
>>> >
>>> > <configuration>
>>> >
>>> >   <property>
>>> >     <name>dfs.replication</name>
>>> >     <value>1</value>
>>> >   </property>
>>> >
>>> >   <property>
>>> >     <name>dfs.name.dir</name>
>>> >     <value>/home/temp/hadoop/dfs_name_dir</value>
>>> >   </property>
>>> >
>>> >   <property>
>>> >     <name>dfs.data.dir</name>
>>> >     <value>/home/temp/hadoop/dfs_data_dir</value>
>>> >   </property>
>>> >
>>> >   <property>
>>> >     <name>dfs.webhdfs.enabled</name>
>>> >     <value>true</value>
>>> >   </property>
>>> > </configuration>
>>> >
>>> >
>>> >
>>> >
>>> > 2013/7/17 Jarek Jarcec Cecho <[email protected]>
>>> >
>>> > > Hi sir,
>>> > > the exception suggests that the FileSystem implementation for your
>>> > > default FS can't be found. I would check the HDFS configuration to
>>> > > ensure that it's configured properly, and that Sqoop is picking up
>>> > > the configuration and all the HDFS libraries.
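>>> > >
>>> > > For example (a quick sanity check; both commands ship with Hadoop 2.x):
>>> > >
>>> > >   # confirm which default FS the Hadoop config resolves to
>>> > >   hdfs getconf -confKey fs.defaultFS
>>> > >   # confirm the HDFS jars are actually on the Hadoop classpath
>>> > >   hadoop classpath | tr ':' '\n' | grep hdfs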
>>> > >
>>> > > Jarcec
>>> > >
>>> > > On Tue, Jul 16, 2013 at 11:29:07AM +0800, sam liu wrote:
>>> > > > I also tried another Sqoop build, sqoop-1.4.2.bin__hadoop-2.0.0-alpha,
>>> > > > on hadoop 2.0.3-alpha, but it failed as well. The exception is the
>>> > > > same as above:
>>> > > > 'java.lang.UnsupportedOperationException: Not implemented by the
>>> > > > DistributedFileSystem FileSystem implementation'.
>>> > > >
>>> > > > This issue has been blocking me for quite a while...
>>> > > >
>>> > > >
>>> > > > 2013/6/21 Abraham Elmahrek <[email protected]>
>>> > > >
>>> > > > > Hey Sam,
>>> > > > >
>>> > > > > My understanding is that Sqoop 1.4.3 should work with Hadoop 2.0.x
>>> > > > > (which would include Hadoop 2.0.4-alpha). Anyway, there seems to be
>>> > > > > some version conflict going on here. Do you have any other builds
>>> > > > > of Sqoop installed?
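>>> > > > >
>>> > > > > For example, you could scan for stray builds or an old Hadoop 1.x
>>> > > > > jar (just a sketch; narrow the search roots to your install dirs):
>>> > > > >
>>> > > > >   # a second Sqoop build, or a hadoop-core 1.x jar mixed in with
>>> > > > >   # Hadoop 2.x jars, can produce exactly this kind of mismatch
>>> > > > >   find / -name 'sqoop-1.4*.jar' 2>/dev/null
>>> > > > >   find / -name 'hadoop-core-*.jar' 2>/dev/null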
>>> > > > >
>>> > > > > -Abe
>>> > > > >
>>> > > > >
>>> > > > > On Thu, Jun 20, 2013 at 6:39 PM, sam liu <[email protected]> wrote:
>>> > > > >
>>> > > > >> Could anyone provide an answer? We are deciding whether or not to
>>> > > > >> leverage Sqoop 1.4.3 on YARN.
>>> > > > >>
>>> > > > >> Thanks!
>>> > > > >>
>>> > > > >>
>>> > > > >> 2013/6/20 sam liu <[email protected]>
>>> > > > >>
>>> > > > >>> Hi,
>>> > > > >>>
>>> > > > >>> The Sqoop website says Sqoop 1.4.3 supports Hadoop 2.0, but I
>>> > > > >>> failed to run the import tool against hadoop-2.0.4-alpha using
>>> > > > >>> sqoop-1.4.3.bin__hadoop-2.0.0-alpha. Can anyone help with a
>>> > > > >>> triage/suggestion? Thanks in advance!
>>> > > > >>>
>>> > > > >>> - Command:
>>> > > > >>> sqoop import --connect jdbc:db2://host:50000/SAMPLE --table
>>> > > > >>> DB2ADMIN.DB2TEST_TBL001 --username user --password pwd -m 1
>>> > > > >>> --target-dir /tmp/DB2TEST_TBL001
>>> > > > >>>
>>> > > > >>> - Exception:
>>> > > > >>> 13/06/19 23:28:28 INFO manager.SqlManager: Executing SQL statement:
>>> > > > >>> SELECT t.* FROM DB2ADMIN.DB2TEST_TBL001 AS t WHERE 1=0
>>> > > > >>> 13/06/19 23:28:28 ERROR sqoop.Sqoop: Got exception running Sqoop:
>>> > > > >>> java.lang.UnsupportedOperationException: Not implemented by the
>>> > > > >>> DistributedFileSystem FileSystem implementation
>>> > > > >>> java.lang.UnsupportedOperationException: Not implemented by the
>>> > > > >>> DistributedFileSystem FileSystem implementation
>>> > > > >>>         at org.apache.hadoop.fs.FileSystem.getScheme(FileSystem.java:207)
>>> > > > >>>         at org.apache.hadoop.fs.FileSystem.loadFileSystems(FileSystem.java:2245)
>>> > > > >>>         at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2255)
>>> > > > >>>         at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2272)
>>> > > > >>>         at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:86)
>>> > > > >>>         at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2311)
>>> > > > >>>         at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2293)
>>> > > > >>>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:317)
>>> > > > >>>         at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:288)
>>> > > > >>>         at org.apache.sqoop.mapreduce.JobBase.cacheJars(JobBase.java:134)
>>> > > > >>>         at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:197)
>>> > > > >>>         at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:413)
>>> > > > >>>         at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:380)
>>> > > > >>>         at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:453)
>>> > > > >>>         at org.apache.sqoop.Sqoop.run(Sqoop.java:145)
>>> > > > >>>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>>> > > > >>>         at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
>>> > > > >>>         at org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
>>> > > > >>>         at org.apache.sqoop.Sqoop.runTool(Sqoop.java:229)
>>> > > > >>>         at org.apache.sqoop.Sqoop.main(Sqoop.java:238)
>>> > > > >>>         at com.cloudera.sqoop.Sqoop.main(Sqoop.java:57)
>>> > > > >>>
>>> > > > >>
>>> > > > >>
>>> > > > >>
>>> > > > >> --
>>> > > > >>
>>> > > > >> Sam Liu
>>> > > > >>
>>> > > > >
>>> > > > >
>>> > > >
>>> > > >
>>> > > > --
>>> > > >
>>> > > > Sam Liu
>>> > >
>>> >
>>> >
>>> >
>>> > --
>>> >
>>> > Sam Liu
>>>
>>
>>
>>
>> --
>>
>> Sam Liu
>>
>
>
>
> --
>
> Sam Liu
>
