[ https://issues.apache.org/jira/browse/SQOOP-3136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15882076#comment-15882076 ]

Attila Szabo commented on SQOOP-3136:
-------------------------------------

Hey [~yalovyyi],

Many thanks for your contribution!

[~maugli]

> Sqoop should work well with non-default file systems
> ----------------------------------------------------
>
>                 Key: SQOOP-3136
>                 URL: https://issues.apache.org/jira/browse/SQOOP-3136
>             Project: Sqoop
>          Issue Type: Improvement
>          Components: connectors/hdfs
>    Affects Versions: 1.4.5
>            Reporter: Illya Yalovyy
>            Assignee: Illya Yalovyy
>         Attachments: SQOOP-3136.patch
>
>
> Currently Sqoop assumes the default file system when it comes to IO operations,
> which makes it hard to use other FileSystem implementations as the source or
> destination. Here is an example:
> {code}
> sqoop import --connect <JDBC CONNECTION> --table table1 --driver <JDBC DRIVER> \
>     --username root --password **** --delete-target-dir \
>     --target-dir s3a://some-bucket/tmp/sqoop
> ...
> 17/02/15 19:16:59 ERROR tool.ImportTool: Imported Failed: Wrong FS: s3a://some-bucket/tmp/sqoop, expected: hdfs://<DNS>:8020
> {code}
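
For context, the "Wrong FS" error above comes from Hadoop rejecting a Path whose URI scheme does not match the FileSystem object it was handed. The usual remedy in Hadoop-based code is to resolve the FileSystem from the target path rather than calling FileSystem.get(conf), which always returns the cluster's default file system. Below is a minimal sketch of that pattern; the class and method names are illustrative and are not taken from the attached patch:

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TargetDirCleaner {

    // Deletes --target-dir on whichever file system the path actually points to.
    public static void deleteTargetDir(Configuration conf, String targetDir)
            throws IOException {
        Path target = new Path(targetDir);

        // Resolving the FileSystem from the path honours the URI scheme
        // (s3a://, hdfs://, ...). FileSystem.get(conf) would always return the
        // default file system and fail with "Wrong FS" for s3a:// targets.
        FileSystem fs = target.getFileSystem(conf);

        if (fs.exists(target)) {
            fs.delete(target, true); // recursive delete
        }
    }
}
{code}

The same path-based resolution would apply anywhere the target or source directory is touched (existence checks, cleanup, output handling), so the scheme of --target-dir decides which FileSystem handles the IO.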



