[ https://issues.apache.org/jira/browse/SQOOP-1226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15773706#comment-15773706 ]
Tom Harrison commented on SQOOP-1226:
-------------------------------------
I had the same problem as Jordi (CryptoFileLoader), and the workaround in the
description, adding the `fs.hdfs.impl.disable.cache` property, worked for me in
Sqoop 1.4.6 (Thanks!)
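In my case I set it the same way as in the description. For completeness, the
same property can also be passed per-invocation as a generic Hadoop option on
the Sqoop command line; the arguments below are only placeholders:
{noformat}
sqoop import -Dfs.hdfs.impl.disable.cache=true \
    --connect ... \
    --password-file ...
{noformat}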
> --password-file option triggers FileSystemClosed exception at end of Oozie
> action
> ---------------------------------------------------------------------------------
>
> Key: SQOOP-1226
> URL: https://issues.apache.org/jira/browse/SQOOP-1226
> Project: Sqoop
> Issue Type: Bug
> Affects Versions: 1.4.3
> Environment: Centos 6.2 + jdk-1.6.0_31-fcs.x86_64
> Reporter: David Morel
> Assignee: Jarek Jarcec Cecho
> Fix For: 1.4.5
>
> Attachments: SQOOP-1226.patch, SQOOP-1226.patch
>
>
> When using the --password-file option, a Sqoop action running inside an Oozie
> workflow will ERROR out at the very end, like so:
> {noformat}
> 2013-10-31 13:38:45,095 INFO org.apache.sqoop.hive.HiveImport: Hive import complete.
> 2013-10-31 13:38:45,098 INFO org.apache.sqoop.hive.HiveImport: Export directory is empty, removing it.
> 2013-10-31 13:38:45,213 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
> 2013-10-31 13:38:45,217 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:mapred (auth:SIMPLE) cause:java.io.IOException: Filesystem closed
> 2013-10-31 13:38:45,218 WARN org.apache.hadoop.mapred.Child: Error running child
> java.io.IOException: Filesystem closed
>     at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:565)
>     at org.apache.hadoop.hdfs.DFSInputStream.close(DFSInputStream.java:589)
>     at java.io.FilterInputStream.close(FilterInputStream.java:155)
>     at org.apache.hadoop.util.LineReader.close(LineReader.java:149)
>     at org.apache.hadoop.mapred.LineRecordReader.close(LineRecordReader.java:243)
>     at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.close(MapTask.java:222)
>     at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:421)
>     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
>     at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:396)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>     at org.apache.hadoop.mapred.Child.main(Child.java:262)
> 2013-10-31 13:38:45,234 INFO org.apache.hadoop.mapred.Task: Runnning cleanup for the task
> {noformat}
> With the --password option, the job completes with no error. I believe the
> --password-file handling closes the FileSystem instance it used to read the
> file, and that instance happens to be the cached one shared with the Oozie
> launcher, which then can't write to it anymore on completion. The solution I
> found was adding:
> {noformat}
> <property>
>   <name>fs.hdfs.impl.disable.cache</name>
>   <value>true</value>
> </property>
> {noformat}
> in the Sqoop action definition in the Oozie workflow (see the sketch below).
> That works, but isn't really handy.
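> For reference, the property sits inside the action's <configuration> block.
> The action name, schema version, and command below are just placeholders, not
> taken from my actual workflow:
> {noformat}
> <action name="sqoop-import">
>   <sqoop xmlns="uri:oozie:sqoop-action:0.2">
>     <job-tracker>${jobTracker}</job-tracker>
>     <name-node>${nameNode}</name-node>
>     <configuration>
>       <property>
>         <name>fs.hdfs.impl.disable.cache</name>
>         <value>true</value>
>       </property>
>     </configuration>
>     <command>import --connect ... --password-file ...</command>
>   </sqoop>
>   <ok to="end"/>
>   <error to="fail"/>
> </action>
> {noformat}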
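> The shared-cache behaviour behind this is easy to reproduce outside Sqoop.
> This is only an illustrative sketch (not Sqoop's actual code), assuming
> fs.defaultFS points at an HDFS cluster:
> {noformat}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import java.io.IOException;
>
> // FileSystem.get() returns one cached instance per (scheme, authority, ugi),
> // so close() by one component closes it for every other holder as well.
> public class FsCacheDemo {
>   public static void main(String[] args) throws IOException {
>     Configuration conf = new Configuration();
>
>     FileSystem fs1 = FileSystem.get(conf); // e.g. the password-file reader
>     FileSystem fs2 = FileSystem.get(conf); // e.g. the Oozie launcher's FS
>     System.out.println(fs1 == fs2);        // true: both hit the cache
>
>     fs1.close(); // "polite" cleanup by the first component...
>
>     // ...and the second holder now fails with "Filesystem closed".
>     fs2.exists(new Path("/tmp")); // throws java.io.IOException on HDFS
>
>     // With fs.hdfs.impl.disable.cache=true, each get() returns a fresh,
>     // independent instance, which is why the workaround above helps.
>   }
> }
> {noformat}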
> Details are at
> https://groups.google.com/a/cloudera.org/d/msg/cdh-user/pdsxiy5C_IY/OD8wR0rhHgMJ