[ https://issues.apache.org/jira/browse/SQOOP-1226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15415320#comment-15415320 ]

Jordi commented on SQOOP-1226:
------------------------------

The same issue happens in the CryptoFileLoader.java class, which extends 
FilePasswordLoader and overrides the loadPassword method.

Why haven't you applied the same fix to the CryptoFileLoader class?

In my project we are hitting the same error, but caused by CryptoFileLoader.java, 
because in our case the password file is also encrypted.

We are using Sqoop 1.4.6.
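
For reference, here is a minimal sketch of the pattern I mean. The verifyPath 
and readBytes helpers are the ones I see in FilePasswordLoader, and decrypt() 
merely stands in for CryptoFileLoader's real decryption logic, so treat this as 
an illustration rather than the actual source:

{noformat}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.sqoop.util.password.FilePasswordLoader;

public class CryptoFileLoaderSketch extends FilePasswordLoader {

  // Problematic shape: FileSystem.get() returns a *cached* instance that is
  // shared with every other caller in the JVM, including the Oozie launcher.
  // Closing it here produces the "Filesystem closed" error at task cleanup.
  @Override
  public String loadPassword(String p, Configuration configuration) throws IOException {
    FileSystem fs = FileSystem.get(configuration);
    byte[] fileBytes;
    try {
      Path path = new Path(p);
      verifyPath(fs, path);
      fileBytes = readBytes(fs, path);
    } finally {
      fs.close(); // BUG: invalidates the shared cached instance for everyone
    }
    return decrypt(fileBytes, configuration);
  }

  // Shape of the SQOOP-1226 fix as applied to FilePasswordLoader: simply stop
  // closing the shared instance and let the FileSystem cache own its lifecycle.
  public String loadPasswordFixed(String p, Configuration configuration) throws IOException {
    FileSystem fs = FileSystem.get(configuration);
    Path path = new Path(p);
    verifyPath(fs, path);
    return decrypt(readBytes(fs, path), configuration);
  }

  // Placeholder for CryptoFileLoader's actual decryption step.
  private String decrypt(byte[] bytes, Configuration configuration) {
    return new String(bytes);
  }
}
{noformat}

If CryptoFileLoader still has the fs.close() in a finally block, the same 
one-line removal should fix it there as well.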



> --password-file option triggers FileSystemClosed exception at end of Oozie 
> action
> ---------------------------------------------------------------------------------
>
>                 Key: SQOOP-1226
>                 URL: https://issues.apache.org/jira/browse/SQOOP-1226
>             Project: Sqoop
>          Issue Type: Bug
>    Affects Versions: 1.4.3
>         Environment: Centos 6.2 + jdk-1.6.0_31-fcs.x86_64
>            Reporter: David Morel
>            Assignee: Jarek Jarcec Cecho
>             Fix For: 1.4.5
>
>         Attachments: SQOOP-1226.patch, SQOOP-1226.patch
>
>
> When using the --password-file option, a Sqoop action running inside an Oozie 
> workflow will ERROR out at the very end, like so:
> {noformat}
> 2013-10-31 13:38:45,095 INFO org.apache.sqoop.hive.HiveImport: Hive import 
> complete.
> 2013-10-31 13:38:45,098 INFO org.apache.sqoop.hive.HiveImport: Export 
> directory is empty, removing it.
> 2013-10-31 13:38:45,213 INFO org.apache.hadoop.mapred.TaskLogsTruncater: 
> Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
> 2013-10-31 13:38:45,217 ERROR 
> org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException 
> as:mapred (auth:SIMPLE) cause:java.io.IOException: Filesystem closed
> 2013-10-31 13:38:45,218 WARN org.apache.hadoop.mapred.Child: Error running 
> child
> java.io.IOException: Filesystem closed
>       at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:565)
>       at org.apache.hadoop.hdfs.DFSInputStream.close(DFSInputStream.java:589)
>       at java.io.FilterInputStream.close(FilterInputStream.java:155)
>       at org.apache.hadoop.util.LineReader.close(LineReader.java:149)
>       at 
> org.apache.hadoop.mapred.LineRecordReader.close(LineRecordReader.java:243)
>       at 
> org.apache.hadoop.mapred.MapTask$TrackedRecordReader.close(MapTask.java:222)
>       at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:421)
>       at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
>       at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
>       at java.security.AccessController.doPrivileged(Native Method)
>       at javax.security.auth.Subject.doAs(Subject.java:396)
>       at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
>       at org.apache.hadoop.mapred.Child.main(Child.java:262)
> 2013-10-31 13:38:45,234 INFO org.apache.hadoop.mapred.Task: Runnning cleanup 
> for the task
> {noformat}
> With the --password option, the job completes with no error. I believe the 
> --password-file handling closes the FileSystem instance, which happens to be 
> shared with the Oozie launcher, so the launcher can no longer write to it on 
> completion. The workaround I found was adding:
> {noformat}
>   <property>
>     <name>fs.hdfs.impl.disable.cache</name>
>     <value>true</value>
>   </property>
> {noformat}
> to the Sqoop action definition in the Oozie workflow. That works, but isn't 
> really handy.
> Details are at 
> https://groups.google.com/a/cloudera.org/d/msg/cdh-user/pdsxiy5C_IY/OD8wR0rhHgMJ
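> For completeness, here is a sketch of where that property sits in the 
> workflow; the action name, job-tracker/name-node values, and the command are 
> placeholders, not taken from my actual workflow:
> {noformat}
> <action name="sqoop-import">
>   <sqoop xmlns="uri:oozie:sqoop-action:0.2">
>     <job-tracker>${jobTracker}</job-tracker>
>     <name-node>${nameNode}</name-node>
>     <configuration>
>       <property>
>         <name>fs.hdfs.impl.disable.cache</name>
>         <value>true</value>
>       </property>
>     </configuration>
>     <command>import --connect ... --password-file ...</command>
>   </sqoop>
>   <ok to="end"/>
>   <error to="fail"/>
> </action>
> {noformat}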



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
