Thank you

I did some research based on your suggestions, and found the two
solutions below.
They may help anyone who hits the same FileSystem-closed issue I did.


 - Do not close the FileSystem in your cleanup method when you have JVM
   reuse turned on (mapred.job.reuse.jvm.num.tasks), since the closed
   instance is shared across reused tasks.

 - Set "fs.hdfs.impl.disable.cache" to true in the conf, so new
   instances don't get cached.
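
For reference, here is how those two settings could be applied. This is
only a sketch for a hadoop-0.20.x / hive-0.7.x setup like mine; please
check the property names against your own version's defaults.

```xml
<!-- core-site.xml (or hive-site.xml): disable the FileSystem cache for
     hdfs:// URIs, so each FileSystem.get() returns a fresh instance and
     a close() in one task cannot invalidate an instance another task is
     still using -->
<property>
  <name>fs.hdfs.impl.disable.cache</name>
  <value>true</value>
</property>

<!-- mapred-site.xml: the JVM reuse setting mentioned above; 1 disables
     reuse (one task per JVM), -1 reuses the JVM for unlimited tasks -->
<property>
  <name>mapred.job.reuse.jvm.num.tasks</name>
  <value>1</value>
</property>
```

The cache setting can also be toggled per session from the Hive CLI with
`set fs.hdfs.impl.disable.cache=true;` before running the query.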


Do you think they will solve my problem?

2012/7/12 Aniket Mokashi <aniket...@gmail.com>

> Can you share your query and use case?
>
> ~Aniket
>
>
> On Tue, Jul 10, 2012 at 9:39 AM, Harsh J <ha...@cloudera.com> wrote:
>
>> This appears to be a Hive issue (something probably called FS.close()
>> too early?). Redirecting to the Hive user lists as they can help
>> better with this.
>>
>> On Tue, Jul 10, 2012 at 9:59 PM, 안의건 <ahneui...@gmail.com> wrote:
>> > Hello. I have a problem with the filesystem closing.
>> >
>> > The filesystem gets closed while a Hive query is running.
>> > It is a 'select' query and the data size is about 1TB.
>> > I'm using hadoop-0.20.2 and hive-0.7.1.
>> >
>> > The error log says that a tmp file could not be deleted, or that a
>> > tmp path exception occurred.
>> >
>> > Is there any hadoop configuration I'm missing?
>> >
>> > Thank you
>> >
>> > [stderr logs]
>> > org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: Filesystem closed
>> > at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketFiles(FileSinkOperator.java:454)
>> > at org.apache.hadoop.hive.ql.exec.FileSinkOperator.closeOp(FileSinkOperator.java:636)
>> > at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:557)
>> > at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566)
>> > at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566)
>> > at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566)
>> > at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566)
>> > at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:566)
>> > at org.apache.hadoop.hive.ql.exec.ExecMapper.close(ExecMapper.java:193)
>> > at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:57)
>> > at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:358)
>> > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:307)
>> > at org.apache.hadoop.mapred.Child.main(Child.java:170)
>> > Caused by: java.io.IOException: Filesystem closed
>> > at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:226)
>> > at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:617)
>> > at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:453)
>> > at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:648)
>> > at org.apache.hadoop.fs.FileSystem.deleteOnExit(FileSystem.java:615)
>> > at org.apache.hadoop.hive.shims.Hadoop20Shims.fileSystemDeleteOnExit(Hadoop20Shims.java:68)
>> > at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createBucketFiles(FileSinkOperator.java:451)
>> > ... 12 more
>>
>>
>>
>> --
>> Harsh J
>>
>
>
>
> --
> "...:::Aniket:::... Quetzalco@tl"
>
