FileSystem objects are cached in the JVM. When the FS object is fetched with 
FileSystem.get(..) (SequenceFile uses it internally), the same cached fs 
object is returned as long as the URI's scheme and authority are the same.
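
As a quick illustration, here is a minimal sketch (the hdfs://namenode:9000 
URI and the class name are placeholders, not from your job) showing that two 
FileSystem.get(..) calls with the same scheme and authority hand back the 
very same cached object:

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class FsCacheDemo {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Same scheme ("hdfs") and same authority ("namenode:9000") -> same cache key.
        FileSystem fs1 = FileSystem.get(URI.create("hdfs://namenode:9000/a"), conf);
        FileSystem fs2 = FileSystem.get(URI.create("hdfs://namenode:9000/b"), conf);

        System.out.println(fs1 == fs2);   // prints true: both are the cached instance

        fs1.close();                      // closes the shared, cached instance
        fs2.open(new Path("/b"));         // now fails: java.io.IOException: Filesystem closed
      }
    }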

The fs cache key's equals() implementation is shown below:

      static boolean isEqual(Object a, Object b) {
        return a == b || (a != null && a.equals(b));
      }

      /** {@inheritDoc} */
      public boolean equals(Object obj) {
        if (obj == this) {
          return true;
        }
        if (obj != null && obj instanceof Key) {
          Key that = (Key)obj;
          return isEqual(this.scheme, that.scheme)
                 && isEqual(this.authority, that.authority)
                 && isEqual(this.ugi, that.ugi)
                 && (this.unique == that.unique);
        }
        return false;
      }


I think that in your case the URIs' schemes and authorities are the same, so 
your files all got the same fs object. Once the first one closes it, the 
others will definitely get this exception.
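
If you really need a FileSystem instance you can close per mapper, here is a 
sketch of two possible workarounds (whether these APIs/properties are 
available depends on your Hadoop version, and the hdfs://namenode:9000 URI is 
again a placeholder):

    Configuration conf = new Configuration();

    // Option 1: ask for an uncached instance that this code owns and may close safely.
    FileSystem privateFs =
        FileSystem.newInstance(URI.create("hdfs://namenode:9000/"), conf);

    // Option 2: disable caching for the hdfs scheme, so every FileSystem.get(..)
    // made with this conf returns a fresh instance (each one costs its own connection).
    conf.setBoolean("fs.hdfs.impl.disable.cache", true);
    FileSystem freshFs = FileSystem.get(URI.create("hdfs://namenode:9000/"), conf);

Otherwise the simplest fix is to not close the shared FileSystem in your own 
cleanup code at all and let the framework shut it down.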

Regards,
Uma

----- Original Message -----
From: Joey Echeverria <j...@cloudera.com>
Date: Thursday, September 29, 2011 10:34 pm
Subject: Re: FileSystem closed
To: common-user@hadoop.apache.org

> Do you close your FileSystem instances at all? IIRC, the FileSystem
> instance you use is a singleton and if you close it once, it's closed
> for everybody. My guess is you close it in your cleanup method and you
> have JVM reuse turned on.
> 
> -Joey
> 
> On Thu, Sep 29, 2011 at 12:49 PM, Mark question <markq2...@gmail.com> wrote:
> > Hello,
> >
> > I'm running 100 mappers sequentially on a single machine, where each
> > mapper opens 100 files at the beginning, then reads them one by one
> > sequentially and closes each after it is done. After executing 6
> > mappers, the 7th gives this error:
> >
> > java.io.IOException: Filesystem closed
> >    at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:297)
> >    at org.apache.hadoop.hdfs.DFSInputStream.close(DFSInputStream.java:426)
> >    at java.io.FilterInputStream.close(FilterInputStream.java:155)
> >    at org.apache.hadoop.io.SequenceFile$Reader.close(SequenceFile.java:1653)
> >    at Mapper_Reader20HM4.CleanUp(Mapper_Reader20HM4.java:124)
> >    at BFMapper20HM9.close(BFMapper20HM9.java:264)
> >    at BFMapRunner20HM9.run(BFMapRunner20HM9.java:95)
> >    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:397)
> >    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:330)
> >    at org.apache.hadoop.mapred.Child$4.run(Child.java:217)
> >    at java.security.AccessController.doPrivileged(Native Method)
> >    at javax.security.auth.Subject.doAs(Subject.java:396)
> >    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:742)
> >    at org.apache.hadoop.mapred.Child.main(Child.java:211)
> >
> > Can anybody give me a hint of what that could be?
> >
> > Thank you,
> > Mark
> >
> 
> 
> 
> -- 
> Joseph Echeverria
> Cloudera, Inc.
> 443.305.9434
>
