Not sure if anyone else answered...

1. You need to run hadoop dfsadmin -finalizeUpgrade. Be careful: once you 
finalize, you can't roll back to the pre-upgrade state.

http://wiki.apache.org/hadoop/Hadoop_Upgrade
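
If I remember right, you can check how far along the upgrade is before 
finalizing with the -upgradeProgress option, e.g.:

    hadoop dfsadmin -upgradeProgress status
    hadoop dfsadmin -finalizeUpgrade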

I don't know about 2.

-Michael

On 12/3/08 5:49 PM, "Songting Chen" <[EMAIL PROTECTED]> wrote:

1. The namenode webpage shows:

   Upgrades: Upgrade for version -18 has been completed.
   Upgrade is not finalized.

2. SequenceFile.Writer failed when trying to create a new file, with the error 
below. (We have two Hadoop clusters; both show issue 1, but only one of them 
hits issue 2, the other is fine.) Any idea what's going on?

Thanks,
-Songting

java.io.IOException: Filesystem closed
        at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:198)
        at org.apache.hadoop.hdfs.DFSClient.access$600(DFSClient.java:65)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.closeInternal(DFSClient.java:3084)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.close(DFSClient.java:3053)
        at org.apache.hadoop.hdfs.DFSClient$LeaseChecker.close(DFSClient.java:942)
        at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:210)
        at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:243)
        at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:1413)
        at org.apache.hadoop.fs.FileSystem.closeAll(FileSystem.java:236)
        at org.apache.hadoop.fs.FileSystem$ClientFinalizer.run(FileSystem.java:221)
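
For context, the writer is created through the standard SequenceFile.createWriter() 
path; a minimal sketch of that write pattern is below (the output path and the 
Text/IntWritable key/value types are placeholders for illustration, not the 
actual job code):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    public class SeqFileWriteSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();

            // FileSystem.get() hands back a cached, shared instance.  If any
            // code path calls fs.close() on it, or the JVM shutdown hook
            // (FileSystem$ClientFinalizer) runs while a stream is still open,
            // later operations fail with "java.io.IOException: Filesystem closed".
            FileSystem fs = FileSystem.get(conf);

            Path out = new Path("/tmp/example.seq");  // placeholder path
            SequenceFile.Writer writer =
                SequenceFile.createWriter(fs, conf, out, Text.class, IntWritable.class);
            try {
                writer.append(new Text("key"), new IntWritable(1));
            } finally {
                // Close the writer explicitly rather than relying on the
                // shutdown hook to flush it, and avoid closing the shared
                // FileSystem instance while other code may still be using it.
                writer.close();
            }
        }
    }

The stack trace above is the shutdown-hook path (ClientFinalizer -> closeAll -> 
DFSClient.close) running into a stream that was still open, which would be 
consistent with a writer not being closed before the client or JVM shut down.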


