This sort of error will become much harder to make once we upgrade to
Hadoop 0.2 and replace most uses of java.io.File with
org.apache.hadoop.fs.Path.
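The class of error being discussed is that java.io.File only ever consults the local filesystem, so checks like exists() silently report false for data that lives on a distributed filesystem. A minimal stand-alone sketch of the pitfall (the hdfs://namenode:9000/... path is hypothetical, just for illustration):

```java
import java.io.File;

public class FileExistsPitfall {
    public static void main(String[] args) {
        // java.io.File resolves paths against the *local* filesystem only,
        // so a DFS location (hypothetical URL below) always appears absent,
        // even when the data exists on the distributed filesystem.
        File dfsPath = new File("hdfs://namenode:9000/user/crawl/segments");
        System.out.println(dfsPath.exists()); // prints "false"
    }
}
```

Routing such checks through the FileSystem abstraction (or, after the upgrade, through org.apache.hadoop.fs.Path) makes the filesystem explicit, so the compiler catches this kind of mix-up.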
Doug
[EMAIL PROTECTED] wrote:
Author: ab
Date: Wed May 3 19:42:02 2006
New Revision: 399515
URL: http://svn.apache.org/viewcvs?rev=399515&view=rev
Log:
Use the FileSystem instead of java.io.File.exists().
Modified:
lucene/nutch/trunk/src/java/org/apache/nutch/segment/SegmentReader.java
Modified: lucene/nutch/trunk/src/java/org/apache/nutch/segment/SegmentReader.java
URL: http://svn.apache.org/viewcvs/lucene/nutch/trunk/src/java/org/apache/nutch/segment/SegmentReader.java?rev=399515&r1=399514&r2=399515&view=diff
==============================================================================
--- lucene/nutch/trunk/src/java/org/apache/nutch/segment/SegmentReader.java (original)
+++ lucene/nutch/trunk/src/java/org/apache/nutch/segment/SegmentReader.java Wed May 3 19:42:02 2006
@@ -502,7 +502,7 @@
}
}
Configuration conf = NutchConfiguration.create();
- FileSystem fs = FileSystem.get(conf);
+ final FileSystem fs = FileSystem.get(conf);
SegmentReader segmentReader = new SegmentReader(conf, co, fe, ge, pa, pd, pt);
// collect required args
switch (mode) {
@@ -529,7 +529,9 @@
File dir = new File(args[++i]);
File[] files = fs.listFiles(dir, new FileFilter() {
public boolean accept(File pathname) {
- if (pathname.isDirectory()) return true;
+ try {
+ if (fs.isDirectory(pathname)) return true;
+ } catch (IOException e) {};
return false;
}
});
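The second hunk wraps the directory test in try/catch because, unlike java.io.File.isDirectory(), the FileSystem methods declare a checked IOException, which an anonymous FileFilter cannot propagate. A minimal stand-alone sketch of the same pattern, using java.nio.file.Files (which also throws checked IOException) as a stand-in for Hadoop's FileSystem:

```java
import java.io.File;
import java.io.FileFilter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.attribute.BasicFileAttributes;

public class CheckedFilter {
    public static void main(String[] args) {
        File[] dirs = new File(".").listFiles(new FileFilter() {
            public boolean accept(File pathname) {
                try {
                    // Like FileSystem.isDirectory(), readAttributes throws a
                    // checked IOException, so the filter must catch it and
                    // treat an unreadable entry as "not a directory".
                    BasicFileAttributes attrs = Files.readAttributes(
                            pathname.toPath(), BasicFileAttributes.class);
                    return attrs.isDirectory();
                } catch (IOException e) {
                    return false;
                }
            }
        });
        System.out.println(dirs != null); // prints "true"
    }
}
```

Swallowing the exception and returning false mirrors the committed change: a path that cannot be inspected is simply skipped by the filter.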