Hi,
This is the problem with the Hadoop 0.17.2 release: it creates a _logs
directory inside the output directory of every map-reduce job. That _logs
directory is not part of your output, so you have to explicitly make sure
it isn't read by your map job.
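 
If it helps, here is a minimal sketch of one way to do that: list only the
part files (skipping _logs and anything else whose name starts with an
underscore) and open them yourself, rather than letting getReaders walk the
whole directory. This uses FileSystem.listStatus with a PathFilter, which
may be named slightly differently on 0.17.x; the class below is illustrative,
not code from your job.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;
import org.apache.hadoop.io.SequenceFile;

public class PartFileReaders {

  // Accept only entries whose names do not start with '_', so _logs
  // (and any other side directories) are skipped.
  private static final PathFilter SKIP_UNDERSCORE = new PathFilter() {
    public boolean accept(Path p) {
      return !p.getName().startsWith("_");
    }
  };

  // Open a SequenceFile.Reader for every part file in a completed job's
  // output directory.
  public static SequenceFile.Reader[] openReaders(Configuration conf, Path outputDir)
      throws IOException {
    FileSystem fs = outputDir.getFileSystem(conf);
    FileStatus[] parts = fs.listStatus(outputDir, SKIP_UNDERSCORE);
    List<SequenceFile.Reader> readers = new ArrayList<SequenceFile.Reader>();
    for (FileStatus part : parts) {
      if (!part.isDir()) {
        readers.add(new SequenceFile.Reader(fs, part.getPath(), conf));
      }
    }
    return readers.toArray(new SequenceFile.Reader[readers.size()]);
  }
}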

Thanks
Pallavi

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Chris
Dyer
Sent: Thursday, September 18, 2008 10:20 AM
To: core-user@hadoop.apache.org
Subject: Trouble with SequenceFileOutputFormat.getReaders

Hi all-
I am having trouble with SequenceFileOutputFormat.getReaders on a
Hadoop 0.17.2 cluster.  From within a second map process, I am trying to
open a set of SequenceFiles that was created by an earlier map process
which has already completed, passing in the job configuration of the
currently running map process (not the configuration of the job that
created the sequence files) and the path to that job's output.  When I run
locally, this works fine, but when I run remotely on the cluster (using
the cluster's HDFS), I get the following IOException:

java.io.IOException: Cannot open filename
/user/redpony/Model1.data.0/_logs

However, the following works:

hadoop dfs -ls /user/redpony/Model1.data.0/_logs
Found 1 items
/user/redpony/Model1.data.0/_logs/history   <dir>   2008-09-18 00:43   rwxrwxrwx   redpony   supergroup
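 
To be concrete, the call I'm making is roughly along these lines (a sketch
rather than the exact code from my job; the path is the one from the error
message above):

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.SequenceFileOutputFormat;

public class ReadModel {
  public static void main(String[] args) throws Exception {
    // Configuration of the currently running job, not of the job that
    // wrote the sequence files.
    JobConf conf = new JobConf();
    // Output directory of the earlier, completed map process.
    Path modelDir = new Path("/user/redpony/Model1.data.0");
    // This tries to open every entry under modelDir as a SequenceFile,
    // which is where the _logs entry triggers the IOException shown above.
    SequenceFile.Reader[] readers = SequenceFileOutputFormat.getReaders(conf, modelDir);
    System.out.println("Opened " + readers.length + " readers");
  }
}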

This is probably something dumb, and quite likely related to me not
having my settings configured properly, but I'm completely at a loss
for how to proceed.  Any ideas?

Thanks!
Chris
