I'm experiencing the same problem. I was hoping there would be a reply to
this by now. Anyone? Bueller? (For reference, I've sketched the pattern in
question below the quoted message.)

-Kim

On Fri, Jul 16, 2010 at 1:58 AM, Jamie Cockrill <jamie.cockr...@gmail.com> wrote:

> Dear All,
>
> We recently upgraded from CDH3b1 to b2 and ever since, all our
> mapreduce jobs that use the DistributedCache have failed. Typically,
> we add files to the cache prior to job startup using
> addCacheFile(URI, conf), and then retrieve them on the other side
> using getLocalCacheFiles(conf). I believe the hadoop-core versions
> for these two releases are 0.20.2+228 and +320 respectively.
>
> We then open and read the files with a standard FileReader, passing
> toString() on the Path object as the constructor argument, which has
> worked fine up to now. However, we're now getting FileNotFound
> exceptions when the FileReader tries to open the file.
>
> Unfortunately the cluster is on an air-gapped network, so I can't
> paste the full stack trace, but the FileNotFoundException line comes
> out like:
>
> java.io.FileNotFoundException:
>
> /tmp/hadoop-hadoop/mapred/local/taskTracker/archive/master/path/to/my/file/filename.txt/filename.txt
>
> Note, the duplication of filename.txt is deliberate: that is exactly
> how the path appears in the exception. I'm not sure whether that's
> significant, as this has previously worked absolutely fine. Has
> anyone else experienced this? Apologies if this is a known issue;
> I've only just joined the list.
>
> Many thanks,
>
> Jamie
>
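
For anyone searching the archives, here is a minimal sketch of the pattern
Jamie describes, as I understand it (the class name and HDFS path are made
up for illustration; the API is the old static
org.apache.hadoop.filecache.DistributedCache from hadoop-core 0.20.2):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;

public class CachePatternSketch {

    // Driver side: register an HDFS file with the DistributedCache
    // before the job is submitted. The path is illustrative only.
    public static void addToCache(Configuration conf) {
        DistributedCache.addCacheFile(
                URI.create("hdfs:///user/hadoop/lookup.txt"), conf);
    }

    // Task side: look up the localised copies and read the first one
    // with a FileReader, passing toString() on the Path as the
    // constructor argument, as described in the post above.
    public static void readFromCache(Configuration conf) throws IOException {
        Path[] localFiles = DistributedCache.getLocalCacheFiles(conf);
        if (localFiles == null || localFiles.length == 0) {
            throw new IOException("no files in the DistributedCache");
        }
        BufferedReader reader =
                new BufferedReader(new FileReader(localFiles[0].toString()));
        try {
            String line;
            while ((line = reader.readLine()) != null) {
                // process each line of the cached file here
            }
        } finally {
            reader.close();
        }
    }
}

If the Path handed back by getLocalCacheFiles() already ends in
filename.txt/filename.txt, the doubling would seem to come from the
TaskTracker's localisation of the cache file rather than from code like
the above.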
