For reasons that I have never bothered to investigate, I have never had a cluster work when hadoop.tmp.dir was not identical on all of the nodes.
My solution has always been to make a symbolic link so that hadoop.tmp.dir was identical everywhere, while on the machine in question the data really ended up in the file system/directory tree I needed it to appear in. Since this just works and takes a few seconds to set up, that is why I have never bothered to figure out why per-machine configuration of the hadoop.tmp.dir variable doesn't seem to work for me, from 15.1 through 19.0.

On Tue, Apr 21, 2009 at 8:36 AM, Steve Loughran <ste...@apache.org> wrote:

> Jim Twensky wrote:
>
>> Yes, here is how it looks:
>>
>> <property>
>>   <name>hadoop.tmp.dir</name>
>>   <value>/scratch/local/jim/hadoop-${user.name}</value>
>> </property>
>>
>> so I don't know why it still writes to /tmp. As a temporary workaround, I
>> created a symbolic link from /tmp/hadoop-jim to /scratch/...
>> and it works fine now, but if you think this might be considered a bug,
>> I can report it.
>>
>
> I've encountered this somewhere too; could be something is using the Java
> temp file API, which is not what you want. Try setting java.io.tmpdir to
> /scratch/local/tmp just to see if that makes it go away.


--
Alpha Chapters of my book on Hadoop are available
http://www.apress.com/book/view/9781430219422
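
To make Steve's java.io.tmpdir suggestion concrete, here is a minimal Java sketch (not from the thread; the class name and paths are illustrative) showing why anything that goes through the Java temp file API lands in /tmp regardless of hadoop.tmp.dir, and how overriding java.io.tmpdir, for example via HADOOP_OPTS in hadoop-env.sh, would change that.

// Illustrative sketch: File.createTempFile() consults the JVM's
// java.io.tmpdir property (usually /tmp on Linux), not Hadoop's
// hadoop.tmp.dir, which is why data can still show up under /tmp.
import java.io.File;
import java.io.IOException;

public class TmpDirCheck {
    public static void main(String[] args) throws IOException {
        // Defaults to /tmp unless overridden on the JVM command line,
        // e.g. -Djava.io.tmpdir=/scratch/local/tmp (set through
        // HADOOP_OPTS in hadoop-env.sh for the Hadoop daemons).
        System.out.println("java.io.tmpdir = "
                + System.getProperty("java.io.tmpdir"));

        // This file is created under java.io.tmpdir, no matter what
        // hadoop.tmp.dir points at in the Hadoop configuration.
        File f = File.createTempFile("hadoop-check-", ".tmp");
        System.out.println("temp file created at " + f.getAbsolutePath());
        f.deleteOnExit();
    }
}

Running it with and without -Djava.io.tmpdir set makes it easy to tell whether a stray file under /tmp is coming from the Java temp file API rather than from hadoop.tmp.dir.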