You need to put the library jar on your classpath (e.g. using
HADOOP_CLASSPATH) as well. -libjars will ship it to the cluster
and put it on the classpath of your tasks, but not on the classpath of
your "driver" code.
-Todd
On Thu, Dec 9, 2010 at 10:29 PM, Vipul Pandey wrote:
> disclaimer : a newbie
FileSystem has a method createTempFile, or something like that. You can
pass it the option to delete the file when the jvm exits. If the jvm exits
abnormally, you may still get some files laying around, but under normal
circumstances they will get cleaned up.
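The method being half-remembered there is, I believe, java.io.File.createTempFile
(I don't think org.apache.hadoop.fs.FileSystem has an equivalent); a minimal
sketch:

import java.io.File;
import java.io.IOException;

public class TempFileDemo {
  public static void main(String[] args) throws IOException {
    // Created under java.io.tmpdir; the prefix and suffix are just examples.
    File tmp = File.createTempFile("mapper-", ".bin");
    tmp.deleteOnExit(); // removed on normal JVM exit; may linger after a crash
    System.out.println("scratch file: " + tmp.getAbsolutePath());
  }
}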
On Fri, Dec 10, 2010 at 8:46 AM, Koji wrote:
Hi Eric.
Try './tmp' (or current working directory).
Koji
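As I understand it, the reason './tmp' works is that a relative path lands
inside the task attempt's working directory, which the framework deletes when
the attempt finishes. A small sketch (the class and file names are
illustrative only):

import java.io.File;

public class TaskScratch {
  // Returns a file under ./tmp in the task's current working directory.
  // The framework removes the whole attempt directory when the task ends,
  // so anything written here goes with it.
  public static File scratchFile(String name) {
    File dir = new File("tmp"); // i.e. './tmp', relative to the task CWD
    dir.mkdirs();
    return new File(dir, name);
  }
}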
On 12/10/10 1:19 AM, "Eric" wrote:
Hi there,
I have a map-reduce job that processes binary files. I'm currently using /tmp/
as a temporary location to write data to and perform operations like
decompression. If a mapper fails, the temporary files are left behind on the
nodes.
Old bits:
Can you try adding 'org.apache.hadoop.io.serializer.JavaSerialization' to
the following config?
"C:\hadoop-0.20.2\src\core\core-default.xml"(87,9):
io.serializations
By default, only org.apache.hadoop.io.serializer.WritableSerialization is
included.
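If editing core-default.xml is undesirable, the same property can be set
per-job on the Configuration; a minimal sketch using the property name and
classes quoted above:

import org.apache.hadoop.conf.Configuration;

public class EnableJavaSerialization {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Keep the default WritableSerialization and append JavaSerialization.
    conf.set("io.serializations",
        "org.apache.hadoop.io.serializer.WritableSerialization,"
            + "org.apache.hadoop.io.serializer.JavaSerialization");
  }
}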
On Fri, Dec 10, 2010 at 7:22 AM:
Hi,
I know that Hadoop MR doesn't use Java object serialization and uses
Writable objects instead, and I understand the reasons the Hadoop MR
team chose that.
I was making my own modifications to Hadoop MR and was trying to transfer
my own object via an RPC method call between the
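The message is cut off above, but for anyone in the same spot: Hadoop RPC in
this codebase expects Writable arguments, so the usual route is to implement
Writable on your object. A minimal sketch (the two fields are invented for
illustration):

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;

public class MyPayload implements Writable {
  private long id;
  private String name;

  public MyPayload() {} // Writables need a no-arg constructor

  public void write(DataOutput out) throws IOException {
    out.writeLong(id);
    out.writeUTF(name);
  }

  public void readFields(DataInput in) throws IOException {
    id = in.readLong();
    name = in.readUTF();
  }
}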
Thanks Aaron,
I'll see if I can get started over the weekend. I take your point about
forcing use of a List.
Our solution didn't require anything particularly fancy to solve the problem,
and as it implements a version of List, it too maintains the deterministic
ordering:
public class MendeleyReduc
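The class is truncated above, so purely as a hedged guess at its general
shape (the class name and element type are invented): a List that is also a
Writable, serializing its elements in index order so the ordering is
deterministic across the write/readFields round trip.

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.util.ArrayList;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;

// Hypothetical sketch only; the real class is cut off above.
public class OrderedTextList extends ArrayList<Text> implements Writable {
  public void write(DataOutput out) throws IOException {
    out.writeInt(size());
    for (Text t : this) t.write(out); // written in list order
  }

  public void readFields(DataInput in) throws IOException {
    clear();
    int n = in.readInt();
    for (int i = 0; i < n; i++) {
      Text t = new Text();
      t.readFields(in);
      add(t); // read back in the same order
    }
  }
}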
Hi there,
I have a map-reduce job that processes binary files. I'm currently using
/tmp/ as a temporary location to write data to and perform operations like
decompression. If a mapper fails, the temporary files are left behind on the
nodes.
Is there a way to get a temp location from Hadoop, one that gets cleaned up
automatically when the task finishes?