I am using Hadoop indirectly through PIG, and some of the UDFs (defined
by me) need other jars at runtime (around 150), some of which have
conflicting resource names. Hence, unpacking all of them and
repacking them into a single jar doesn't work. My solution is to create a
single top-level jar tha
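One common way to combine many dependency jars without unpacking them is a "manifest-only" wrapper jar whose Class-Path manifest entry references the dependency jars in place, so conflicting resource names never collide inside one archive. The sketch below is an assumption, not necessarily the poster's actual solution; the jar names and paths are placeholders.

```python
import zipfile

def build_wrapper_jar(jar_path, dep_jars, main_class=None):
    """Create a 'manifest-only' wrapper jar whose Class-Path entry points
    at the dependency jars on disk (relative URLs, per the JAR spec), so
    their conflicting resources are never merged into one archive."""
    cp = "Class-Path: " + " ".join(dep_jars)
    # Simplified manifest line wrapping: the JAR spec caps lines at 72
    # bytes, with continuation lines starting with a single space.
    lines = []
    while len(cp) > 70:
        lines.append(cp[:70])
        cp = " " + cp[70:]
    lines.append(cp)
    manifest = "Manifest-Version: 1.0\r\n" + "\r\n".join(lines) + "\r\n"
    if main_class:
        manifest += "Main-Class: %s\r\n" % main_class
    manifest += "\r\n"
    with zipfile.ZipFile(jar_path, "w") as jf:
        jf.writestr("META-INF/MANIFEST.MF", manifest)
    return manifest

# Hypothetical usage: wrap two placeholder dependency jars.
m = build_wrapper_jar("wrapper.jar", ["lib/a.jar", "lib/b.jar"])
```

Note that Class-Path entries are resolved relative to the wrapper jar's own location, so the dependency jars must be shipped alongside it.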
Have you considered using something higher-level like PIG or Hive? Are
there reasons why you need to process at this low level?
-Original Message-
From: Aaron Baff [mailto:aaron.b...@telescope.tv]
Sent: Friday, September 10, 2010 11:50 PM
To: common-user@hadoop.apache.org
Subject: Custom
You will probably have to use the DistributedCache to distribute your jar
to all the nodes too. Read the DistributedCache documentation; then on
each node you can add the new jar to the java.library.path through
mapred.child.java.opts.
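As a hedged sketch of what that property can look like in mapred-site.xml: java.library.path is for native libraries, while jars normally go on the classpath instead (e.g. via DistributedCache.addFileToClassPath). The path below is a placeholder, not a value from the original message.

```xml
<!-- Hypothetical fragment of mapred-site.xml; /path/to/native/libs
     is a placeholder. These options are passed to every child JVM. -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx512m -Djava.library.path=/path/to/native/libs</value>
</property>
```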
You need to do something like the following in mapred-site.xml, where
fs-uri is