Mark:

You have a few options. You can:

1. Package dependent jars in a lib/ directory of the jar file (see the
   sketch just below this list).
2. Use something like Maven's assembly plugin to build a self-contained jar.
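
For option 1, a rough sketch of the manual layout (directory names here
are placeholders, not anything from your project). Hadoop adds any jars
found under lib/ inside the job jar to the task classpath:

  #!/bin/sh
  # staging area: compiled classes at the root, dependent jars under lib/
  mkdir -p build/lib
  cp -r classes/. build/
  cp deps/*.jar build/lib/
  # create the job jar from the staging area
  jar -cf MR-job.jar -C build .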

Either way, I'd strongly recommend using something like Maven to build your
artifacts so they're reproducible and in line with commonly used tools.
Hand-packaging files tends to be error-prone. This is less of a Hadoop-ism
and more of a general Java development issue, though.
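
For option 2, here's a minimal sketch of the assembly plugin configured
with the stock jar-with-dependencies descriptor. It goes in the
<build><plugins> section of your pom.xml; the surrounding project details
are your own:

  <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-assembly-plugin</artifactId>
    <configuration>
      <descriptorRefs>
        <descriptorRef>jar-with-dependencies</descriptorRef>
      </descriptorRefs>
    </configuration>
    <executions>
      <execution>
        <!-- bind to the package phase so "mvn package" builds the fat jar -->
        <phase>package</phase>
        <goals>
          <goal>single</goal>
        </goals>
      </execution>
    </executions>
  </plugin>

Running mvn package then leaves a *-jar-with-dependencies.jar in target/
that you can hand straight to "hadoop jar".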

On Fri, Feb 18, 2011 at 5:18 PM, Mark Kerzner <markkerz...@gmail.com> wrote:

> Hi,
>
> I have a script that I use to re-package all the jars (which are output
> in a dist directory by NetBeans), and it structures everything correctly
> into a single jar for running a MapReduce job. Here it is below, but I am
> not sure if it is the best practice. Besides, it hard-codes my paths. I am
> sure that there is a better way.
>
> #!/bin/sh
> # to be run from the project directory
> cd ../dist
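> # unpack the NetBeans-built jar so its classes and manifest sit in dist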
> jar -xf MR.jar
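> # re-jar everything in dist into a single jar, reusing the original manifest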
> jar -cmf META-INF/MANIFEST.MF  /home/mark/MR.jar *
> cd ../bin
> echo "Repackaged for Hadoop"
>
> Thank you,
> Mark
>



-- 
Eric Sammer
twitter: esammer
data: www.cloudera.com
