Hi,

There is a Maven plugin for packaging jobs for Hadoop.
I find it quite a convenient tool for this.

If you want to use it, add this to your pom.xml:

<plugin>
  <groupId>com.github.maven-hadoop.plugin</groupId>
  <artifactId>maven-hadoop-plugin</artifactId>
  <version>0.20.1</version>
  <configuration>
    <hadoopHome>your_hadoop_home_dir</hadoopHome>
  </configuration>
</plugin>
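
With that in the build section, I believe the plugin hooks into the normal
lifecycle (check its documentation for the exact goals), so the Hadoop-ready
artifact should come out of a plain build:

mvn clean package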

Junyoung Kim (juneng...@gmail.com)


On 02/19/2011 07:23 AM, Eric Sammer wrote:
Mark:

You have a few options. You can:

1. Package dependent jars in a lib/ directory of the jar file (layout sketch below).
2. Use something like Maven's assembly plugin to build a self-contained jar (pom sketch below).

Either way, I'd strongly recommend using something like Maven to build your
artifacts so they're reproducible and in line with commonly used tools.
Hand-packaging files tends to be error-prone. This is less of a Hadoop-ism
and more of a general Java development issue, though.
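
To make those concrete: for option 1, Hadoop's RunJar adds any jars under a
lib/ directory inside the job jar to the classpath at run time, so the job
jar only needs a layout like this (class and jar names here are placeholders):

MyJob.jar
  META-INF/MANIFEST.MF
  com/yourco/YourMapper.class
  lib/commons-lang-2.5.jar
  lib/other-dependency.jar

For option 2, a minimal sketch of the assembly plugin stanza, using the stock
jar-with-dependencies descriptor (use whatever plugin version is current):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-assembly-plugin</artifactId>
  <configuration>
    <descriptorRefs>
      <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
  </configuration>
</plugin>

Running mvn assembly:single (or binding the single goal to the package phase)
drops a *-jar-with-dependencies.jar into target/, and that jar can be handed
straight to hadoop jar.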

On Fri, Feb 18, 2011 at 5:18 PM, Mark Kerzner <markkerz...@gmail.com> wrote:

Hi,

I have a script that I use to re-package all the jars (which NetBeans
outputs into a dist directory), and it structures everything correctly into
a single jar for running a MapReduce job. It is below, but I am not sure it
is the best practice. Besides, it hard-codes my paths. I am sure there is a
better way.

#!/bin/sh
# to be run from the project directory
cd ../dist
# Unpack the job jar in place so its classes and manifest land in dist/
jar -xf MR.jar
# Re-create a single jar from everything in dist/, reusing the original
# manifest; note the hard-coded output path
jar -cmf META-INF/MANIFEST.MF /home/mark/MR.jar *
cd ../bin
echo "Repackaged for Hadoop"

Thank you,
Mark


