Thanks Kyle, I tried the -libjars option on version 0.18, but it didn't
work. I suspect I had not set the classpath correctly, so I will try
again.

Anyway, putting the jars in the distributed cache solved my problem, but
the -libjars option seems a lot more useful and easier to use :)
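For anyone hitting the same issue, a rough sketch of the
distributed-cache approach is below (the HDFS path, class name, and
method name used for the driver hook are illustrative, not from this
thread; the jar must already be uploaded to HDFS):

```java
// Sketch of pushing a third-party jar to all task nodes via the
// DistributedCache classpath mechanism (Hadoop 0.18-era API).
// The HDFS path below is illustrative.
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.JobConf;

public class CacheJarExample {
    public static void addThirdPartyJar(JobConf conf) throws Exception {
        // The jar must already exist in HDFS; the framework copies it
        // to every task node and puts it on the task classpath.
        DistributedCache.addFileToClassPath(
            new Path("/user/hadoop/lib/third-party.jar"), conf);
    }
}
```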

thanks,
Taran

On Fri, Oct 17, 2008 at 10:09 AM, Kyle Lau <[EMAIL PROTECTED]> wrote:

> If I understand your problem correctly, one solution that worked for me
> is to use the -libjars flag when launching your hadoop job:
>
> bin/hadoop jar yourJob.jar YourMainClass -libjars <comma separated jars>
> <args>...
>
> I used this solution on my 5-slave cluster.  I needed the third-party
> jar files to be available on all nodes without manually distributing
> them from the master node where I launch my job.
>
> Kyle
>
>
>
>
> On Mon, 2008-10-13 at 12:11 -0700, Allen Wittenauer wrote:
> > On 10/13/08 11:06 AM, "Tarandeep Singh" <[EMAIL PROTECTED]> wrote:
> > > I want to push third party jar files that are required to execute my
> job, on
> > > slave machines. What is the best way to do this?
> >
> >     Use a DistributedCache as part of your job submission.
> >
>
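One detail worth noting: -libjars is a generic option parsed by
GenericOptionsParser, so it only takes effect when the driver class
implements Tool and is launched through ToolRunner. A minimal driver
sketch follows (the class and job names are illustrative, using the
0.18-era JobConf/JobClient API):

```java
// Minimal driver sketch; -libjars (and other generic options) are
// stripped by ToolRunner before the remaining args reach run().
// Class and job names here are illustrative.
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class MyJob extends Configured implements Tool {
    public int run(String[] args) throws Exception {
        // getConf() already reflects -libjars and other generic options.
        JobConf conf = new JobConf(getConf(), MyJob.class);
        conf.setJobName("myjob");
        // ... set mapper/reducer and input/output paths from args ...
        JobClient.runJob(conf);
        return 0;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new MyJob(), args));
    }
}
```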
