Hmm... there must be a different way 'cause we don't need to do that to run
Pig jobs.

On Tue, Nov 15, 2011 at 10:58 PM, Daan Gerits <daan.ger...@gmail.com> wrote:

> There might be different ways, but currently we store our jars on
> HDFS and register them from there. They are copied to the machines
> once the job starts. Is that an option?
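>
> For what it's worth, a minimal sketch of that setup, assuming a Pig
> version that accepts full hdfs:// URLs in REGISTER (the namenode host
> and jar path here are made up):
>
>     -- Register a UDF jar straight from HDFS; Pig copies it to the
>     -- task nodes when the job starts.
>     REGISTER 'hdfs://namenode:8020/libs/my-udfs.jar';
>     raw = LOAD '/data/input' USING PigStorage();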
>
> Daan.
>
> On 16 Nov 2011, at 07:24, Something Something wrote:
>
> > Until now we were manually copying our jars to all machines in the
> > Hadoop cluster.  This worked while our cluster was small, but now the
> > cluster is getting bigger.  What's the best way to start a Hadoop job
> > so that the jar is automatically distributed to all machines in the
> > cluster?
> >
> > I read the doc at:
> > http://hadoop.apache.org/common/docs/current/commands_manual.html#jar
> >
> > Would -libjars do the trick?  But we would need to use 'hadoop job'
> > for that, right?  Until now, we have been using 'hadoop jar' to start
> > all our jobs.
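> >
> > If -libjars also works with plain 'hadoop jar' when the driver goes
> > through ToolRunner (so GenericOptionsParser can pick the flag up), we
> > were picturing something like this (the jar, class, and path names
> > are made up):
> >
> >     hadoop jar myjob.jar com.example.MyDriver \
> >         -libjars deps/extra-lib.jar,deps/other-lib.jar \
> >         /input /output
> >
> >     import org.apache.hadoop.conf.Configuration;
> >     import org.apache.hadoop.conf.Configured;
> >     import org.apache.hadoop.mapreduce.Job;
> >     import org.apache.hadoop.util.Tool;
> >     import org.apache.hadoop.util.ToolRunner;
> >
> >     // Hypothetical driver: running through ToolRunner is what lets
> >     // GenericOptionsParser strip -libjars out of the args and ship
> >     // those jars to the task nodes via the distributed cache.
> >     public class MyDriver extends Configured implements Tool {
> >         public int run(String[] args) throws Exception {
> >             Job job = new Job(getConf(), "my job");
> >             job.setJarByClass(MyDriver.class);
> >             // ... set mapper, reducer, input/output paths here ...
> >             return job.waitForCompletion(true) ? 0 : 1;
> >         }
> >
> >         public static void main(String[] args) throws Exception {
> >             int rc = ToolRunner.run(new Configuration(), new MyDriver(), args);
> >             System.exit(rc);
> >         }
> >     }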
> >
> > Needless to say, we are just getting our feet wet with Hadoop, so we
> > appreciate your help with our dumb questions.
> >
> > Thanks.
> >
> > PS:  We use Pig a lot, which handles this automatically, so there must
> > be a clean way to do it.
>
>
