That certainly works, though if you plan to upgrade the underlying library,
you'll find that copying files with the correct versions into
$HADOOP_HOME/lib rapidly gets tedious, and subtle mistakes (e.g., forgetting
one machine) can lead to frustration.
Hello,
I have another solution for this: I copied all the required jar files into the
lib folder of each Hadoop node. This way the job jar stays small and takes
less time to distribute across the cluster.
Thanks,
Farhan
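[The per-node copy described above could be scripted roughly as follows; the hostnames, HADOOP_HOME path, and jar locations are placeholders, not details from the thread:]

```shell
# Sketch only: adjust the host list, HADOOP_HOME, and jar paths for your cluster.
HOSTS="node1 node2 node3"           # hypothetical node names
HADOOP_HOME=/opt/hadoop             # hypothetical install location

for h in $HOSTS; do
    # copy every required third-party jar into that node's lib folder
    scp lib/*.jar "$h:$HADOOP_HOME/lib/"
done
# Restart the TaskTrackers afterwards so the new jars are picked up.
```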
On Mon, Apr 13, 2009 at 7:22 PM, Nick Cen wrote:
Create a directory called 'lib' in your project's root dir, then put all the
3rd-party jars in it.
2009/4/14 Farhan Husain
Hello,
I am trying to use Pellet library for some OWL inferencing in my mapper
class. But I can't find a way to bundle the library jar files in my job jar
file. I am exporting my project as a jar file from Eclipse IDE. Will it work
if I create the jar manually and include all the jar files Pellet