I am having some trouble understanding how this whole thing works.

Compiling with ant works fine, and I am able to build a jar that is then
deployed to the cluster. On the cluster I've set the HADOOP_CLASSPATH
variable to point only to the jar files in the lib folder
($HD_HOME/lib/*.jar), where I put the newly compiled
hadoop-core-myversion.jar.
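
Just to make it concrete, this is roughly what I mean (the jar name is
only a placeholder for my build):

    # hadoop-env.sh on each node; hadoop-core-myversion.jar sits in
    # $HD_HOME/lib next to the other lib jars
    export HADOOP_CLASSPATH="$HD_HOME/lib/*.jar"

    # to see what actually ends up on the classpath (if this version
    # of the hadoop script supports the classpath subcommand):
    $HD_HOME/bin/hadoop classpath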

Before deploying, I make sure that there is no previous version of
hadoop-core-xxx.jar or core-3.3.1.jar in $HD_HOME or $HD_HOME/lib.
The problem is that I suspect Hadoop is picking up the wrong hadoop-core
jars, so I would like to understand how the whole mechanism works. Also,
what is the purpose of the $HD_HOME/share/hadoop folder, where I can find
other hadoop-core jars and which is included in the classpath in
hadoop-env.sh?
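
To find out which copies could be shadowing the one in lib, I was
thinking of scanning all jars for one of the hadoop-core classes,
something like this (JobTracker is just an example class):

    # list every jar under $HD_HOME that contains the JobTracker class,
    # so duplicate hadoop-core copies (e.g. under share/hadoop) show up
    find "$HD_HOME" -name '*.jar' | while read -r jar; do
      unzip -l "$jar" 2>/dev/null \
        | grep -q 'org/apache/hadoop/mapred/JobTracker.class' && echo "$jar"
    done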


My last question: what is the easiest way to verify that my own build is
actually up and running? Maybe from the release tag in the JobTracker (JT)?
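
Would something along these lines be enough, or is there a better way
(ports and page names below are just the defaults, as far as I know)?

    # client side: prints version, build revision and source checksum
    $HD_HOME/bin/hadoop version

    # JobTracker side: the web UI (default port 50030) shows a
    # version/revision string near the top of the page
    curl -s http://<jobtracker-host>:50030/jobtracker.jsp | grep -i version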

Thank you.
