JobConf.setJar(..) might be the way, but that class is deprecated and the
newer Job class has no corresponding method.
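
In case it helps, here is an untested sketch of the kind of client-side
submission I think you are after, written against the old
org.apache.hadoop.mapred API (which is where setJar(..) lives). The host
names, HDFS paths, and the jar location are placeholders for your setup; the
point is that you package your Mapper/Reducer into a jar on the client, and
setJar(..) tells Hadoop which jar to upload, so nothing has to be
pre-installed on the DataNodes/TaskTrackers.

import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

public class RemoteSubmit {

  // A plain word-count Mapper/Reducer, just so the jar contains something concrete.
  public static class MyMapper extends MapReduceBase
      implements Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    public void map(LongWritable key, Text value,
                    OutputCollector<Text, IntWritable> out, Reporter reporter)
        throws IOException {
      StringTokenizer tok = new StringTokenizer(value.toString());
      while (tok.hasMoreTokens()) {
        out.collect(new Text(tok.nextToken()), ONE);
      }
    }
  }

  public static class MyReducer extends MapReduceBase
      implements Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterator<IntWritable> values,
                       OutputCollector<Text, IntWritable> out, Reporter reporter)
        throws IOException {
      int sum = 0;
      while (values.hasNext()) {
        sum += values.next().get();
      }
      out.collect(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws IOException {
    JobConf conf = new JobConf();
    conf.setJobName("remote-submit-sketch");

    // Point the client at the already running cluster (placeholder host names).
    conf.set("fs.default.name", "hdfs://namenode-host:9000");
    conf.set("mapred.job.tracker", "jobtracker-host:9001");

    // Ship the jar that contains MyMapper/MyReducer. Hadoop copies it into
    // HDFS and puts it on the task classpath, so the classes do not need to
    // be installed on the DataNodes/TaskTrackers beforehand.
    conf.setJar("/local/path/to/myjob.jar");

    conf.setMapperClass(MyMapper.class);
    conf.setReducerClass(MyReducer.class);
    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(IntWritable.class);

    FileInputFormat.setInputPaths(conf, new Path("/user/me/input"));
    FileOutputFormat.setOutputPath(conf, new Path("/user/me/output"));

    JobClient.runJob(conf);   // submits the job and blocks until it finishes
  }
}

If the Hadoop client jars and the cluster configuration are on the client's
classpath, you should be able to run the main method with a plain java
command rather than going through the hadoop script.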



vishalsant wrote:
> 
> I am a newbie to Hadoop, so please bear with me if this is naive.
> 
> I have defined a Mapper/Reducer and I want to run it on a Hadoop cluster.
> My question is:
> 
> * Do I need to put the Mapper/Reducer on the classpath of all my
> DataNodes/JobTracker node, or can they be uploaded to the cluster as
> mobile code?
> 
> I would like:
> 
> * to define my job in its totality on a client JVM (independent of the
> cluster),
> * to compile it, and
> * to run its main method, with the conf pointing to the already
> established/running cluster.
> 
> The Mapper/Reducer should be serialized to the JobTracker JVM, the
> classes deserialized and mapped to the Mapper and Reducer, and, based on
> the input and output arguments, map() and reduce() should execute.
> 
> Is that even possible? 
> 
> Or do I have to manually move over to a Hadoop installation and always
> execute it through the hadoop executable?
> 
> 

