[ https://issues.apache.org/jira/browse/HADOOP-1521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12507968 ]
Enis Soztutar commented on HADOOP-1521:
---------------------------------------
>think about how you would explain it
The javadoc of {{JobConf}} explains this as:
{code}
/**
 * Construct a map/reduce job configuration.
 * @param exampleClass a class whose containing jar is used as the job's jar.
 */
public JobConf(Class exampleClass) {
  initialize();
  setJarByClass(exampleClass);
}
{code}
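For example, a minimal usage sketch ({{MyJob}} here is a stand-in for the user's driver class, not a real class in the codebase):
{code}
import org.apache.hadoop.mapred.JobConf;

public class MyJob {
  public static void main(String[] args) {
    // The jar containing MyJob is located via setJarByClass and used as
    // the job's jar, so its classes end up on the task classpath.
    JobConf conf = new JobConf(MyJob.class);
  }
}
{code}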
>The right solution is that the user should be able to specify any jar(s) and
>Hadoop should ship the jar(s) and put them on the class path in the
>executing environment.
We could set a system property from {{JobRunner}} to the jar file argument, and
then initialize {{JobConf}}s with this jar from the empty constructor.
However, I am not sure this is what we want. Are there any other votes on
this issue?
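For illustration, a minimal sketch of that idea; the property name {{hadoop.job.jar}} and the helper methods below are assumptions for the sketch, not existing Hadoop API:
{code}
// Hypothetical sketch of the system-property approach; the property name
// "hadoop.job.jar" and these helpers are illustrative assumptions.
public class JobJarPropertySketch {

  // What the launcher (e.g. JobRunner) would do with the jar argument
  // before invoking the user's main class:
  static void recordJobJar(String jarPath) {
    System.setProperty("hadoop.job.jar", jarPath);
  }

  // What the empty JobConf constructor could then do: read the property
  // and, if set, pass the jar to setJar() so it gets shipped.
  static String resolveJobJar() {
    return System.getProperty("hadoop.job.jar"); // null if not set
  }

  public static void main(String[] args) {
    recordJobJar("myjar.jar");
    System.out.println("JobConf would ship: " + resolveJobJar());
  }
}
{code}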
> Hadoop mapreduce should always ship the jar file(s) specified by the user
> --------------------------------------------------------------------------
>
> Key: HADOOP-1521
> URL: https://issues.apache.org/jira/browse/HADOOP-1521
> Project: Hadoop
> Issue Type: Bug
> Components: mapred
> Reporter: Runping Qi
> Assignee: Enis Soztutar
> Attachments: valueAggregator_v1.0.patch
>
>
> When I run a hadoop job like:
> bin/hadoop jar myjar org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob other_args
> myjar is not shipped. The job fails because the class loader cannot find the
> classes specified in myjar.