Thanks Hemanth,
Yes, the Java variables are passed as -Dkey=value. But for the arguments passed to
the main method (i.e. String[] args) I cannot find any other way to pass them
apart from hadoop jar CLASSNAME arguments. So if I have a job file, I will
compulsorily have to use the java
By Java environment variables, do you mean the ones passed as
-Dkey=value? That's one way of passing them. I suppose another way is
to have a client-side site configuration (like mapred-site.xml) that
is in the classpath of the client app.
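For illustration, something along these lines should work (an untested sketch;
MyDriver, my-client-site.xml and my.key are made-up names). A driver that goes
through ToolRunner gets the -Dkey=value options parsed into its Configuration,
and the remaining command-line arguments still arrive in run(String[] args):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.conf.Configured;
  import org.apache.hadoop.util.Tool;
  import org.apache.hadoop.util.ToolRunner;

  public class MyDriver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
      Configuration conf = getConf();
      // -Dkey=value options from the command line are already in conf,
      // because ToolRunner ran them through GenericOptionsParser.
      // A client-side site file on the classpath can be pulled in explicitly:
      conf.addResource("my-client-site.xml");
      System.out.println("my.key = " + conf.get("my.key"));
      // ... build and submit the Job here; args[] holds the non -D arguments ...
      return 0;
    }

    public static void main(String[] args) throws Exception {
      System.exit(ToolRunner.run(new MyDriver(), args));
    }
  }

Invoked, for example, as: hadoop jar myjob.jar MyDriver -Dmy.key=value in out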
Thanks
Hemanth
On Tue, Sep 25, 2012 at 12:20 AM, Varad
Building on Hemanth's answer: in the end your variables should be in the
job.xml (the second file, besides the jar, needed to run a job). Building this
job.xml can be done in various ways, but it does inherit from your local
configuration, and you can change it using the Java API, but at the end it is
only
You could always write your own properties file and read it as a resource.
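To make that concrete, here is a rough, untested sketch (the class name, file
name and property names are made up) that loads a properties file from the
classpath and copies it into the job's Configuration, so the values end up in
the generated job.xml:

  import java.io.InputStream;
  import java.util.Properties;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.mapreduce.Job;

  public class SubmitWithProps {
    public static void main(String[] args) throws Exception {
      // Read your own properties file as a classpath resource.
      Properties props = new Properties();
      InputStream in = SubmitWithProps.class.getClassLoader()
          .getResourceAsStream("myjob.properties");
      if (in != null) {
        props.load(in);
        in.close();
      }

      // new Configuration() already inherits the local *-site.xml files.
      Configuration conf = new Configuration();
      for (String name : props.stringPropertyNames()) {
        conf.set(name, props.getProperty(name));   // ends up in job.xml
      }

      Job job = Job.getInstance(conf, "my job");   // new Job(conf, "my job") on older releases
      // ... set mapper/reducer, input/output paths, then job.waitForCompletion(true) ...
    }
  }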
On Tue, Sep 25, 2012 at 12:10 AM, Hemanth Yamijala yhema...@gmail.comwrote:
By java environment variables, do you mean the ones passed as
-Dkey=value ? That's one way of passing them. I suppose another way is
to have a
Thanks Hemanth,
But in general, if we want to pass arguments to any job (not only
PiEstimator from the examples jar) and submit the job to the job queue
scheduler, by the looks of it, we might always need to use the Java
environment variables only.
Is my above assumption correct?
Thanks,
Varad
On
Hi,
I want to run the PiEstimator example using the following command
$hadoop job -submit pieestimatorconf.xml
where pieestimatorconf.xml contains all the info required by Hadoop to run the
job, e.g. the input file location, the output file location and other details.
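For reference, such a file is just an ordinary Hadoop configuration XML; the
property names and paths below are only illustrative placeholders, not an
actual working configuration:

  <?xml version="1.0"?>
  <configuration>
    <property>
      <name>mapred.input.dir</name>
      <value>/user/varad/pi/input</value>
    </property>
    <property>
      <name>mapred.output.dir</name>
      <value>/user/varad/pi/output</value>
    </property>
    <!-- mapper/reducer classes, formats, etc. would go here as well -->
  </configuration>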
Varad,
Looking at the code for the PiEstimator class which implements the
'pi' example, the two arguments are mandatory and are used *before*
the job is submitted for execution, i.e. on the client side. In
particular, one of them (nSamples) is used not by the MapReduce job,
but by the client code.
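In other words, the flow is roughly like the sketch below (a simplified
illustration of the pattern, not the actual PiEstimator source; the class name
is made up and the real setup of inputs and mapper/reducer is omitted):

  import org.apache.hadoop.mapred.JobClient;
  import org.apache.hadoop.mapred.JobConf;

  public class PiLikeClient {
    public static void main(String[] args) throws Exception {
      // Both arguments are consumed here, on the client side,
      // before anything is submitted to the cluster.
      int nMaps = Integer.parseInt(args[0]);
      long nSamples = Long.parseLong(args[1]);

      JobConf job = new JobConf(PiLikeClient.class);
      job.setJobName("pi-like-example");
      // The client would first write nMaps small input files, each telling one
      // map task to generate nSamples points, and set input/output paths.
      // Only after that client-side work is the job actually submitted:
      JobClient.runJob(job);
      // The client then reads the job's output and combines it with nSamples
      // to compute and print the final estimate.
    }
  }

So a plain -submit of a job.xml has no place to carry those two values; they
have to reach the client code (e.g. via the command line) before submission.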