All splits sent for processing in a Job carry a list of locations
where their blocks reside -- this, plus the network hierarchy details
held by the JT, is used to determine the locality level.
Have a look at JobInProgress.getLocalityLevel(), which takes a given
TaskInProgress object and a TaskTracker status object.
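The decision described above can be sketched in plain Java. This is a simplified illustration, not Hadoop's actual code: the names Locality, splitHosts, hostToRack and trackerHost are all assumptions of the sketch, while the real logic lives in JobInProgress.getLocalityLevel() and the JT's NetworkTopology.

```java
import java.util.List;
import java.util.Map;

public class LocalitySketch {
    enum Locality { NODE_LOCAL, RACK_LOCAL, OFF_SWITCH }

    // splitHosts: hosts holding a replica of the split's blocks
    // hostToRack: the topology mapping the JT resolves for each host
    // trackerHost: the TT asking for work
    static Locality getLocalityLevel(List<String> splitHosts,
                                     Map<String, String> hostToRack,
                                     String trackerHost) {
        String trackerRack = hostToRack.get(trackerHost);
        boolean sameRack = false;
        for (String h : splitHosts) {
            if (h.equals(trackerHost)) {
                return Locality.NODE_LOCAL;   // a replica is on this very node
            }
            if (trackerRack != null && trackerRack.equals(hostToRack.get(h))) {
                sameRack = true;              // a replica shares the TT's rack
            }
        }
        return sameRack ? Locality.RACK_LOCAL : Locality.OFF_SWITCH;
    }

    public static void main(String[] args) {
        Map<String, String> topo = Map.of(
            "host1", "/rack1", "host2", "/rack1", "host3", "/rack2");
        System.out.println(getLocalityLevel(List.of("host1"), topo, "host1")); // NODE_LOCAL
        System.out.println(getLocalityLevel(List.of("host1"), topo, "host2")); // RACK_LOCAL
        System.out.println(getLocalityLevel(List.of("host1"), topo, "host3")); // OFF_SWITCH
    }
}
```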
Hi,
I would like to understand how the JT chooses a TT on which to launch a
map task in a cluster setup. I pose four different questions below to
understand how this choice of TT is made by the JT. All of these
questions assume a scenario where Hadoop MR is installed in cluster mode.
1 - For example
Hi,
I have Hadoop installed on a cluster, and I would like the JT to be
able to work out, from the network topology, which input files in HDFS
are closer to it and which are farther away.
So, how can the JT know whether an input file is located at node-local
level, at rack level, or farther away?
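One way to see where a file's blocks sit in the topology is via FileSystem.getFileBlockLocations() and BlockLocation.getTopologyPaths(), which return paths like "/rack1/host1". Below is a minimal stdlib-only sketch (not Hadoop's NetworkTopology code) of how a distance falls out of two such paths, assuming a flat two-level rack/host tree: 0 means same node, 2 same rack, 4 off-rack.

```java
public class TopologyDistance {
    // pathA/pathB look like "/rack1/host1", as in getTopologyPaths() output.
    static int distance(String pathA, String pathB) {
        if (pathA.equals(pathB)) return 0;              // same node
        String rackA = pathA.substring(0, pathA.lastIndexOf('/'));
        String rackB = pathB.substring(0, pathB.lastIndexOf('/'));
        return rackA.equals(rackB) ? 2 : 4;             // same rack vs. off-rack
    }

    public static void main(String[] args) {
        System.out.println(distance("/rack1/host1", "/rack1/host1")); // 0
        System.out.println(distance("/rack1/host1", "/rack1/host2")); // 2
        System.out.println(distance("/rack1/host1", "/rack2/host3")); // 4
    }
}
```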
Thanks,
--
Pedro
Arun,
That explains it. I am running 0.20.2 in our production right now.
I guess I will wait for it to be finalized.
Thanks,
Felix
On Wed, Jan 12, 2011 at 6:48 PM, Arun C Murthy wrote:
> Felix,
>
> Which version of the CS, i.e. of Hadoop, are you looking at?
>
> As I mentioned, you'll need to
The second issue is because your Mapper code is unable to find the Solr
and other user classes (it does not have a JAR on its classpath to
load them from). You can avoid this by making sure that (either):
1. Your submission code is doing a Job.setJarByClass(...) or
Job.setJar(...) --> This will send the job JAR to the cluster, so the
tasks can load your classes from it.
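The mechanism behind Job.setJarByClass(...) is essentially "find the JAR this class was loaded from, then ship it with the job". A stdlib-only sketch of that lookup is below; it uses the class's CodeSource for illustration (Hadoop itself does a classpath scan internally), so treat it as an approximation, not Hadoop's exact code.

```java
import java.security.CodeSource;

public class JarFinder {
    /** Returns the JAR (or class directory) the class was loaded from,
     *  or null for bootstrap classes such as java.lang.Object. */
    static String findContainingJar(Class<?> clazz) {
        CodeSource src = clazz.getProtectionDomain().getCodeSource();
        return (src == null) ? null : src.getLocation().toString();
    }

    public static void main(String[] args) {
        // Object lives in the bootstrap loader, so there is no JAR to ship:
        System.out.println(findContainingJar(Object.class)); // null
        // Our own class resolves to wherever it was compiled or packaged:
        System.out.println(findContainingJar(JarFinder.class));
    }
}
```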
Install Cygwin from http://cygwin.org (the defaults should do, but you
may customize) and add its bin/ to your Windows PATH. This should give
you "tr" and the other utilities Hadoop's scripts use. Hadoop on
Windows is currently only supported with Cygwin, AFAIK.
On Thu, Jan 13, 2011 at 6:03 PM, Joan wrote:
> Can someone tell me how to compile the Hadoop project?
Can someone tell me how to compile the Hadoop project?
I downloaded trunk from svn and tried to follow the instructions at
http://wiki.apache.org/hadoop/EclipseEnvironment, but I'm using Eclipse
on Windows, so when I try to run build.xml I get the following error:
Buildfile: C:\workspace\hadoop-trunk\bu
Hi,
I'm trying to build a Solr index with MapReduce (Hadoop), using
https://issues.apache.org/jira/browse/SOLR-1301, but I have a problem
with the Hadoop version and this patch.
When I compile this patch against Hadoop 0.21.0 I don't have any
problems, but when I try to run my job in Hadoop