Why are you reading the files with a buffered reader in the map function?
The problem with your code might be due to the following:
The files in "/data/path/to/file/d_20150330-1650" will be locally stored
and will not be accessible to the mappers running on different nodes, and
as in your ma
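If local visibility is indeed the issue, the usual fix is to copy the inputs into HDFS first so mappers on every node can open them. A sketch reusing the path from the question (`hdfs dfs` assumes a Hadoop 2.x client on the PATH):

```shell
# Copy the local file into HDFS so mappers on any node can read it.
hdfs dfs -mkdir -p /data/path/to/file
hdfs dfs -put /data/path/to/file/d_20150330-1650 /data/path/to/file/

# Verify it is now visible cluster-wide:
hdfs dfs -ls /data/path/to/file
```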
Thanks for your input, but I need to launch my own node manager
(different from the Yarn NM) running on each node.
(which is not explained in the original question)
If I were to launch just a single master with a well-known address,
ZooKeeper would be a great solution!
Thanks.
Dongwon Kim
2015-03
Dear All:
I am unable to start Hadoop even after setting HADOOP_INSTALL, JAVA_HOME, and
JAVA_PATH. Please find the error message below:
anand_vihar@Latitude-E5540:~/hadoop-2.6.0$ start-dfs.sh --config
/home/anand_vihar/hadoop-2.6.0/conf
Starting namenodes on [localhost]
localhost: Error: JAVA_HOME is no
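This usually means the JAVA_HOME exported in your interactive shell is not visible to the daemons: start-dfs.sh launches them over ssh, so the variable must be set in hadoop-env.sh itself. A sketch, assuming an OpenJDK install path (adjust to your own JDK location):

```shell
# In the config dir you pass with --config (or etc/hadoop/ in the 2.6.0 tarball):
# hadoop-env.sh is sourced by every daemon, so JAVA_HOME set here survives ssh.
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64   # adjust to your JDK path
```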
Eating up the IOException in the mapper looks suspicious to me. That can
silently consume the input without producing any output. Also check the map
task's sysout logs for your console print output.
As an aside, since you are not doing anything in the reduce, try setting the
number of reduces to 0. That will for
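The first point above — a swallowed exception silently consuming records — is easy to reproduce outside Hadoop. A minimal sketch (not the poster's actual code; `parse` stands in for whatever the map body does per record):

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class SwallowDemo {
    // Stand-in for a map() body that parses one record and may throw.
    static String parse(String line) throws IOException {
        if (!line.startsWith("{")) throw new IOException("not JSON: " + line);
        return line;
    }

    // swallow = true mimics catching the IOException and doing nothing.
    static List<String> mapAll(List<String> input, boolean swallow) {
        List<String> out = new ArrayList<>();
        for (String line : input) {
            try {
                out.add(parse(line));
            } catch (IOException e) {
                if (!swallow) {
                    throw new RuntimeException(e); // surface the failure instead
                }
                // swallowed: the bad record vanishes without any output or log
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Every record is consumed, nothing is emitted, and no error is seen:
        System.out.println(mapAll(List.of("oops", "also bad"), true)); // prints []
    }
}
```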
--
Thanks,
*Manikandan Ramakrishnan*
What is the reason of using the queue?
"job.getConfiguration().set("mapred.job.queue.name", "exp_dsa");"
Is your mapper or reducer even being called?
Try adding the @Override annotation to the map/reduce methods, as below:
@Override
public void map(Object key, Text value, Context context) throws
IOException, InterruptedException {
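The pitfall @Override guards against can be shown in plain Java, without Hadoop: a method whose signature does not exactly match the parent's silently overloads instead of overriding, so the framework keeps calling the parent's do-nothing version. A small illustrative sketch:

```java
public class OverrideDemo {
    static class Base {
        public String map(Object key) { return "base"; }
    }

    static class Child extends Base {
        // Parameter type String, not Object: this OVERLOADS map() instead of
        // overriding it. Writing @Override here would be a compile-time error,
        // which is exactly how the annotation catches the mistake early.
        public String map(String key) { return "child"; }
    }

    public static void main(String[] args) {
        Base b = new Child();
        // Called through the framework's (Base) reference, the base version
        // runs -- the subclass method is never invoked:
        System.out.println(b.map((Object) "k")); // prints "base"
    }
}
```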
I'm not sure why my Mapper and Reducer have no output. The logic behind my
code is: given a file of UUIDs (newline-separated), I want to use
`globStatus` to list the paths of all potential files the UUID might be in,
then open and read each file. Each file contains 1-n lines of JSON.
The UUI
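Before debugging the job itself, it can help to sanity-check the glob pattern locally. Hadoop's `FileSystem.globStatus` uses shell-style globs much like `java.nio`'s `PathMatcher`, so a quick local check of the pattern (the `data/*/part-*.json` layout below is hypothetical) can rule out a pattern that matches nothing:

```java
import java.nio.file.FileSystems;
import java.nio.file.Paths;

public class GlobCheck {
    // True if the glob pattern matches the given path. Local-filesystem
    // semantics; Hadoop's glob supports *, ?, [a-b], and {a,b} similarly.
    static boolean matches(String pattern, String path) {
        return FileSystems.getDefault()
                .getPathMatcher("glob:" + pattern)
                .matches(Paths.get(path));
    }

    public static void main(String[] args) {
        // Hypothetical layout: JSON parts bucketed under per-UUID directories.
        System.out.println(matches("data/*/part-*.json", "data/ab/part-00000.json")); // true
        System.out.println(matches("data/*/part-*.json", "data/ab/log.txt"));         // false
    }
}
```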