Maybe it is not reading the right configuration file.
If possible, make the file deliberately malformed, then restart the
TaskTracker; if it is actually reading that file, the restart must fail.
On Thu, Nov 14, 2013 at 11:53 PM, Vincent Y. Shen
vincent.y.s...@gmail.com wrote:
Hi, I tried but it is still not working... 4 nodes, 8 reducers, all
Try using job.waitForCompletion(true) instead of job.submit().
It should show more details.
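A minimal sketch of the difference, with the mapper/reducer and path setup
elided: waitForCompletion(true) blocks and prints progress to the client,
while submit() returns immediately and hides most failure details.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class SubmitExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = new Job(conf, "example"); // mapper/reducer/paths setup elided
    // job.submit();                    // fire-and-forget, few details on failure
    boolean ok = job.waitForCompletion(true); // true = print progress as it runs
    System.exit(ok ? 0 : 1);
  }
}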
On Mon, Apr 15, 2013 at 6:06 PM, Amit Sela am...@infolinks.com wrote:
Hi all,
I'm trying to submit a MapReduce job remotely using job.submit() and
I get the following:
[WARN ]
The versions do not match, as the log indicates:
namenode: 1.1.1
datanode: 1.1.2-SNAPSHOT
hadoop.relaxed.worker.version.check only works when the versions match (it
relaxes just the revision check).
You may want to try hadoop.skip.worker.version.check instead.
See https://issues.apache.org/jira/browse/HADOOP-8968
On
When submitting a job, the ToolRunner or JobClient just distributes your jars
to HDFS so that TaskTrackers can launch/re-run the tasks.
In your case, you should re-generate your dynamic classes in the
mapper/reducer's setup() method, or the runtime classloader will miss them
all.
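A minimal sketch of that advice; DynamicClassFactory and RecordTransform are
hypothetical names. The point is only that the dynamic class is rebuilt
inside setup(), on the task node, since only the jars shipped at submit time
reach the task's classloader:

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class DynamicMapper extends Mapper<LongWritable, Text, Text, Text> {
  private RecordTransform transform; // hypothetical dynamically generated type

  @Override
  protected void setup(Context context) {
    // Re-generate the class here, not in the client JVM before submission.
    transform = DynamicClassFactory.generate(context.getConfiguration());
  }

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    context.write(new Text(transform.apply(value.toString())), value);
  }
}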
On Tue, Nov 13, 2012 at
What about the gc logs?
On Fri, Nov 2, 2012 at 11:16 AM, Harsh J ha...@cloudera.com wrote:
Do you run a TaskTracker on your JobTracker machine?
On Fri, Nov 2, 2012 at 2:17 AM, Patai Sangbutsarakum
silvianhad...@gmail.com wrote:
I have a check monitoring the page
For the timeout problem, you can use a background thread that invokes
context.progress() periodically, which keeps the forked Child
(mapper/combiner/reducer) alive...
It is tricky, but it works.
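A minimal sketch of that trick, assuming a 30-second interval (an arbitrary
value, just below typical task-timeout settings); context.progress() only
reports liveness and emits no output:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class KeepAliveReducer extends Reducer<Text, Text, Text, Text> {
  private ScheduledExecutorService keepAlive;

  @Override
  protected void setup(final Context context) {
    keepAlive = Executors.newSingleThreadScheduledExecutor();
    keepAlive.scheduleAtFixedRate(new Runnable() {
      @Override
      public void run() {
        context.progress(); // tell the TaskTracker this task is still alive
      }
    }, 30, 30, TimeUnit.SECONDS);
  }

  @Override
  protected void cleanup(Context context) {
    keepAlive.shutdownNow(); // stop the heartbeat when the task finishes
  }
}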
On Sat, May 5, 2012 at 10:05 PM, Zuhair Khayyat
zuhair.khay...@kaust.edu.sa wrote:
Hi,
I am building a
Append your custom codec's full class name to io.compression.codecs, either
in mapred-site.xml or in the Configuration object passed to the Job
constructor.
The MapReduce framework will try to guess the compression algorithm from the
input file's suffix:
if any registered CompressionCodec's getDefaultExtension() matches it, that
codec is used.
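A minimal sketch of the registration step; com.example.MyCodec is a
hypothetical class name, and the property holds a comma-separated list, so
the custom codec is appended to whatever is already configured:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class CodecRegistration {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    String codecs = conf.get("io.compression.codecs", "");
    conf.set("io.compression.codecs",
        codecs.isEmpty() ? "com.example.MyCodec"
                         : codecs + ",com.example.MyCodec");
    Job job = new Job(conf, "codec example"); // conf passed to the Job constructor
    // ... rest of the job setup elided
  }
}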
not be possible with the
current implementation?
Or if so, how would I proceed with injecting the file name?
--
Greg
On 2012-04-11 10:12, Zizon Qiu wrote:
Append your custom codec's full class name to io.compression.codecs, either
in mapred-site.xml or in the Configuration object passed
of
Hadoop?
--
Greg
On 2012-04-11 10:44, Zizon Qiu wrote:
If you are:
1. using TextInputFormat,
2. all input files end with a certain suffix like .gz, and
3. the custom CompressionCodec is already registered in the configuration and
its getDefaultExtension() returns that same suffix as described
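To illustrate condition 3, a hypothetical codec that extends the built-in
GzipCodec only to override the suffix matched against input file names;
".mygz" is an invented extension:

import org.apache.hadoop.io.compress.GzipCodec;

public class MyCodec extends GzipCodec {
  @Override
  public String getDefaultExtension() {
    return ".mygz"; // input files must end with this suffix to be matched
  }
}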
You may grab it from the Context object that is passed to the
map/reduce/setup/cleanup methods.
context.getInputSplit() returns an InputSplit object which, in most cases, is
of type FileSplit.
If you are using a non-standard FileInputFormat, refer to the getSplits()
method of that particular InputFormat.
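A minimal sketch of that lookup inside a Mapper; the instanceof guard matters
because not every InputFormat produces FileSplit instances:

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

public class FileNameMapper extends Mapper<LongWritable, Text, Text, Text> {
  private String fileName = "unknown";

  @Override
  protected void setup(Context context) {
    InputSplit split = context.getInputSplit();
    if (split instanceof FileSplit) {
      fileName = ((FileSplit) split).getPath().getName();
    }
  }
}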
If there are only DFS files under /data and /data2, it will be OK when they
fill up.
But if there are other files there, like a MapReduce temp folder or even a
namenode image, it may break the cluster when the disk fills up (the namenode
cannot do a checkpoint, or the MapReduce framework cannot continue because no
disk space is left).
It may trigger an IOException and cause the current reduce task on that node
to fail; the JobTracker will then try to assign that task to another node.
Not quite sure.
On Tue, Jan 10, 2012 at 6:19 PM, aliyeh saeedi a1_sae...@yahoo.com wrote:
Hi
I am going to save files written by reducers, but
You may achieve this by setting a large heartbeat.recheck.interval to prevent
it from being marked as dead, but this is not recommended.
Just take the datanode down, do whatever you want, and bring it back.
Once it heartbeats successfully, the FSNamesystem will recognize that those
missing blocks had