Hi all,
I started HDFS in DEBUG mode. Examining the logs, I found the entries below,
which indicate that the required replication factor is 3 (as opposed to the
configured dfs.replication=2).
DEBUG BlockStateChange: BLOCK* NameSystem.UnderReplicationBlock.add:
blk_1074074104_337394 has only 1
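One possible explanation, assuming the log above: dfs.replication is a client-side setting, so files written before the change (or by a client whose configuration still carries the default of 3) keep their original replication factor. A minimal sketch for checking and lowering the factor on existing files (the path /data is a placeholder):

```shell
# Show the per-file replication factor (the second column of the listing).
hdfs dfs -ls /data

# Lower the replication factor of existing files under /data to 2;
# -w waits until the re-replication completes. For a directory,
# setrep applies recursively to the files beneath it.
hdfs dfs -setrep -w 2 /data
```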
Regards,
Saidi
Thank you for the clarification. I am talking about fault tolerance in
MapReduce; is there any algorithm implemented in it for checkpointing and
replication?
On Mon, 26 Jun 2017 at 3:32 AM, Jasson Chenwei
wrote:
> Hi, Hassan.
>
> First, YARN (the scheduler) doesn't provide any fault tolerance
Hi Jeff!
Yes, hadoop-2.6 clients are able to read files on a hadoop-2.7 cluster. The
document I could find is
http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/Compatibility.html.
"Both Client-Server and Server-Server compatibility is preserved within a
major release"
HTH
R
Hi wuchang,
If you are using Hadoop 2.7+,
you can use the following parameters to limit the number of
simultaneously running map/reduce tasks per MapReduce application:
* mapreduce.job.running.map.limit (default: 0, for no limit)
* mapreduce.job.running.reduce.limit (default: 0, for no limit)
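Assuming the two properties above, they could be set per job on the command line rather than cluster-wide; a minimal sketch (the jar, driver class, and paths are placeholders):

```shell
# Cap a single MapReduce job at 10 concurrent map tasks
# and 5 concurrent reduce tasks (0 means no limit).
hadoop jar my-job.jar MyDriver \
  -D mapreduce.job.running.map.limit=10 \
  -D mapreduce.job.running.reduce.limit=5 \
  /input /output
```

Note that passing -D options this way assumes the driver goes through ToolRunner/GenericOptionsParser; otherwise the properties would need to be set in the job configuration or in mapred-site.xml.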
R
It looks like it can. But is there any document about the compatibility
between versions? Thanks