Thanks, I'll check it out.
On Tue, 27 Jun 2017 at 10:22 PM, Jasson Chenwei wrote:
Hi Hassan,
Actually, I haven't found any attempt at implementing checkpoint-based
fault tolerance in the community yet.
I think the reason is that the overhead is much larger than the gain, given
that each map task only runs for 30s~40s. However, I have read
some academic papers that
Hi Omprakash!
This is *not* ok. Please go through the logs of the inactive
datanode and figure out why it's inactive. If you set dfs.replication to 2,
at least that many datanodes (and ideally a LOT more) should be
active and participating in the cluster.
Do you have the
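For reference, a quick way to check how many datanodes the namenode
considers live, and what replication factor the client is actually
configured with (standard HDFS CLI commands; run them on a node with the
cluster's client configuration):

```shell
# List live/dead datanodes as reported by the namenode
hdfs dfsadmin -report

# Print the effective dfs.replication value seen by this client
hdfs getconf -confKey dfs.replication
```

If `dfsadmin -report` shows fewer live datanodes than the replication
factor, new blocks cannot be fully replicated no matter what is configured.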
Thanks Ravi
Ravi Prakash wrote on Tue, Jun 27, 2017 at 3:49 AM:
> Hi Jeff!
>
> Yes. hadoop-2.6 clients are able to read files on a hadoop-2.7 cluster.
> The document I could find is
> http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/Compatibility.html
> .
>
> "Both
It doesn't work like that - see https://hadoop.apache.org/mailing_lists.html
for instructions on how to unsubscribe.
On Tue, Jun 27, 2017 at 8:25 AM, Qiming Chen wrote:
> Unsubscribe
>
Hi all,
I started HDFS in DEBUG mode. Examining the logs, I found the entries below,
which indicate that the required replication factor is 3 (as opposed to the
specified dfs.replication=2).
DEBUG BlockStateChange: BLOCK* NameSystem.UnderReplicationBlock.add:
blk_1074074104_337394 has only 1
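In case it helps, under-replicated blocks can also be listed with fsck, and
the replication factor recorded for a given file can be checked and
adjusted (the paths below are placeholders, not ones from this thread):

```shell
# Summarize replication health; flags under-replicated blocks
hdfs fsck / | grep -i "under.replicated"

# Show the replication factor actually stored for a file
hdfs dfs -stat %r /path/to/file

# Lower existing files to replication 2 (-w waits until done)
hdfs dfs -setrep -w 2 /path/to/file
```

Note that changing dfs.replication in hdfs-site.xml only affects files
written afterwards; existing files keep the factor they were created with
until `-setrep` is applied.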