datanodes are placed into after 30 seconds of missing heartbeats. (This is
> an optional feature controlled by dfs.namenode.check.stale.datanode )
>
> best,
> Colin
>
>
> On Tue, Mar 12, 2013 at 5:29 PM, André Oriani wrote:
>
> > No take on this one?
> >
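For reference, a minimal hdfs-site.xml sketch of the stale-datanode check described above. The boolean key is the one named in the thread; the interval key and its 30-second default are my recollection and may differ by version:

```xml
<!-- hdfs-site.xml -->
<!-- Mark a DataNode "stale" once it misses heartbeats for the interval below. -->
<property>
  <name>dfs.namenode.check.stale.datanode</name>
  <value>true</value>
</property>
<!-- Assumed key; 30000 ms matches the 30-second window mentioned above. -->
<property>
  <name>dfs.namenode.stale.datanode.interval</name>
  <value>30000</value>
</property>
```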
decisions and for
ensuring replication levels. Would that then be the reason why heartbeats
are so frequent? Can a lot really happen to a DataNode in just three seconds?
Thanks,
André Oriani
On Thu, Mar 7, 2013 at 10:37 PM, André Oriani wrote:
> Hi,
>
> Is there any particular reason why th
Hi,
Is there any particular reason why the default heartbeat interval is 3
seconds and the timeout is 10 minutes? Everywhere I looked (code, Google,
...) mentions only the values, with no clue as to why they were chosen.
Thanks in advance,
André Oriani
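(Aside: as I recall from the NameNode code, the ten-minute figure is derived rather than configured directly, combining the heartbeat interval with the heartbeat recheck interval. A sketch of that formula; the class and method names below are illustrative, not Hadoop's:)

```java
// Sketch of how the DataNode "dead" timeout is derived from the two
// heartbeat settings. The formula is my recollection of the NameNode
// code; names here are illustrative, not the real Hadoop identifiers.
public class HeartbeatTimeout {
    public static long expireIntervalMs(long heartbeatIntervalMs,
                                        long recheckIntervalMs) {
        // NameNode declares a DataNode dead after missing heartbeats for
        // 2 * recheck-interval + 10 * heartbeat-interval.
        return 2 * recheckIntervalMs + 10 * heartbeatIntervalMs;
    }

    public static void main(String[] args) {
        long heartbeat = 3_000;    // dfs.heartbeat.interval default: 3 s
        long recheck   = 300_000;  // heartbeat recheck default: 5 min
        long expire = expireIntervalMs(heartbeat, recheck);
        System.out.println(expire / 60_000.0 + " minutes"); // prints 10.5 minutes
    }
}
```

With the defaults this works out to 630000 ms, i.e. about ten and a half minutes, which would explain why the timeout is commonly rounded to "10 minutes".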
Hi Ivan.
I filed https://issues.apache.org/jira/browse/HDFS-2090
Regards,
André
On Mon, Jun 20, 2011 at 05:39, Ivan Kelly wrote:
> commit 27b956fa62ce9b467ab7dd287dd6dcd5ab6a0cb3
> Author: Hairong Kuang
> Date: Mon Apr 11 17:15:27 2011 +
>
> HDFS-1630. Support fsedits checksum. Con
: Mon Apr 11 17:15:27 2011 +
HDFS-1630. Support fsedits checksum. Contributed by Hairong Kuang.
git-svn-id:
https://svn.apache.org/repos/asf/hadoop/hdfs/trunk@1091131 13f79535-47bb-0310-9956-ffa450edef68
Regards,
André Oriani
On Thu, Jun 16, 2011 at 07:31, Ivan Kelly wrote:
> T
Personally, I would expect to run from source, because sometimes it is good to
test by running the actual thing.
Regards,
André Oriani
On Thu, Jun 16, 2011 at 21:15, Kirk True wrote:
> Should running ./bin/hdfs from the source root work?
>
> I get these errors:
>
>[kirk@bubbas
(FSEditLogLoader.java:490)
... 13 more
Thanks and Regards,
André Oriani
enode ?
Thanks for your time and regards,
André Oriani
at make this easier:
>
> https://github.com/elicollins/hadoop-dev
>
> Thanks,
> Eli
>
> On Tue, Jun 7, 2011 at 11:56 AM, André Oriani
> wrote:
> > Hi,
> >
> >
> > I have cloned the repos for hadoop-common and hadoop-hdfs and built them
> using
Hi,
I have cloned the repos for hadoop-common and hadoop-hdfs and built them using
"ant mvn-install". Now I would like to be able to run HDFS in
pseudo-distributed mode to test some modifications of mine. One year ago I
could do it, but now I have had no luck. The scripts are failing, complaining
about not
". Am I missing any thread?
Thanks,
André Oriani
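For concreteness, the workflow being attempted above looks roughly like the sketch below. Every path and command here is an assumption about the 2011 ant-based source layout, not a verified recipe; adjust to your checkout:

```shell
# Build both projects so the snapshot jars land in the local Maven repo
cd hadoop-common && ant mvn-install
cd ../hadoop-hdfs && ant mvn-install

# Point the scripts at the freshly built tree, then format and start
export HADOOP_CONF_DIR="$PWD/conf"   # pseudo-distributed configs live here
./bin/hdfs namenode -format
./bin/hdfs namenode &                # and likewise: ./bin/hdfs datanode &
```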
eived messages to both
avatar namenodes. Do both avatar namenodes send DataCommands back? If so,
how are contradictory commands avoided? What would be the impact of
forwarding messages to a third, fourth, ... namenode?
Thanks and Regards,
André Oriani
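One plausible resolution (an assumption on my part, not taken from the AvatarNode code): the datanode offers the heartbeat to every namenode but acts only on the commands returned by the one it currently considers primary, so a standby's contradictory commands are simply dropped. A toy sketch; the interface and names are hypothetical:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Hypothetical fan-out sketch: heartbeats go to all namenodes, but only
// the primary's reply is obeyed. This is an illustration of one design,
// not the actual AvatarNode protocol.
interface Namenode {
    List<String> heartbeat(String datanodeId);  // returns commands
}

public class FanOutHeartbeat {
    public static List<String> offerHeartbeat(String dnId,
                                              List<Namenode> namenodes,
                                              int primaryIndex) {
        List<String> toExecute = Collections.emptyList();
        for (int i = 0; i < namenodes.size(); i++) {
            List<String> cmds = namenodes.get(i).heartbeat(dnId);
            if (i == primaryIndex) {
                toExecute = cmds;   // obey only the primary's commands
            }                       // standby replies are ignored
        }
        return toExecute;
    }

    public static void main(String[] args) {
        Namenode primary = dn -> Arrays.asList("REPLICATE blk_1");
        Namenode standby = dn -> Arrays.asList("DELETE blk_1");
        List<String> cmds =
            offerHeartbeat("dn-1", Arrays.asList(primary, standby), 0);
        System.out.println(cmds);   // prints [REPLICATE blk_1]
    }
}
```

Under this scheme a third or fourth namenode would add heartbeat traffic and RPC latency on the datanode side, but no new conflict-resolution logic.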
Hi,
I am studying how block reports are processed, but I am not sure if I
understood how BlockInfo::triplets are used by DatanodeDescriptors and
BlocksMap.
That's what I understood:
For each Block, triplets[i] with i % 3 == 0 gives the datanodes that are
storing the block. New datanodes are inserte
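That matches my reading of the layout too: slot 3*i holds the datanode of replica i, while slots 3*i+1 and 3*i+2 link the block into that datanode's doubly-linked block list. A toy sketch of just the indexing; field and accessor names are illustrative, not the real BlockInfo API:

```java
// Toy model of BlockInfo.triplets: for replica i of a block,
//   triplets[3*i]   -> the datanode storing that replica
//   triplets[3*i+1] -> previous block in that datanode's block list
//   triplets[3*i+2] -> next block in that datanode's block list
// The linked list lets a datanode descriptor enumerate its blocks
// without a separate per-node collection.
public class TripletsSketch {
    Object[] triplets;                 // length = 3 * replication

    TripletsSketch(int replication) {
        triplets = new Object[3 * replication];
    }

    Object getDatanode(int i)  { return triplets[3 * i]; }
    Object getPrevious(int i)  { return triplets[3 * i + 1]; }
    Object getNext(int i)      { return triplets[3 * i + 2]; }

    void setDatanode(int i, Object dn) { triplets[3 * i] = dn; }

    public static void main(String[] args) {
        TripletsSketch b = new TripletsSketch(3);  // replication factor 3
        b.setDatanode(0, "dn1");
        System.out.println(b.getDatanode(0));      // prints dn1
    }
}
```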
Hi Folks,
I have sent a patch to HDFS-1031 and it has passed all the Hudson tests.
Could someone please review the patch and send comments? I would be
very grateful.
Thanks a lot,
--
André Oriani
MSc Candidate at Computer Science Institute-Unicamp
BRAZIL
> André Oriani wrote:
>> Hi all,
>>
>> I am working on a patch for HDFS-1031, which will add a new page for the
>> WebUI. I talked to Todd Lipcon on freenode and he said there is no UT for
>> the web ui. Actually there is one --
>> src/test/hdfs/org/apache/
Hi all,
I am working on a patch for HDFS-1031, which will add a new page for the
WebUI. I talked to Todd Lipcon on freenode and he said there is no UT for
the web ui. Actually there is one --
src/test/hdfs/org/apache/hadoop/hdfs/TestMissingBlocksAlert.java -- but it
is kind of a mixed test rather th