files from a single DFS client instance should not be a
> problem at all. You get one output stream per file creation, leaving you
> free to write to a number of them in parallel and also to expect some
> gain.
>
> On 19-Nov-2011, at 11:26 AM, kartheek muthyala wrote:
>
Hi all,
While I was walking through the code, I found that the namenode maintains
block --> machine and node --> blocks tables in memory. I have two doubts with
regard to this.
i. Why are there two tables for the same info? Isn't it redundant? Basically
block --> machine signifies, this block is located
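On the redundancy doubt, a toy sketch may help: the two directions answer two different questions cheaply. block --> nodes serves reads ("where can I fetch this block?"), while node --> blocks serves failure handling ("which blocks must be re-replicated when this datanode dies?"). The class and method names below are hypothetical, not Hadoop's actual data structures:

```java
import java.util.*;

// Illustrative sketch of why both map directions are kept in memory.
// Names are hypothetical, not the namenode's real classes.
public class NamenodeMaps {
    private final Map<String, Set<String>> blockToNodes = new HashMap<>();
    private final Map<String, Set<String>> nodeToBlocks = new HashMap<>();

    // Record that a replica of blockId lives on nodeId; both maps stay in sync.
    public void addReplica(String blockId, String nodeId) {
        blockToNodes.computeIfAbsent(blockId, k -> new HashSet<>()).add(nodeId);
        nodeToBlocks.computeIfAbsent(nodeId, k -> new HashSet<>()).add(blockId);
    }

    // Fast lookup for reads: which datanodes hold this block?
    public Set<String> nodesFor(String blockId) {
        return blockToNodes.getOrDefault(blockId, Collections.emptySet());
    }

    // Fast lookup for datanode failure: which blocks were on this node?
    public Set<String> blocksOn(String nodeId) {
        return nodeToBlocks.getOrDefault(nodeId, Collections.emptySet());
    }
}
```

Without the second map, losing a datanode would mean scanning every block entry to find the replicas that need re-replication.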
Hi all,
I am interested in knowing if there is any background daemon in Hadoop
that runs periodically, checking that all the data copies (blocks as
listed in the block map) exist and are not corrupted. Can you please point
me to that piece of code in Hadoop?
Thanks,
Kartheek.
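For context on the question above: I believe the datanode-side daemon being asked about is the block scanner (DataBlockScanner in org.apache.hadoop.hdfs.server.datanode in that era), which periodically re-reads stored blocks and verifies their checksums. A minimal, self-contained sketch of the core check follows; the names are hypothetical, and Hadoop actually checksums per 512-byte chunk rather than per whole block:

```java
import java.util.zip.CRC32;

// Sketch of the heart of a periodic block-verification pass: recompute the
// checksum of the stored bytes and compare against the checksum recorded at
// write time. Illustrative names, not Hadoop's actual scanner code.
public class BlockVerifier {
    public static long checksum(byte[] data) {
        CRC32 crc = new CRC32();
        crc.update(data, 0, data.length);
        return crc.getValue();
    }

    // Returns true if the block's bytes still match the recorded checksum.
    public static boolean verify(byte[] storedBytes, long recordedChecksum) {
        return checksum(storedBytes) == recordedChecksum;
    }
}
```

A corrupted replica fails this check, and the datanode can then report it to the namenode so a fresh replica is made from a good copy.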
Thanks Uma for the info.
On Thu, Nov 3, 2011 at 4:31 PM, Uma Maheswara Rao G 72686 <
mahesw...@huawei.com> wrote:
> Hello Karthik,
> see inline
> - Original Message -
> From: kartheek muthyala
> Date: Thursday, November 3, 2011 4:02 pm
> Subject: Re: Packets-
care of this? ).
~Kartheek.
On Thu, Nov 3, 2011 at 12:55 PM, Uma Maheswara Rao G 72686 <
mahesw...@huawei.com> wrote:
> - Original Message -
> From: kartheek muthyala
> Date: Thursday, November 3, 2011 11:23 am
> Subject: Packets->Block
> To: common-user@hadoop
Hi all,
I need some info related to the code section which handles the following
operations.
Basically, the DataStreamer in DFSOutputStream.java on the client side
transmits the block in packets, and on the datanode side we have the
DataXceiver.java and BlockReceiver.java files which take care of writing
these packets, in order, to a b
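A rough sketch of the client-side framing described above: the block's bytes are cut into fixed-size chunks, and several chunks are grouped into one packet for transmission. The constants and names here are illustrative only (HDFS's real defaults are a 512-byte checksummed chunk and far more chunks per packet), and per-chunk checksums are omitted for brevity:

```java
import java.util.*;

// Toy packetizer: split a block into chunks, group chunks into packets.
// Sizes and names are illustrative, not Hadoop's actual constants.
public class Packetizer {
    static final int CHUNK_SIZE = 512;       // bytes of data per chunk
    static final int CHUNKS_PER_PACKET = 4;  // illustrative; real packets hold more

    public static List<List<byte[]>> packetize(byte[] block) {
        List<List<byte[]>> packets = new ArrayList<>();
        List<byte[]> current = new ArrayList<>();
        for (int off = 0; off < block.length; off += CHUNK_SIZE) {
            int len = Math.min(CHUNK_SIZE, block.length - off);
            current.add(Arrays.copyOfRange(block, off, off + len));
            if (current.size() == CHUNKS_PER_PACKET) {
                packets.add(current);
                current = new ArrayList<>();
            }
        }
        if (!current.isEmpty()) packets.add(current); // trailing partial packet
        return packets;
    }
}
```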
Thanks for your time Uma, I figured out what
I need.
Thanks,
Kartheek.
On Tue, Oct 18, 2011 at 3:39 PM, Uma Maheswara Rao G 72686 <
mahesw...@huawei.com> wrote:
>
> - Original Message -
> From: kartheek muthyala
> Date: Tuesday, October 18, 2011 1:31 pm
> Subject: Re
wrote:
> - Original Message -
> From: kartheek muthyala
> Date: Tuesday, October 18, 2011 11:54 am
> Subject: Re: Does hadoop support append option?
> To: common-user@hadoop.apache.org
>
> > I am just concerned about the use case of appends in Hadoop. I
> > know tha
I am just concerned about the use case of appends in Hadoop. I know that
they have provided support for appends in Hadoop. But how frequently do the
files get appended? There is also this version concept that is
maintained in the block report; according to my guess, this version number is
main
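On the version concept: I believe the number being referred to is the block's generation stamp, which gets bumped when a block is appended to (or recovered), so that a replica that missed an append can be told apart from an up-to-date one. A toy model of that idea, with entirely hypothetical names:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;

// Toy replica whose generation stamp is bumped on every append, so a stale
// replica (older genStamp) can be detected and discarded during recovery.
// Illustrative names only, not Hadoop's actual classes.
public class ToyReplica {
    private long genStamp = 1;
    private final ByteArrayOutputStream data = new ByteArrayOutputStream();

    // Appends bytes and returns the new generation stamp.
    public long append(byte[] bytes) throws IOException {
        data.write(bytes);
        return ++genStamp; // newer stamp marks the newer version of the block
    }

    public long genStamp() { return genStamp; }
    public int length() { return data.size(); }
}
```

Comparing two replicas' stamps is then enough to decide which copy is current, without comparing their bytes.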
lose the socket which was used for writing the block.
> > The streamer thread repeats the loop. When it finds there are no sockets
> > open, it will again create the pipeline for the next block.
> > Go through the flow from writeChunk in DFSOutputStream.java, where
> exactly
> >
Kartheek.
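The loop Uma describes above can be condensed into a sketch: the streamer drains packets, sets up a pipeline whenever none is open, and tears it down on the last packet of a block, so the next block's first packet triggers a fresh pipeline. All names and types here are hypothetical, not Hadoop's DataStreamer:

```java
import java.util.*;

// Toy streamer loop mirroring the behavior described: open a pipeline when
// none is up, send packets, close the pipeline at each block boundary.
public class ToyStreamer {
    static class Packet {
        final boolean lastInBlock;
        Packet(boolean lastInBlock) { this.lastInBlock = lastInBlock; }
    }

    // Drains the queue; returns how many pipelines were set up along the way.
    public static int run(Queue<Packet> dataQueue) {
        int pipelinesCreated = 0;
        boolean pipelineOpen = false;
        while (!dataQueue.isEmpty()) {
            Packet p = dataQueue.poll();
            if (!pipelineOpen) {        // no sockets open: build next block's pipeline
                pipelineOpen = true;
                pipelinesCreated++;
            }
            // ... send p down the pipeline here ...
            if (p.lastInBlock) {
                pipelineOpen = false;   // close the sockets used for this block
            }
        }
        return pipelinesCreated;
    }
}
```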
On Sat, Sep 17, 2011 at 12:09 PM, Arun C Murthy wrote:
>
> On Sep 16, 2011, at 11:26 PM, kartheek muthyala wrote:
>
> > Any updates!!
>
> A bit of patience will help. It also helps to do some homework and ask
> specific questions.
>
> I don't know if yo
Any updates!!
-- Forwarded message --
From: kartheek muthyala
Date: Fri, Sep 16, 2011 at 8:38 PM
Subject: Job Scheduler, Task Scheduler and Fair Scheduler
To: common-user@hadoop.apache.org
Hi all,
Can anyone explain to me the responsibilities of each scheduler? I am
interested in the flow of commands that goes between these schedulers. And
does anyone have any info regarding how the job scheduler schedules a job
based on data locality? As far as I know, there is some heartbeat mechanism
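On the data-locality part: my understanding is that when a tasktracker heartbeats in asking for work, the scheduler prefers a task whose input split is stored on that node, falling back to a non-local task otherwise. A toy sketch of that preference, with hypothetical names:

```java
import java.util.*;

// Toy locality-aware pick: first pass looks for a task whose preferred
// (data-local) hosts include the heartbeating node; otherwise fall back to
// any runnable task. Illustrative only, not Hadoop's scheduler code.
public class ToyScheduler {
    static class Task {
        final String id;
        final Set<String> preferredHosts;
        Task(String id, String... hosts) {
            this.id = id;
            this.preferredHosts = new HashSet<>(Arrays.asList(hosts));
        }
    }

    public static Task pickTask(List<Task> runnable, String heartbeatingHost) {
        for (Task t : runnable) {                            // first pass: data-local
            if (t.preferredHosts.contains(heartbeatingHost)) return t;
        }
        return runnable.isEmpty() ? null : runnable.get(0);  // fall back: any task
    }
}
```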