Hi Maneesh,
Thanks a lot for this! Just distributed it over the team and comments are
great :)
Best regards,
Dejan
On Wed, Nov 30, 2011 at 9:28 PM, maneesh varshney <mvarsh...@gmail.com> wrote:
For your reading pleasure!
PDF (3.3MB) uploaded at (the mailing list has a cap of 1MB on attachments):
Thanks Maneesh.
Quick question: does a client really need to know the block size and
replication factor? A lot of the time the client has no control over these
(they are set at the cluster level).
-Prashant Kommireddi
Hi Prashant,
Others may correct me if I am wrong here...
The client (org.apache.hadoop.hdfs.DFSClient) has knowledge of the block size
and replication factor. In the source code, I see the following in the
DFSClient constructor:
defaultBlockSize = conf.getLong("dfs.block.size", ...)
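The constructor line above reads the cluster-configured default. As a rough sketch of the precedence this implies (an explicit per-call value wins, otherwise the configured value, otherwise a hard-coded default), here is a minimal, self-contained illustration; the class and method names are illustrative, not Hadoop's actual API:

```java
// Sketch of DFSClient-style default resolution: per-call override,
// else configured value, else hard-coded default. Illustrative only.
import java.util.HashMap;
import java.util.Map;

public class BlockSizeDefaults {
    // 64 MB was the classic HDFS default block size.
    static final long DEFAULT_BLOCK_SIZE = 64L * 1024 * 1024;

    private final Map<String, String> conf = new HashMap<>();

    void set(String key, String value) { conf.put(key, value); }

    // Mirrors conf.getLong(key, default): configured value if present,
    // otherwise the supplied default.
    long getLong(String key, long defaultValue) {
        String v = conf.get(key);
        return v == null ? defaultValue : Long.parseLong(v);
    }

    // A per-call override (e.g. an explicit blockSize argument when
    // creating a file) takes precedence over the configuration.
    long effectiveBlockSize(Long perCallOverride) {
        if (perCallOverride != null) return perCallOverride;
        return getLong("dfs.block.size", DEFAULT_BLOCK_SIZE);
    }

    public static void main(String[] args) {
        BlockSizeDefaults c = new BlockSizeDefaults();
        System.out.println(c.effectiveBlockSize(null));  // hard-coded default
        c.set("dfs.block.size", String.valueOf(128L * 1024 * 1024));
        System.out.println(c.effectiveBlockSize(null));  // configured value
        System.out.println(c.effectiveBlockSize(256L * 1024 * 1024)); // override
    }
}
```

In real HDFS the same idea shows up in FileSystem.create(), which has overloads accepting explicit replication and blockSize arguments that take precedence over the configuration.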
Sure, it's just a case of how readers interpret it:
1. The client is required to specify the block size and replication factor
each time, or
2. The client does not need to worry about it, since an admin has set the
properties in the default configuration files.
A client could not be allowed to override the cluster-level configuration.
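For case 2, the admin-side defaults would typically live in hdfs-site.xml. A minimal sketch (property names as used in Hadoop 1.x; the values here are examples, not recommendations):

```xml
<!-- hdfs-site.xml: cluster-level defaults set by an admin,
     so clients need not specify them per file. -->
<configuration>
  <property>
    <name>dfs.block.size</name>
    <value>67108864</value> <!-- 64 MB -->
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
```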
Is that conversation happening between client / server, and not user / client?
Matt
-Original Message-
From: Prashant Kommireddi [mailto:prash1...@gmail.com]
Sent: Wednesday, November 30, 2011 3:28 PM
To: common-user@hadoop.apache.org
Subject: Re: HDFS Explained as Comics