When you specify multiple partitions for HDFS storage, the DataNode uses
them for block storage in a round-robin fashion.
If a partition has insufficient free space, it is dropped from the set used
for storing new blocks.
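
For reference, the partitions are given as a comma-separated list in the
dfs.data.dir property in hdfs-site.xml (the paths below are just examples):

    <property>
      <name>dfs.data.dir</name>
      <value>/data/1/dfs/dn,/data/2/dfs/dn,/data/3/dfs/dn</value>
    </property>

The selection logic amounts to something like the simplified Java sketch
below; it illustrates the round-robin-with-skip behavior described above,
and is not the actual DataNode code:

    import java.io.File;
    import java.io.IOException;

    class VolumeSet {
        private final File[] volumes; // one entry per dfs.data.dir partition
        private int cur = 0;          // round-robin cursor

        VolumeSet(File[] volumes) { this.volumes = volumes; }

        /** Pick the next partition with room for a block of blockSize bytes. */
        File getNextVolume(long blockSize) throws IOException {
            for (int i = 0; i < volumes.length; i++) {
                File v = volumes[cur];
                cur = (cur + 1) % volumes.length; // advance the cursor
                if (v.getUsableSpace() >= blockSize) {
                    return v; // enough free space: store the block here
                }
                // too full: skip this partition for new blocks
            }
            throw new IOException("no partition has " + blockSize + " bytes free");
        }
    }

Because the cursor advances on every block, concurrent writers naturally
spread their blocks across all the configured disks.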

On Sun, Sep 13, 2009 at 3:01 AM, Stas Oskin <stas.os...@gmail.com> wrote:

> Hi.
>
> When I specify multiple disks for DFS, does Hadoop distribute the
> concurrent writes over the multiple disks?
>
> I mean, to prevent over-utilization of a single disk?
>
> Thanks for any info on subject.
>



-- 
Pro Hadoop, a book to guide you from beginner to Hadoop mastery,
http://www.amazon.com/dp/1430219424?tag=jewlerymall
www.prohadoopbook.com a community for Hadoop Professionals
