I think that the question confused people. This is open-source, after all,
so you can use the code any way you like. It sounds like you are asking
permission to use the code which you, of course, already have.

On Tue, Sep 15, 2009 at 4:51 AM, Stas Oskin wrote:
> ... I actually asked, if I can
> Or is it DFS only?
>
> Regards.
>
> 2009/9/14 Jason Venner
>
> > When you have multiple partitions specified for hdfs storage, they are
> > used for block storage in a round robin fashion.
> > If a partition has insufficient space it is dropped from the set used
> > for storing new blocks.
> >
> > On Sun, Sep 13, 2009 at 3:01 AM, Stas Oskin wrote:
> > >
gt; > > When you have multiple partitions specified for hdfs storage, they are
> > used
> > > for block storage in a round robin fashion.
> > > If a partition has insufficient space it is dropped for the set used
> for
> > > storing new blocks.
> >
> > > Hi.
> > >
> > > When I specify multiple disks for DFS, does Hadoop distribute the
> > > concurrent writes over the multiple disks?
> > >
> > > I mean, to prevent over-utilization of a single disk?
> > >
> > > Thanks for any info on the subject.
--
Pro Hadoop, a book to guide you from beginner to hadoop mastery,
http://www.a
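The allocation behavior Jason describes above can be sketched as follows. This is a hypothetical illustration of round-robin volume selection where a volume without enough free space is dropped from consideration for new blocks — it is not the actual HDFS DataNode code, and the class and path names are made up:

```python
class RoundRobinVolumePicker:
    """Toy model: cycle through storage volumes round-robin, skipping
    any volume that cannot hold the next block (all numbers are bytes)."""

    def __init__(self, volumes):
        # volumes: list of (name, free_space) pairs; names are invented
        self.volumes = [{"name": n, "free": f} for n, f in volumes]
        self.next_index = 0

    def pick(self, block_size):
        """Return the next volume with room for block_size, advancing
        the round-robin cursor; raise if every volume is too full."""
        for _ in range(len(self.volumes)):
            vol = self.volumes[self.next_index]
            self.next_index = (self.next_index + 1) % len(self.volumes)
            if vol["free"] >= block_size:
                vol["free"] -= block_size
                return vol["name"]
        raise IOError("no volume has enough free space for the block")


picker = RoundRobinVolumePicker([("/disk1", 300), ("/disk2", 100), ("/disk3", 300)])
# /disk2 runs out after one 100-byte block and is skipped thereafter
print([picker.pick(100) for _ in range(5)])
# → ['/disk1', '/disk2', '/disk3', '/disk1', '/disk3']
```

Note how concurrent writers would still land on different disks most of the time, which is what keeps a single disk from being saturated.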
Hi.
When I specify multiple disks for DFS, does Hadoop distribute the
concurrent writes over the multiple disks?
I mean, to prevent over-utilization of a single disk?
Thanks for any info on the subject.
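For context on "specifying multiple disks for DFS": in the Hadoop versions of that era, the DataNode's storage directories were listed comma-separated in the `dfs.data.dir` property of `hdfs-site.xml`, typically one directory per physical disk. A sketch with example mount points (the paths are illustrative, not prescribed):

```xml
<!-- hdfs-site.xml: example paths, one directory per physical disk -->
<property>
  <name>dfs.data.dir</name>
  <value>/mnt/disk1/hdfs/data,/mnt/disk2/hdfs/data</value>
</property>
```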