On Fri, Mar 25, 2011 at 9:06 PM, Rita <rmorgan...@gmail.com> wrote:

> Using 0.21
>
> When I have a 1TB filesystem (XFS), the datanode detects it
> immediately. When I create 3 identical filesystems, all 3TB are visible
> immediately.
>
> If I create a 6TB filesystem (XFS) and I add it to dfs.data.dir and I
> restart the datanode, "hdfs dfsadmin -report" does not see the new 6TB
> filesystem.
>
> In all of these cases, the datanode does create a 'finalized' file
> structure in the respective directories.
>
>
> My questions are:
> Is there a limitation in the size of dfs.data.dir? What is the largest
> filesystem that can be part of it?
>

I've heard that there is a 4TB limit, but I've never tried to reproduce it.
Given that single disks aren't that large, this suggests you might be running
RAID or a spanned volume rather than the recommended JBOD layout.
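
For reference, the usual JBOD-style configuration lists each disk's mount
point as a separate entry in dfs.data.dir, so the datanode stripes blocks
across smaller independent filesystems instead of one large volume. A
hedged sketch (paths are hypothetical; in 0.21 the property is named
dfs.datanode.data.dir in some builds, dfs.data.dir in others):

```xml
<!-- hdfs-site.xml: one entry per physical disk, comma-separated.
     /data/1 .. /data/3 are example mount points, not real paths. -->
<property>
  <name>dfs.data.dir</name>
  <value>/data/1/dfs/dn,/data/2/dfs/dn,/data/3/dfs/dn</value>
</property>
```

After editing, restart the datanode and check "hdfs dfsadmin -report" to
confirm the configured capacity reflects all listed directories.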


> Could this be a block scanner issue? Is it possible to make my block
> scanning more aggressive?
>

Most likely this is unrelated to block scanning.

-Todd
-- 
Todd Lipcon
Software Engineer, Cloudera
