Rita, another issue I've seen is that when you have many heavily used XFS
filesystems, the Linux kernel will eventually crash. The XFS driver seems to
have problems that only show up with large volumes of data. I will switch to
ext4 soon because of this.

2011/3/29 Rita <rmorgan...@gmail.com>

> Thanks. With ext4 I created two 16TB volumes and both are seen. I think it
> may be an issue with XFS.
>
>
>
> On Mon, Mar 28, 2011 at 3:50 PM, Todd Lipcon <t...@cloudera.com> wrote:
>
>> On Fri, Mar 25, 2011 at 9:06 PM, Rita <rmorgan...@gmail.com> wrote:
>>
>>> Using 0.21
>>>
>>> When I have a 1TB filesystem (XFS), the datanode detects it immediately.
>>> When I create 3 identical filesystems, all 3TB are visible immediately.
>>>
>>> If I create a 6TB filesystem (XFS), add it to dfs.data.dir, and restart
>>> the datanode, "hdfs dfsadmin -report" does not see the new 6TB filesystem.
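>>>
>>> (Roughly the sequence above as a sketch; the device name and mount point
>>> below are made up, the rest is what I described:)
>>>
>>>     mkfs.xfs /dev/sdf                    # hypothetical 6TB device
>>>     mkdir -p /data/6 && mount /dev/sdf /data/6
>>>     # in conf/hdfs-site.xml, append the new directory to dfs.data.dir:
>>>     #   <property>
>>>     #     <name>dfs.data.dir</name>
>>>     #     <value>/data/1/dfs/data,/data/2/dfs/data,/data/6/dfs/data</value>
>>>     #   </property>
>>>     hadoop-daemon.sh stop datanode
>>>     hadoop-daemon.sh start datanode
>>>     hdfs dfsadmin -report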
>>>
>>> In all of these cases the datanode does create a 'finalized' file
>>> structure in the respective directories.
>>>
>>>
>>> My questions are:
>>> Is there a limit on the size of the filesystems in dfs.data.dir? What is
>>> the largest filesystem that can be part of it?
>>>
>>
>> I've heard that there is a 4TB limit, but I've never tried to replicate it.
>> Given that single disks aren't this large, it suggests you might be running
>> RAID or a spanned volume rather than the recommended JBOD layout.
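>>
>> (For illustration, a sketch of what that JBOD layout usually looks like for
>> HDFS; device names and mount points below are examples only:)
>>
>>     # one filesystem per physical disk, no RAID/span underneath
>>     mkdir -p /data/1 /data/2
>>     mkfs.ext4 /dev/sdb && mount /dev/sdb /data/1
>>     mkfs.ext4 /dev/sdc && mount /dev/sdc /data/2
>>     # each mount point is then listed separately in dfs.data.dir, e.g.
>>     #   <value>/data/1/dfs/data,/data/2/dfs/data</value>
>>     df -h /data/1 /data/2    # the per-volume sizes the datanode will see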
>>
>>
>>> Could this be a block scanner issue? Is it possible to make my block
>>> scanning more aggressive?
>>>
>>
>> Most likely this is unrelated to block scanning.
>>
>> -Todd
>>  --
>> Todd Lipcon
>> Software Engineer, Cloudera
>>
>
>
>
> --
> --- Get your facts first, then you can distort them as you please.--
>
