[ https://issues.apache.org/jira/browse/HDFS-3061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eli Collins updated HDFS-3061:
------------------------------

    Summary: Backport HDFS-1487 to branch-1  (was: Cached directory size in 
INodeDirectory can get permanently out of sync with computed size, causing 
quota issues)

Updating the summary to reflect that this patch is the HDFS-1487 patch 
applied to branch-1.
                
> Backport HDFS-1487 to branch-1
> ------------------------------
>
>                 Key: HDFS-3061
>                 URL: https://issues.apache.org/jira/browse/HDFS-3061
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: name-node
>    Affects Versions: 0.20.203.0, 0.23.0, 1.0.2
>         Environment: 0.20.203 with HDFS-1377 and HDFS-2053 patches applied
>            Reporter: Alex Holmes
>            Assignee: Kihwal Lee
>            Priority: Blocker
>             Fix For: 1.0.3
>
>         Attachments: QuotaTestSimple.java, hdfs-3061-branch-1.patch
>
>
> It appears there is a condition under which an HDFS directory with a space 
> quota set can reach a state where the cached size for the directory 
> permanently differs from the computed value.  When this happens the 
> following command:
> {code}
> hadoop fs -count -q /tmp/quota-test
> {code}
> results in the following output in the NameNode logs:
> {code}
> WARN org.apache.hadoop.hdfs.server.namenode.NameNode: Inconsistent diskspace 
> for directory quota-test. Cached: 6000 Computed: 6072
> {code}
> I've observed both transient and persistent instances of this.  In the 
> transient instances the warning eventually goes away, but in the 
> persistent instances every invocation of the {{fs -count -q}} command 
> yields the warning above.
> I've seen instances where the actual disk usage of a directory is 25% of the 
> cached value in INodeDirectory, which creates problems since the quota code 
> uses this cached value to determine whether block write requests are 
> permitted.
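> The same check can be triggered programmatically: {{fs -count -q}} goes 
> through {{FileSystem#getContentSummary}}, which is where the NameNode 
> logs the warning above for a directory with a quota.  A minimal sketch 
> (class name hypothetical; assumes the branch-1 FileSystem API):
> {code}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.ContentSummary;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> public class QuotaCheck {
>   public static void main(String[] args) throws Exception {
>     FileSystem fs = FileSystem.get(new Configuration());
>     // Fetching the content summary of a directory with a space quota
>     // makes the NameNode compare its cached size against the computed
>     // size; a mismatch produces the "Inconsistent diskspace" WARN.
>     ContentSummary cs = fs.getContentSummary(new Path("/tmp/quota-test"));
>     System.out.println("space quota:    " + cs.getSpaceQuota());
>     System.out.println("space consumed: " + cs.getSpaceConsumed());
>   }
> }
> {code}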
> This isn't easy to reproduce; I am able to (inconsistently) get HDFS into 
> this state with a simple program which (see the sketch after this list):
> #  Writes files into HDFS
> #  When a DSQuotaExceededException is encountered, removes all the files 
> created in step 1
> #  Repeats step 1
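> A minimal sketch of such a program (the attached QuotaTestSimple.java is 
> the actual test; the class and file names here are illustrative, and a 
> space quota is assumed to already be set on /tmp/quota-test):
> {code}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FSDataOutputStream;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.hdfs.protocol.DSQuotaExceededException;
>
> public class QuotaRepro {
>   public static void main(String[] args) throws Exception {
>     FileSystem fs = FileSystem.get(new Configuration());
>     Path dir = new Path("/tmp/quota-test");  // space quota already set
>     fs.mkdirs(dir);
>     byte[] data = new byte[1024];
>     for (int round = 0; round < 100; round++) {
>       int created = 0;
>       try {
>         // Step 1: keep writing files until the space quota is hit.
>         while (true) {
>           FSDataOutputStream out = fs.create(new Path(dir, "f" + created));
>           out.write(data);
>           out.close();
>           created++;
>         }
>       } catch (DSQuotaExceededException e) {
>         // Step 2: on quota exhaustion, remove every file written above
>         // (including any partial last file), then repeat step 1.
>         for (int i = 0; i <= created; i++) {
>           fs.delete(new Path(dir, "f" + i), false);
>         }
>       }
>     }
>   }
> }
> {code}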
> I'm going to try to come up with a more repeatable test case that 
> reproduces this issue.
