[ 
https://issues.apache.org/jira/browse/HDFS-16698?focusedWorklogId=795695&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-795695
 ]

ASF GitHub Bot logged work on HDFS-16698:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 27/Jul/22 14:54
            Start Date: 27/Jul/22 14:54
    Worklog Time Spent: 10m 
      Work Description: ZanderXu opened a new pull request, #4644:
URL: https://github.com/apache/hadoop/pull/4644

   ### Description of PR
   In our production environment, we occasionally encounter job failures caused by 
MaxDirectoryItemsExceededException.
   ```
   org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.FSLimitException$MaxDirectoryItemsExceededException):
    The directory item limit of /user/XXX/.sparkStaging is exceeded: limit=1048576 items=1048576
   ```
   
   To avoid this, we add a metric that detects a possible 
MaxDirectoryItemsExceededException early, so that we can react in time and 
avoid job failures.
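   
   As a rough illustration only (not necessarily how this PR implements it), such a metric could be exposed through the Hadoop metrics2 library and bumped whenever a directory's child count crosses a warning fraction of dfs.namenode.fs-limits.max-directory-items. The class, field, and threshold names below are hypothetical.
   ```java
   import org.apache.hadoop.metrics2.annotation.Metric;
   import org.apache.hadoop.metrics2.annotation.Metrics;
   import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
   import org.apache.hadoop.metrics2.lib.MutableCounterLong;
   
   // Hypothetical metrics source; names and the 90% threshold are illustrative only.
   @Metrics(name = "DirectoryItemLimitMetrics",
       about = "Directory item limit metrics", context = "dfs")
   public class DirectoryItemLimitMetrics {
   
     // Counter bumped each time a directory is found close to its item limit
     // (dfs.namenode.fs-limits.max-directory-items, default 1048576).
     @Metric("Times a directory was found close to its item limit")
     private MutableCounterLong dirsNearItemLimit;
   
     public static DirectoryItemLimitMetrics create() {
       return DefaultMetricsSystem.instance()
           .register(new DirectoryItemLimitMetrics());
     }
   
     /** Called from the path that verifies the per-directory item limit. */
     public void checkNearLimit(long childCount, long maxDirItems) {
       // Fire before the hard limit is actually reached, so operators
       // have time to clean up or raise the limit before jobs fail.
       if (maxDirItems > 0 && childCount >= maxDirItems * 0.9) {
         dirsNearItemLimit.incr();
       }
     }
   }
   ```
   Operators could then alert on a counter like `dirsNearItemLimit` and act before the hard limit is hit.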
   




Issue Time Tracking
-------------------

            Worklog Id:     (was: 795695)
    Remaining Estimate: 0h
            Time Spent: 10m

> Add a metric to sense possible MaxDirectoryItemsExceededException in time.
> --------------------------------------------------------------------------
>
>                 Key: HDFS-16698
>                 URL: https://issues.apache.org/jira/browse/HDFS-16698
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: ZanderXu
>            Assignee: ZanderXu
>            Priority: Major
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> In our production environment, we occasionally encounter job failures caused 
> by MaxDirectoryItemsExceededException.
> {code:java}
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.FSLimitException$MaxDirectoryItemsExceededException):
>  The directory item limit of /user/XXX/.sparkStaging is exceeded: limit=1048576 items=1048576
> {code}
> To avoid this, we add a metric that detects a possible 
> MaxDirectoryItemsExceededException early, so that we can react in time and 
> avoid job failures.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
