[ https://issues.apache.org/jira/browse/HDFS-16698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

ASF GitHub Bot updated HDFS-16698:
----------------------------------
    Labels: pull-request-available  (was: )

> Add a metric to sense possible MaxDirectoryItemsExceededException in time.
> --------------------------------------------------------------------------
>
>                 Key: HDFS-16698
>                 URL: https://issues.apache.org/jira/browse/HDFS-16698
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: ZanderXu
>            Assignee: ZanderXu
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> In our production environment, we occasionally encounter job failures caused 
> by MaxDirectoryItemsExceededException.
> {code:java}
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.FSLimitException$MaxDirectoryItemsExceededException):
> The directory item limit of /user/XXX/.sparkStaging is exceeded: limit=1048576 items=1048576
> {code}
> To avoid this, we propose adding a metric that detects a possible 
> MaxDirectoryItemsExceededException early, so that the directory can be handled 
> in time and job failures avoided.
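> As a rough sketch only (the actual change is in the linked pull request and is 
> not reproduced here), such a metric could be exposed through Hadoop Metrics2 and 
> incremented whenever a directory's child count approaches the configured 
> dfs.namenode.fs-limits.max-directory-items limit. All class, metric, and method 
> names below are placeholders, not the names used in the patch.
> {code:java}
> import org.apache.hadoop.metrics2.annotation.Metric;
> import org.apache.hadoop.metrics2.annotation.Metrics;
> import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
> import org.apache.hadoop.metrics2.lib.MutableCounterLong;
> 
> // Hypothetical metrics source; names are placeholders, not the ones in the patch.
> @Metrics(name = "DirectoryItemLimit",
>     about = "Directories nearing the item limit", context = "dfs")
> public class DirectoryItemLimitMetrics {
> 
>   // Incremented each time a directory is observed close to
>   // dfs.namenode.fs-limits.max-directory-items (the limit in the error above).
>   @Metric("Times a directory approached the max-directory-items limit")
>   private MutableCounterLong dirsNearItemLimit;
> 
>   public static DirectoryItemLimitMetrics create() {
>     return DefaultMetricsSystem.instance().register(
>         "DirectoryItemLimit", "Directories nearing the item limit",
>         new DirectoryItemLimitMetrics());
>   }
> 
>   // Hypothetical hook for a code path that already knows the child count and the
>   // configured limit, e.g. when a child is added to a directory.
>   public void maybeCount(long childCount, long maxDirItems) {
>     // Count once the directory is within 90% of the limit; the threshold is arbitrary here.
>     if (maxDirItems > 0 && childCount >= maxDirItems * 0.9) {
>       dirsNearItemLimit.incr();
>     }
>   }
> }
> {code}
> A counter registered this way would appear under the dfs metrics context, so it 
> could be monitored and alerted on before the limit is actually reached.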



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
