[
https://issues.apache.org/jira/browse/HDFS-1112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12868551#action_12868551
]
Todd Lipcon commented on HDFS-1112:
-----------------------------------
Some comments:
- the timeToSync() method name implies a time-based condition; perhaps
better to rename it to something like isBufferFull(), since that's the policy
currently implemented, or more generically shouldForceSync()
- why the notifyAll() calls inside logSync(), since that's already done inside
doneWithAutoSyncScheduling()?
- the changes to add a thrown IOException seem messy - isn't our contract that
failing to append to the edit log is always a fatal error for the NN, and thus
a checked exception won't be thrown?
- the patch doesn't apply to trunk anymore
Aside from that, I think this looks good. I edited TestFSEditLogRace to run for
a couple of minutes with 16 threads making edits simultaneously, and saw no
deadlocks or other problems.
> Edit log buffer should not grow unboundedly
> -------------------------------------------
>
> Key: HDFS-1112
> URL: https://issues.apache.org/jira/browse/HDFS-1112
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: name-node
> Affects Versions: 0.22.0
> Reporter: Hairong Kuang
> Assignee: Hairong Kuang
> Fix For: 0.22.0
>
> Attachments: editLogBuf.patch
>
>
> Currently HDFS does not impose an upper limit on the edit log buffer. When a
> large number of open operations come in with access-time updates on, since
> open does not call sync automatically, the buffer may grow very large,
> causing a memory leak and full GCs in extreme cases, as described in
> HDFS-1104.
> The edit log buffer should be automatically flushed when it becomes full.