[
https://issues.apache.org/jira/browse/HDFS-1112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12869432#action_12869432
]
Hairong, the patch looks good. Two more things
# You should not remove the large try {} finally section. This prevents from
other runtime exception blocking syncing by not resetting isSyncRunning flag.
This was discussed in HDFS-119.
# Regarding point 2 of my previous comment: I understand that it is impossible
to merge the two variables, but do we still need to add the condition on
{{isAutoSyncScheduled}} in {{waitForSyncToFinish()}}?
{code}
- while (isSyncRunning) {
+ while (isSyncRunning && isAutoSyncScheduled) {
{code}
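For reference, here is a minimal sketch of the {{logSync()}} structure that point 1 is protecting. This is simplified pseudocode, not the actual FSEditLog code or the attached patch; {{flushAndSyncStreams()}} is a hypothetical stand-in for the real stream-flushing loop.
{code}
// Simplified sketch, not the actual FSEditLog code: the finally block is what
// point 1 above is protecting. Without it, a runtime exception thrown while
// flushing would leave isSyncRunning == true forever, and every later caller
// of waitForSyncToFinish() would block.
public void logSync() throws IOException {
  synchronized (this) {
    waitForSyncToFinish();     // wait until no other thread is in the middle of a sync
    isSyncRunning = true;      // claim the sync
  }
  try {
    flushAndSyncStreams();     // hypothetical stand-in for flushing the edit log streams
  } finally {
    synchronized (this) {
      isSyncRunning = false;   // always reset the flag, even if the flush threw
      notifyAll();             // wake threads blocked in waitForSyncToFinish()
    }
  }
}
{code}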
> Edit log buffer should not grow unboundedly
> -------------------------------------------
>
> Key: HDFS-1112
> URL: https://issues.apache.org/jira/browse/HDFS-1112
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: name-node
> Affects Versions: 0.22.0
> Reporter: Hairong Kuang
> Assignee: Hairong Kuang
> Fix For: 0.22.0
>
> Attachments: editLogBuf.patch, editLogBuf1.patch, editLogBuf2.patch
>
>
> Currently HDFS does not impose an upper limit on the edit log buffer. When a
> large number of open operations come in with access time updates enabled, and
> since open does not call sync automatically, the buffer can grow to a large
> size, which can cause a memory leak and full GCs in extreme cases, as
> described in HDFS-1104.
> The edit log buffer should be automatically flushed when it becomes full.
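For context, a rough sketch of the auto-flush behavior asked for in the description is shown below. The {{appendToBuffer()}}, {{bufferedBytes()}} and {{maxBufferSize}} names are hypothetical, and the attached patch may schedule the sync differently through {{isAutoSyncScheduled}}; this is only an illustration of the idea, not the patch itself.
{code}
// Rough sketch of the auto-flush idea, not the attached patch: after appending a
// record, trigger a sync if the in-memory buffer has grown past a configured limit,
// so a burst of open-with-atime updates cannot grow it without bound.
void logEdit(byte op, Writable... writables) {
  boolean doSync = false;
  synchronized (this) {
    appendToBuffer(op, writables);           // hypothetical append to the in-memory buffer
    if (bufferedBytes() >= maxBufferSize) {  // hypothetical size accessor and limit
      isAutoSyncScheduled = true;            // mark that an automatic sync is pending
      doSync = true;
    }
  }
  if (doSync) {
    logSync();                               // flush the buffered edits to the log streams
  }
}
{code}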