[ https://issues.apache.org/jira/browse/FLUME-2245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13999068#comment-13999068 ]

Hudson commented on FLUME-2245:
-------------------------------

SUCCESS: Integrated in flume-trunk #635 (See 
[https://builds.apache.org/job/flume-trunk/635/])
FLUME-2245. Pre-close flush failure can cause HDFS Sinks to not process events. 
(hshreedharan: 
http://git-wip-us.apache.org/repos/asf/flume/repo?p=flume.git&a=commit&h=33cdcf0d4e85e68e6df9e1ca4be729889d480246)
* flume-ng-sinks/flume-hdfs-sink/src/main/java/org/apache/flume/sink/hdfs/BucketWriter.java


> HDFS files with errors unable to close
> --------------------------------------
>
>                 Key: FLUME-2245
>                 URL: https://issues.apache.org/jira/browse/FLUME-2245
>             Project: Flume
>          Issue Type: Bug
>            Reporter: Juhani Connolly
>            Assignee: Brock Noland
>         Attachments: FLUME-2245.patch, flume.log.1133, flume.log.file
>
>
> This is running on a snapshot of Flume-1.5 with the git hash 
> 99db32ccd163daf9d7685f0e8485941701e1133d.
> When a datanode goes unresponsive for a significant amount of time (for 
> example, during a long GC pause), an append failure occurs, followed by 
> repeated timeouts in the log and a failure to close the stream. The 
> relevant section of the logs is attached, starting where the errors first 
> appear.
> The same log repeats periodically, consistently running into a 
> TimeoutException.
> Restarting Flume (or presumably just the HDFSSink) resolves the issue.
> The probable cause is discussed in the comments.
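
For context, the fix referenced above changes BucketWriter so that a failed
pre-close flush no longer leaves the writer wedged. Below is a minimal Java
sketch of that close-despite-flush-failure pattern; it is not the actual
BucketWriter code, and doFlush, doClose, and scheduleCloseRetry are
hypothetical placeholders for the real flush, close, and retry logic.

import java.io.IOException;

// Hypothetical sketch of the pattern applied by the FLUME-2245 fix;
// not the real BucketWriter. doFlush/doClose/scheduleCloseRetry are
// illustrative placeholders.
class BucketWriterSketch {
    synchronized void closeWriter() {
        try {
            // The pre-close flush can time out when a datanode stalls
            // (for example, during a long GC pause on the datanode).
            doFlush();
        } catch (IOException e) {
            System.err.println("Pre-close flush failed; closing anyway: " + e);
        }
        try {
            // Attempt the close regardless of the flush outcome, so the
            // writer is not stuck retrying the same broken stream forever.
            doClose();
        } catch (IOException e) {
            // Schedule a background retry instead of blocking event
            // processing on this writer.
            scheduleCloseRetry();
        }
    }

    void doFlush() throws IOException { /* flush the HDFS stream */ }
    void doClose() throws IOException { /* close the HDFS stream */ }
    void scheduleCloseRetry() { /* re-attempt the close on a timer */ }
}

The key point of the pattern is that the flush failure is caught and logged
rather than propagated, so the subsequent close (and, if that also fails, its
scheduled retry) can still run and the sink can continue processing events.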



--
This message was sent by Atlassian JIRA
(v6.2#6252)
