[ 
https://issues.apache.org/jira/browse/FLUME-2951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dengkai updated FLUME-2951:
---------------------------
    Attachment:     (was: FLUME-2951.patch)

> Exec Source generates massive log files in File Channel data dirs
> -----------------------------------------------------------------
>
>                 Key: FLUME-2951
>                 URL: https://issues.apache.org/jira/browse/FLUME-2951
>             Project: Flume
>          Issue Type: Bug
>          Components: Sinks+Sources
>    Affects Versions: v1.6.0
>            Reporter: dengkai
>            Assignee: dengkai
>
> When the file channel is full, the exec source keeps getting a file channel 
> exception. Each time, the main thread starts to roll the log, and then the 
> timedFlushService flushes the events again, so the following exception is thrown:
> 07 Jul 2016 17:09:38,789 ERROR [timedFlushExecService63-0] 
> (org.apache.flume.source.ExecSource$ExecRunnable$1.run:328)  - Exception 
> occured when processing event batch
> org.apache.flume.ChannelException: Commit failed due to IO error 
> [channel=channel-4]
>         at 
> org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doRollback(FileChannel.java:621)
>         at 
> org.apache.flume.channel.BasicTransactionSemantics.rollback(BasicTransactionSemantics.java:168)
>         at 
> org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:194)
>         at 
> org.apache.flume.source.ExecSource$ExecRunnable.flushEventBatch(ExecSource.java:382)
>         at 
> org.apache.flume.source.ExecSource$ExecRunnable.access$100(ExecSource.java:255)
>         at 
> org.apache.flume.source.ExecSource$ExecRunnable$1.run(ExecSource.java:324)
>         at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>         at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
>         at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>         at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>         at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: java.nio.channels.ClosedByInterruptException
>         at 
> java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
>         at sun.nio.ch.FileChannelImpl.force(FileChannelImpl.java:380)
>         at 
> org.apache.flume.channel.file.LogFileV3.writeDelimitedTo(LogFileV3.java:148)
>         at 
> org.apache.flume.channel.file.LogFileV3$Writer.<init>(LogFileV3.java:209)
>         at 
> org.apache.flume.channel.file.LogFileFactory.getWriter(LogFileFactory.java:77)
>         at org.apache.flume.channel.file.Log.roll(Log.java:964)
>         at org.apache.flume.channel.file.Log.roll(Log.java:933)
>         at org.apache.flume.channel.file.Log.rollback(Log.java:740)
>         at 
> org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doRollback(FileChannel.java:619)
>         ... 12 more
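>
> The "Caused by" above is java.nio.channels.ClosedByInterruptException: FileChannel 
> is an interruptible channel, so if the thread running the timedFlushService task is 
> interrupted while LogFileV3 is calling force(), the JVM closes the underlying 
> channel and the rollback itself fails, forcing yet another roll. The standalone 
> sketch below (plain JDK code, not Flume internals; class name and temp file are 
> made up for illustration) shows that JDK behaviour:
>
> import java.io.IOException;
> import java.nio.ByteBuffer;
> import java.nio.channels.FileChannel;
> import java.nio.file.Files;
> import java.nio.file.Path;
> import java.nio.file.StandardOpenOption;
>
> public class InterruptedForceDemo {
>     public static void main(String[] args) throws Exception {
>         Path tmp = Files.createTempFile("flume-2951-demo", ".log");
>         try (FileChannel ch = FileChannel.open(tmp, StandardOpenOption.WRITE)) {
>             ch.write(ByteBuffer.wrap("event".getBytes()));
>             // Simulate the interrupt delivered to the flush task's thread.
>             Thread.currentThread().interrupt();
>             // force() sees the pending interrupt: the JVM closes the channel
>             // and throws java.nio.channels.ClosedByInterruptException.
>             ch.force(true);
>         } catch (IOException e) {
>             // Prints ClosedByInterruptException; the channel is now closed,
>             // so the file channel has no choice but to roll a new log file.
>             System.out.println("got: " + e);
>         } finally {
>             Thread.interrupted();          // clear the flag we set above
>             Files.deleteIfExists(tmp);
>         }
>     }
> }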
> When this exception happens, the file channel starts another roll, and the ID of 
> the log file keeps increasing.
> If the exec source is configured with restart = true and the file channel is still 
> full, the problem never stops and generates a huge number of log-<ID> files (about 
> 1.0 MB each) and log-<ID>.meta files (about 47 bytes each) in the channel's data 
> dir. In my test I ended up with thousands of files and got a "too many files" error 
> when the exec source started the tail -F command.
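>
> For reference, a minimal configuration that reproduces the loop looks roughly like 
> the sketch below (agent/component names, paths and capacities are made-up examples, 
> and the avro sink deliberately points at an unreachable host so the channel fills 
> up): a tail -F exec source with restart = true feeding a small file channel.
>
> a1.sources = r1
> a1.channels = c1
> a1.sinks = k1
>
> # exec source that restarts tail -F whenever the command dies
> a1.sources.r1.type = exec
> a1.sources.r1.command = tail -F /var/log/app/app.log
> a1.sources.r1.restart = true
> a1.sources.r1.channels = c1
>
> # small file channel so it fills up quickly and commits start to fail
> a1.channels.c1.type = file
> a1.channels.c1.checkpointDir = /data/flume/channel-4/checkpoint
> a1.channels.c1.dataDirs = /data/flume/channel-4/data
> a1.channels.c1.capacity = 10000
> a1.channels.c1.transactionCapacity = 1000
>
> # sink pointed at a host that is down, so events back up in the channel
> a1.sinks.k1.type = avro
> a1.sinks.k1.hostname = unreachable.example.com
> a1.sinks.k1.port = 4141
> a1.sinks.k1.channel = c1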



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
