[jira] [Commented] (FLUME-2307) Remove Log writetimeout
[ https://issues.apache.org/jira/browse/FLUME-2307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13897130#comment-13897130 ]

Hudson commented on FLUME-2307:
-------------------------------

SUCCESS: Integrated in flume-trunk #549 (See [https://builds.apache.org/job/flume-trunk/549/])
FLUME-2307. Remove Log writetimeout (jarcec: http://git-wip-us.apache.org/repos/asf/flume/repo?p=flume.git&a=commit&h=b4ddd5829897f758f869a5fc3b08dcbf4b55156a)
* flume-ng-channels/flume-file-channel/src/main/java/org/apache/flume/channel/file/FileChannel.java
* flume-ng-channels/flume-file-channel/src/main/java/org/apache/flume/channel/file/Log.java
* flume-ng-channels/flume-file-channel/src/main/java/org/apache/flume/channel/file/FileChannelConfiguration.java
* flume-ng-doc/sphinx/FlumeUserGuide.rst

> Remove Log writetimeout
> -----------------------
>
>                 Key: FLUME-2307
>                 URL: https://issues.apache.org/jira/browse/FLUME-2307
>             Project: Flume
>          Issue Type: Bug
>          Components: Channel
>    Affects Versions: v1.4.0
>            Reporter: Steve Zesch
>            Assignee: Hari Shreedharan
>             Fix For: v1.5.0
>
>         Attachments: FLUME-2307-1.patch, FLUME-2307.patch
>
>
> I've observed Flume failing to clean up old log data in FileChannels. The
> amount of old log data can range anywhere from tens to hundreds of GB. I was
> able to confirm that the channels were in fact empty. This behavior always
> occurs after lock timeouts when attempting to put, take, rollback, or commit
> to a FileChannel. Once the timeout occurs, Flume stops cleaning up the old
> files. I was able to confirm that the Log's writeCheckpoint method was still
> being called and successfully obtaining a lock from tryLockExclusive(), but I
> was not able to confirm removeOldLogs being called. The application log did
> not include "Removing old file: log-xyz" for the old files which the Log
> class would output if they were correctly being removed. I suspect the lock
> timeouts were due to high I/O load at the time.
> Some stack traces:
> {code}
> org.apache.flume.ChannelException: Failed to obtain lock for writing to the log. Try increasing the log write timeout value. [channel=fileChannel]
>     at org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doPut(FileChannel.java:478)
>     at org.apache.flume.channel.BasicTransactionSemantics.put(BasicTransactionSemantics.java:93)
>     at org.apache.flume.channel.BasicChannelSemantics.put(BasicChannelSemantics.java:80)
>     at org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:189)
> org.apache.flume.ChannelException: Failed to obtain lock for writing to the log. Try increasing the log write timeout value. [channel=fileChannel]
>     at org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doCommit(FileChannel.java:594)
>     at org.apache.flume.channel.BasicTransactionSemantics.commit(BasicTransactionSemantics.java:151)
>     at dataxu.flume.plugins.avro.AsyncAvroSink.process(AsyncAvroSink.java:548)
>     at dataxu.flume.plugins.ClassLoaderFlumeSink.process(ClassLoaderFlumeSink.java:33)
>     at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
>     at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
>     at java.lang.Thread.run(Thread.java:619)
> org.apache.flume.ChannelException: Failed to obtain lock for writing to the log. Try increasing the log write timeout value. [channel=fileChannel]
>     at org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doRollback(FileChannel.java:621)
>     at org.apache.flume.channel.BasicTransactionSemantics.rollback(BasicTransactionSemantics.java:168)
>     at org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:194)
>     at dataxu.flume.plugins.avro.AvroSource.appendBatch(AvroSource.java:209)
>     at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.avro.ipc.specific.SpecificResponder.respond(SpecificResponder.java:91)
>     at org.apache.avro.ipc.Responder.respond(Responder.java:151)
>     at org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.messageReceived(NettyServer.java:188)
>     at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75)
>     at org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:173)
>     at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
>     at org.jboss.netty.channel.DefaultChannelPipeline
> {code}
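[Editor's note] The transactions above fail because the file channel takes its shared log lock with a bounded tryLock and gives up on timeout. A minimal sketch of that pattern, and of the blocking behavior the patch moves to, follows; the class and constant names (LogSketch, WRITE_TIMEOUT_SECS) are illustrative, not the actual FileChannel/Log source.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative sketch only: LogSketch and WRITE_TIMEOUT_SECS are invented
// names, not the real Flume FileChannel/Log implementation.
public class LogSketch {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private static final long WRITE_TIMEOUT_SECS = 1;
    private int events = 0;

    // Pre-FLUME-2307 style: bail out if the lock cannot be taken in time.
    // Under heavy I/O a long checkpoint holds the write lock, this returns
    // false, and the caller surfaces it as a ChannelException.
    public boolean putWithTimeout(byte[] event) {
        boolean locked;
        try {
            locked = lock.readLock().tryLock(WRITE_TIMEOUT_SECS, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
        if (!locked) {
            return false;
        }
        try {
            events++; // stand-in for appending the event to the data files
            return true;
        } finally {
            lock.readLock().unlock();
        }
    }

    // Post-FLUME-2307 style: simply block until the lock is free.
    public void putBlocking(byte[] event) {
        lock.readLock().lock();
        try {
            events++;
        } finally {
            lock.readLock().unlock();
        }
    }

    public int size() {
        return events;
    }
}
```

With no contention, both paths succeed; the difference only shows when the write lock is held longer than the timeout.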
[jira] [Commented] (FLUME-2225) Elasticsearch Sink for ES HTTP API
[ https://issues.apache.org/jira/browse/FLUME-2225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13897068#comment-13897068 ]

Pawel commented on FLUME-2225:
------------------------------

I see. So do you have any suggestion what to do in that case (the Jest library uses Guava 14)?

> Elasticsearch Sink for ES HTTP API
> ----------------------------------
>
>                 Key: FLUME-2225
>                 URL: https://issues.apache.org/jira/browse/FLUME-2225
>             Project: Flume
>          Issue Type: New Feature
>    Affects Versions: v1.5.0
>            Reporter: Otis Gospodnetic
>             Fix For: v1.4.1, v1.5.0
>
>         Attachments: FLUME-2225-0.patch, FLUME-2225-1.patch
>
>
> The existing ElasticSearchSink uses the ES TransportClient. As such, one cannot use
> the ES HTTP API, which is sometimes easier and doesn't have issues around
> client and server/cluster components using incompatible versions - currently,
> both client and server/cluster need to be on the same version.
> See http://search-hadoop.com/m/k76HH9Te68/otis&subj=Elasticsearch+sink+that+uses+HTTP+API

--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
[jira] [Commented] (FLUME-2225) Elasticsearch Sink for ES HTTP API
[ https://issues.apache.org/jira/browse/FLUME-2225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13897061#comment-13897061 ]

Hari Shreedharan commented on FLUME-2225:
-----------------------------------------

[~prog] - I am wary of bumping Guava from 11 to 14, since Guava breaks compatibility between releases. Hadoop currently depends on Guava 11.x, and we should stay on that version to make sure we don't end up breaking some of the current components.
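[Editor's note] One common way around this kind of dependency clash - an assumption on the editor's part, not something settled in this thread - is to shade and relocate the newer Guava inside the sink's own artifact, leaving Hadoop's Guava 11 untouched. A sketch with the Maven Shade plugin; the shadedPattern package name is made up:

```xml
<!-- Hypothetical pom.xml fragment: relocate Guava 14 (pulled in by Jest)
     into a private package so it cannot collide with Hadoop's Guava 11. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals>
        <goal>shade</goal>
      </goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>com.google.common</pattern>
            <shadedPattern>org.apache.flume.sink.elasticsearch.shaded.com.google.common</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

The relocation rewrites the sink's bytecode references as well as the bundled classes, so both Guava versions can coexist at runtime.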
[jira] [Commented] (FLUME-2225) Elasticsearch Sink for ES HTTP API
[ https://issues.apache.org/jira/browse/FLUME-2225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13897053#comment-13897053 ]

Otis Gospodnetic commented on FLUME-2225:
-----------------------------------------

Looks like [~ejsarge] is happy, judging from the comments on the review board. How do we go from patch + review to commit?
[jira] [Commented] (FLUME-2307) Remove Log writetimeout
[ https://issues.apache.org/jira/browse/FLUME-2307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13897035#comment-13897035 ]

ASF subversion and git services commented on FLUME-2307:
--------------------------------------------------------

Commit 0cb6ce69140f137af947de4e0828ff73a623f042 in branch refs/heads/flume-1.5 from [~jarcec]
[ https://git-wip-us.apache.org/repos/asf?p=flume.git;h=0cb6ce6 ]
FLUME-2307. Remove Log writetimeout (Hari Shreedharan via Jarek Jarcec Cecho)

> Remove Log writetimeout
> -----------------------
>
>                 Key: FLUME-2307
>                 URL: https://issues.apache.org/jira/browse/FLUME-2307
>             Project: Flume
>          Issue Type: Bug
>          Components: Channel
>    Affects Versions: v1.4.0
>            Reporter: Steve Zesch
>            Assignee: Hari Shreedharan
>             Fix For: v1.5.0
>
>         Attachments: FLUME-2307-1.patch, FLUME-2307.patch
[jira] [Commented] (FLUME-2307) Remove Log writetimeout
[ https://issues.apache.org/jira/browse/FLUME-2307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13897034#comment-13897034 ]

ASF subversion and git services commented on FLUME-2307:
--------------------------------------------------------

Commit b4ddd5829897f758f869a5fc3b08dcbf4b55156a in branch refs/heads/trunk from [~jarcec]
[ https://git-wip-us.apache.org/repos/asf?p=flume.git;h=b4ddd58 ]
FLUME-2307. Remove Log writetimeout (Hari Shreedharan via Jarek Jarcec Cecho)

> Remove Log writetimeout
> -----------------------
>
>                 Key: FLUME-2307
>                 URL: https://issues.apache.org/jira/browse/FLUME-2307
>             Project: Flume
>          Issue Type: Bug
>          Components: Channel
>    Affects Versions: v1.4.0
>            Reporter: Steve Zesch
>            Assignee: Hari Shreedharan
>
>         Attachments: FLUME-2307-1.patch, FLUME-2307.patch
Re: Review Request 17915: FLUME-2307. Remove Log writeTimeout
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/17915/#review34101
-----------------------------------------------------------

Ship it!

- Jarek Cecho

On Feb. 10, 2014, 8 p.m., Hari Shreedharan wrote:
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/17915/
> -----------------------------------------------------------
>
> (Updated Feb. 10, 2014, 8 p.m.)
>
> Review request for Flume.
>
> Bugs: FLUME-2307
>     https://issues.apache.org/jira/browse/FLUME-2307
>
> Repository: flume-git
>
> Description
> -----------
> Remove Log writeTimeout.
>
> Diffs
> -----
>   flume-ng-channels/flume-file-channel/src/main/java/org/apache/flume/channel/file/FileChannel.java 2cd7f03
>   flume-ng-channels/flume-file-channel/src/main/java/org/apache/flume/channel/file/FileChannelConfiguration.java 10ca11f
>   flume-ng-channels/flume-file-channel/src/main/java/org/apache/flume/channel/file/Log.java 70106cb
>   flume-ng-doc/sphinx/FlumeUserGuide.rst d120a74
>
> Diff: https://reviews.apache.org/r/17915/diff/
>
> Testing
> -------
> All current tests pass.
>
> Thanks,
> Hari Shreedharan
[jira] [Updated] (FLUME-2307) Remove Log writetimeout
[ https://issues.apache.org/jira/browse/FLUME-2307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jarek Jarcec Cecho updated FLUME-2307:
--------------------------------------
    Description:
I've observed Flume failing to clean up old log data in FileChannels. The amount of old log data can range anywhere from tens to hundreds of GB. I was able to confirm that the channels were in fact empty. This behavior always occurs after lock timeouts when attempting to put, take, rollback, or commit to a FileChannel. Once the timeout occurs, Flume stops cleaning up the old files. I was able to confirm that the Log's writeCheckpoint method was still being called and successfully obtaining a lock from tryLockExclusive(), but I was not able to confirm removeOldLogs being called. The application log did not include "Removing old file: log-xyz" for the old files which the Log class would output if they were correctly being removed. I suspect the lock timeouts were due to high I/O load at the time.
[jira] [Commented] (FLUME-2307) Remove Log writetimeout
[ https://issues.apache.org/jira/browse/FLUME-2307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13896990#comment-13896990 ]

Brock Noland commented on FLUME-2307:
-------------------------------------

+1. I cannot test/commit as I am on an airplane.

> Remove Log writetimeout
> -----------------------
>
>                 Key: FLUME-2307
>                 URL: https://issues.apache.org/jira/browse/FLUME-2307
>             Project: Flume
>          Issue Type: Bug
>          Components: Channel
>    Affects Versions: v1.4.0
>            Reporter: Steve Zesch
>            Assignee: Hari Shreedharan
>
>         Attachments: FLUME-2307-1.patch, FLUME-2307.patch
Re: Review Request 17883: FLUME-2318 Handling empty files in spooling directory
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/17883/
-----------------------------------------------------------

(Updated Feb. 10, 2014, 8:30 p.m.)

Review request for Flume.

Summary (updated)
-----------------
FLUME-2318 Handling empty files in spooling directory

Bugs: FLUME-2318
    https://issues.apache.org/jira/browse/FLUME-2318

Repository: flume-git

Description
-----------
The Spooling Directory Source should be able to create events for empty files. Currently no event is delivered to the channel for empty files.

Diffs
-----
  flume-ng-core/src/main/java/org/apache/flume/client/avro/ReliableSpoolingFileEventReader.java a88ed6e
  flume-ng-core/src/test/java/org/apache/flume/source/TestSpoolDirectorySource.java 503ab4d

Diff: https://reviews.apache.org/r/17883/diff/

Testing
-------
Spooling directory with all empty files. Spooling directory with some empty files.

Thanks,
Muhammad Ehsan ul Haque
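[Editor's note] As a rough illustration of the behavior being requested - a hypothetical helper, not the actual ReliableSpoolingFileEventReader patch - an empty file yields exactly one zero-byte event body, so the file still passes through the channel and can be marked as completed:

```java
import java.io.File;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical sketch of FLUME-2318's requested behavior, not the actual
// patch: an empty file produces one empty event body instead of none.
public class EmptyFileSketch {
    public static List<byte[]> eventsFor(File f) throws IOException {
        if (f.length() == 0) {
            // One zero-byte event so the empty file is still acknowledged.
            return Collections.singletonList(new byte[0]);
        }
        List<byte[]> bodies = new ArrayList<>();
        for (String line : Files.readAllLines(f.toPath(), StandardCharsets.UTF_8)) {
            bodies.add(line.getBytes(StandardCharsets.UTF_8)); // one event per line
        }
        return bodies;
    }
}
```

The alternative design (skip empty files silently) loses the signal that the file was processed, which is what the review request is fixing.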
[jira] [Updated] (FLUME-2319) Incorrect property name of Ganglia Reporting documentation
[ https://issues.apache.org/jira/browse/FLUME-2319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Satoshi Iijima updated FLUME-2319:
----------------------------------
    Attachment: FLUME-2319.patch

> Incorrect property name of Ganglia Reporting documentation
> ----------------------------------------------------------
>
>                 Key: FLUME-2319
>                 URL: https://issues.apache.org/jira/browse/FLUME-2319
>             Project: Flume
>          Issue Type: Documentation
>          Components: Docs
>    Affects Versions: v1.4.0
>            Reporter: Satoshi Iijima
>            Priority: Trivial
>         Attachments: FLUME-2319.patch
>
>
> The property name 'pollInterval' is incorrect in the Ganglia Reporting documentation.
> 'pollFrequency' is correct.
[jira] [Created] (FLUME-2319) Incorrect property name of Ganglia Reporting documentation
Satoshi Iijima created FLUME-2319:
----------------------------------

             Summary: Incorrect property name of Ganglia Reporting documentation
                 Key: FLUME-2319
                 URL: https://issues.apache.org/jira/browse/FLUME-2319
             Project: Flume
          Issue Type: Documentation
          Components: Docs
    Affects Versions: v1.4.0
            Reporter: Satoshi Iijima
            Priority: Trivial

The property name 'pollInterval' is incorrect in the Ganglia Reporting documentation. 'pollFrequency' is correct.
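[Editor's note] For context, the Ganglia properties are passed as Java system properties at agent startup; the polling property is pollFrequency, as the issue says. A sketch of a startup line; the agent name, config paths, and gmond host/port are examples:

```shell
# Ganglia reporting via monitoring system properties; the polling
# property is pollFrequency (seconds), not pollInterval.
flume-ng agent -n agent1 -c conf -f conf/flume.conf \
  -Dflume.monitoring.type=ganglia \
  -Dflume.monitoring.hosts=gmondhost:8649 \
  -Dflume.monitoring.pollFrequency=60
```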
[jira] [Updated] (FLUME-2307) Remove Log writetimeout
[ https://issues.apache.org/jira/browse/FLUME-2307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Shreedharan updated FLUME-2307: Attachment: FLUME-2307-1.patch On second thought, the checkpoint write timeout can also be removed as the thread is scheduled with fixed delay, so once the checkpoint is written the next checkpoint is not attempted for at least checkpoint interval. Also updated the docs removing those two parameters. > Remove Log writetimeout > --- > > Key: FLUME-2307 > URL: https://issues.apache.org/jira/browse/FLUME-2307 > Project: Flume > Issue Type: Bug > Components: Channel >Affects Versions: v1.4.0 >Reporter: Steve Zesch >Assignee: Hari Shreedharan > Attachments: FLUME-2307-1.patch, FLUME-2307.patch > > > I've observed Flume failing to clean up old log data in FileChannels. The > amount of old log data can range anywhere from tens to hundreds of GB. I was > able to confirm that the channels were in fact empty. This behavior always > occurs after lock timeouts when attempting to put, take, rollback, or commit > to a FileChannel. Once the timeout occurs, Flume stops cleaning up the old > files. I was able to confirm that the Log's writeCheckpoint method was still > being called and successfully obtaining a lock from tryLockExclusive(), but I > was not able to confirm removeOldLogs being called. The application log did > not include "Removing old file: log-xyz" for the old files which the Log > class would output if they were correctly being removed. I suspect the lock > timeouts were due to high I/O load at the time. > Some stack traces: > org.apache.flume.ChannelException: Failed to obtain lock for writing to the > log. Try increasing the log write timeout value. 
[channel=fileChannel] > at > org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doPut(FileChannel.java:478) > at > org.apache.flume.channel.BasicTransactionSemantics.put(BasicTransactionSemantics.java:93) > at > org.apache.flume.channel.BasicChannelSemantics.put(BasicChannelSemantics.java:80) > at > org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:189) > org.apache.flume.ChannelException: Failed to obtain lock for writing to the > log. Try increasing the log write timeout value. [channel=fileChannel] > at > org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doCommit(FileChannel.java:594) > at > org.apache.flume.channel.BasicTransactionSemantics.commit(BasicTransactionSemantics.java:151) > at > dataxu.flume.plugins.avro.AsyncAvroSink.process(AsyncAvroSink.java:548) > at > dataxu.flume.plugins.ClassLoaderFlumeSink.process(ClassLoaderFlumeSink.java:33) > at > org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68) > at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147) > at java.lang.Thread.run(Thread.java:619) > org.apache.flume.ChannelException: Failed to obtain lock for writing to the > log. Try increasing the log write timeout value. 
[channel=fileChannel] > at > org.apache.flume.channel.file.FileChannel$FileBackedTransaction.doRollback(FileChannel.java:621) > at > org.apache.flume.channel.BasicTransactionSemantics.rollback(BasicTransactionSemantics.java:168) > at > org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:194) > at > dataxu.flume.plugins.avro.AvroSource.appendBatch(AvroSource.java:209) > at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) > at java.lang.reflect.Method.invoke(Method.java:597) > at > org.apache.avro.ipc.specific.SpecificResponder.respond(SpecificResponder.java:91) > at org.apache.avro.ipc.Responder.respond(Responder.java:151) > at > org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.messageReceived(NettyServer.java:188) > at > org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75) > at > org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:173) > at > org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) > at > org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:792) > at > org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296) > at > org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:321) > at > org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:
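The scheduling rationale above — a fixed-delay schedule cannot start a new checkpoint until the previous one has finished, so a checkpoint write timeout buys nothing — can be sketched with java.util.concurrent. This is a standalone illustration, not Flume's actual Log code:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class FixedDelayDemo {

    /**
     * Runs a task three times via scheduleWithFixedDelay and reports the
     * maximum number of runs ever active at once. The delay is measured
     * from the END of one run to the START of the next, so the result is
     * always 1: runs never overlap, even when the task (standing in for a
     * checkpoint write) takes longer than the scheduling delay.
     */
    static int maxOverlap() throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);
        AtomicInteger active = new AtomicInteger();
        AtomicInteger maxActive = new AtomicInteger();
        CountDownLatch threeRuns = new CountDownLatch(3);
        scheduler.scheduleWithFixedDelay(() -> {
            maxActive.accumulateAndGet(active.incrementAndGet(), Math::max);
            try {
                Thread.sleep(50); // task outlasts the 10 ms delay on purpose
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            active.decrementAndGet();
            threeRuns.countDown();
        }, 0, 10, TimeUnit.MILLISECONDS);
        threeRuns.await();
        scheduler.shutdownNow();
        return maxActive.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("max concurrent runs: " + maxOverlap());
    }
}
```

Even with two threads in the pool, fixed-delay scheduling serializes the runs — which is why the next checkpoint is not attempted until at least the checkpoint interval after the previous one completes.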
Re: Review Request 17915: FLUME-2307. Remove Log writeTimeout
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/17915/ --- (Updated Feb. 10, 2014, 8 p.m.) Review request for Flume. Changes --- Removed the checkpoint write timeout too. Removed the configuration parameters and their documentation. Bugs: FLUME-2307 https://issues.apache.org/jira/browse/FLUME-2307 Repository: flume-git Description --- Remove Log writeTimeout. Diffs (updated) - flume-ng-channels/flume-file-channel/src/main/java/org/apache/flume/channel/file/FileChannel.java 2cd7f03 flume-ng-channels/flume-file-channel/src/main/java/org/apache/flume/channel/file/FileChannelConfiguration.java 10ca11f flume-ng-channels/flume-file-channel/src/main/java/org/apache/flume/channel/file/Log.java 70106cb Diff: https://reviews.apache.org/r/17915/diff/ Testing --- All current tests pass Thanks, Hari Shreedharan
[jira] [Updated] (FLUME-2307) Remove Log writetimeout
[ https://issues.apache.org/jira/browse/FLUME-2307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Shreedharan updated FLUME-2307: Attachment: FLUME-2307.patch Removing Log write timeout. Leaving checkpoint write timeout as is - since the checkpoint thread will just try again later. I don't really see much value in removing that one.
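For context, the pattern being removed is a timed tryLock on the channel's shared lock: when the timeout expired under heavy I/O, the put/take/commit/rollback failed with the ChannelException shown in the traces above. A minimal before/after sketch — the class and method names here are illustrative, not Flume's actual FileChannel code:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class WriteLockSketch {
    // Transactions take the read side; checkpoints take the write side.
    private final ReentrantReadWriteLock checkpointLock = new ReentrantReadWriteLock(true);

    /**
     * Before: a timed tryLock. If a checkpoint holds the write lock longer
     * than the timeout (e.g. under high I/O load), this returns false and
     * the caller throws ChannelException ("Failed to obtain lock for
     * writing to the log...").
     */
    boolean writeWithTimeout(long timeoutSeconds) throws InterruptedException {
        if (!checkpointLock.readLock().tryLock(timeoutSeconds, TimeUnit.SECONDS)) {
            return false;
        }
        try {
            return true; // append the transaction's events to the log here
        } finally {
            checkpointLock.readLock().unlock();
        }
    }

    /**
     * After: block until the lock is free. Slow I/O now delays the write
     * instead of failing the transaction partway through.
     */
    boolean writeBlocking() {
        checkpointLock.readLock().lock();
        try {
            return true; // append the transaction's events to the log here
        } finally {
            checkpointLock.readLock().unlock();
        }
    }
}
```

The design trade-off is that back-pressure replaces failure: a transaction waits out a long checkpoint rather than aborting in a state that, per the original report, could leave old log files uncollected.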
[jira] [Updated] (FLUME-2307) Remove Log writetimeout
[ https://issues.apache.org/jira/browse/FLUME-2307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hari Shreedharan updated FLUME-2307: Assignee: Hari Shreedharan Summary: Remove Log writetimeout (was: Old log data is not cleaned up)
Review Request 17915: FLUME-2307. Remove Log writeTimeout
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/17915/ --- Review request for Flume. Bugs: FLUME-2307 https://issues.apache.org/jira/browse/FLUME-2307 Repository: flume-git Description --- Remove Log writeTimeout. Diffs - flume-ng-channels/flume-file-channel/src/main/java/org/apache/flume/channel/file/FileChannel.java 2cd7f03 flume-ng-channels/flume-file-channel/src/main/java/org/apache/flume/channel/file/Log.java 70106cb Diff: https://reviews.apache.org/r/17915/diff/ Testing --- All current tests pass Thanks, Hari Shreedharan
[jira] [Commented] (FLUME-2307) Old log data is not cleaned up
[ https://issues.apache.org/jira/browse/FLUME-2307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13896792#comment-13896792 ] Hari Shreedharan commented on FLUME-2307: - I will submit a patch later today removing the writeTimeout.