[
https://issues.apache.org/jira/browse/FLUME-3137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16107061#comment-16107061
]
Snehal Waghmare commented on FLUME-3137:
----------------------------------------
As a temporary workaround, we are continuously moving files out of the processed
folder so as to prevent bulk pushing of the data.
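For reference, the cleanup amounts to periodically running something like the
following against files Flume has already finished reading (the archive
destination and file name here are hypothetical):
mv /path-to-logs/processed_logs/finished-file.log /path-to-archive/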
We have also updated the JAVA_OPTS configuration with the following settings:
"-Xms2000m -Xmx16000m -Xss128k -XX:MaxDirectMemorySize=256m -XX:+UseParNewGC
-XX:+UseConcMarkSweepGC -XX:-UseGCOverheadLimit"
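For completeness, such options are typically applied through conf/flume-env.sh,
e.g.:
export JAVA_OPTS="-Xms2000m -Xmx16000m -Xss128k -XX:MaxDirectMemorySize=256m
-XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:-UseGCOverheadLimit"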
Kindly help me solve the above issue.
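If it is relevant, one further tuning we could try (the values below are purely
illustrative) is shrinking the memory channel and the sink batch so that less
data is buffered in the heap at once:
tier2.channels.channel.capacity = 1000
tier2.channels.channel.transactionCapacity = 1000
tier2.sinks.sink1.kafka.flumeBatchSize = 100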
> GC overhead limit exceeded in KafkaSink
> ---------------------------------------
>
> Key: FLUME-3137
> URL: https://issues.apache.org/jira/browse/FLUME-3137
> Project: Flume
> Issue Type: Bug
> Reporter: Snehal Waghmare
>
> My configuration file:
> tier2.sources = source1
> tier2.channels = channel
> tier2.sinks = sink1
> tier2.sources.source1.type = tailDir
> tier2.sources.source1.channels = channel
> tier2.sources.source1.filegroups = f1
> tier2.sources.source1.filegroups.f1 = /path-to-logs/processed_logs/.*.log*
> tier2.sources.source1.positionFile = /path-to-file/apache-flume-1.7.0bin/.flume/taildir_position.json
> tier2.channels.channel.type = memory
> tier2.channels.channel.capacity = 10000
> tier2.channels.channel.transactionCapacity = 10000
> tier2.channels.channel.use-fast-replay = true
> tier2.sinks.sink1.channel = channel
> tier2.sinks.sink1.type = org.apache.flume.sink.kafka.KafkaSink
> tier2.sinks.sink1.kafka.topic = topicname
> tier2.sinks.sink1.kafka.bootstrap.servers=kafkaserverIP:9092
> tier2.sinks.sink1.kafka.flumeBatchSize = 1000
> A number of log files are located in the folder "/path-to-logs/processed_logs/".
> When the Flume agent is started, it sends a few of the files, but then fails
> with the following error:
> [ERROR - org.apache.flume.source.taildir.TaildirSource.writePosition(TaildirSource.java:334)] Failed writing positionFile
> java.lang.OutOfMemoryError: GC overhead limit exceeded
> at com.google.common.collect.ImmutableMap.of(ImmutableMap.java:95)
> at org.apache.flume.source.taildir.TaildirSource.toPosInfoJson(TaildirSource.java:349)
> at org.apache.flume.source.taildir.TaildirSource.writePosition(TaildirSource.java:330)
> at org.apache.flume.source.taildir.TaildirSource.access$600(TaildirSource.java:59)
> at org.apache.flume.source.taildir.TaildirSource$PositionWriterRunnable.run(TaildirSource.java:320)
> at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
> at java.util.concurrent.FutureTask.runAndReset(Unknown Source)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(Unknown Source)
> at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(Unknown Source)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> at java.lang.Thread.run(Unknown Source)
> [ERROR - org.apache.flume.source.taildir.TaildirSource.process(TaildirSource.java:236)] Unable to tail files
> java.lang.OutOfMemoryError: GC overhead limit exceeded
> Finally, it is unable to tail the remaining files, which blocks any further
> flow of data.