Hi, below is a config I use to test out the FileChannel. See the "##" comments for how messages are sent from one host to another.
## Agent on node105: StressSource -> FileChannel -> AvroSink
node105.sources = stressSource
node105.channels = fileChannel
node105.sinks = avroSink

node105.sources.stressSource.type = org.apache.flume.source.StressSource
node105.sources.stressSource.batchSize = 1000
node105.sources.stressSource.channels = fileChannel

## Sink sends avro messages to node102.bashkew.com port 9432
node105.sinks.avroSink.type = avro
node105.sinks.avroSink.batch-size = 1000
node105.sinks.avroSink.channel = fileChannel
node105.sinks.avroSink.hostname = node102.bashkew.com
node105.sinks.avroSink.port = 9432

node105.channels.fileChannel.type = file
node105.channels.fileChannel.checkpointDir = /tmp/flume/checkpoints
node105.channels.fileChannel.dataDirs = /tmp/flume/data1,/tmp/flume/data2,/tmp/flume/data3
node105.channels.fileChannel.capacity = 10000
node105.channels.fileChannel.checkpointInterval = 3000
node105.channels.fileChannel.maxFileSize = 5242880

## Agent on node102: AvroSource -> FileChannel -> NullSink
node102.sources = avroSource
node102.channels = fileChannel
node102.sinks = nullSink

## Source listens for avro messages on port 9432 on all ips
node102.sources.avroSource.type = avro
node102.sources.avroSource.channels = fileChannel
node102.sources.avroSource.bind = 0.0.0.0
node102.sources.avroSource.port = 9432

node102.sinks.nullSink.type = null
node102.sinks.nullSink.batchSize = 1000
node102.sinks.nullSink.channel = fileChannel

node102.channels.fileChannel.type = file
node102.channels.fileChannel.checkpointDir = /tmp/flume/checkpoints
node102.channels.fileChannel.dataDirs = /tmp/flume/data1,/tmp/flume/data2,/tmp/flume/data3
node102.channels.fileChannel.capacity = 5000
node102.channels.fileChannel.checkpointInterval = 45000
node102.channels.fileChannel.maxFileSize = 5242880

On Wed, Sep 12, 2012 at 10:06 AM, Cochran, David M (Contractor) <[email protected]> wrote:
> Okay folks, after spending the better part of a week reading the docs and
> experimenting I'm lost. I have flume 1.3.x working pretty much as expected
> on a single host.
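To try the two agent definitions above, each one would be saved to its own properties file and started with the standard flume-ng launcher. The file names and the conf directory path below are illustrative, not from the original post; the --name argument must match the agent-name prefix used in the corresponding config (node102 or node105).

```shell
# On node102, start the receiving agent first so port 9432 is listening
# (config saved as node102.conf; adjust --conf to your Flume conf dir)
flume-ng agent --conf conf --conf-file node102.conf --name node102 \
  -Dflume.root.logger=INFO,console

# On node105, start the sending agent
flume-ng agent --conf conf --conf-file node105.conf --name node105 \
  -Dflume.root.logger=INFO,console

# Sanity check on node102: the Avro source should be bound on 9432
netstat -tln | grep 9432
```

Starting the receiver before the sender avoids a burst of connection-refused errors from the Avro sink while it retries.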
> It tails a log file and writes it to another rolling log file via flume.
> No problem there, seems to work flawlessly. Where my issue is trying to
> break apart the functions across multiple hosts... a single host listening
> for others to send their logs to. All of my efforts have resulted in
> little more than headaches.
>
> I can't even see the specified port open on what should be the logging host.
> I've tried the basic examples posted on different docs but can't seem to
> get things working across multiple hosts.
>
> Would someone post a working example of the confs needed to get me started?
> Something simple that works, so I can then pick it apart to gain more
> understanding. Apparently, I just don't have a firm enough grasp on all
> the pieces yet, but I want to learn!
>
> Thanks in advance!
> Dave
>

--
Apache MRUnit - Unit testing MapReduce - http://incubator.apache.org/mrunit/
