Hi David,
Thanks for the example. I have set it up just like above, but it only generates
files for the first 15 minutes. After waiting for more than an hour, there are no
updates at all in the S3 bucket.
Thanks.
Martinus
On Wed, Oct 23, 2013 at 8:48 PM, David Sinclair <
dsincl...@chariotsolutions.com> wrote:
You want to use a 'selector' not an 'interceptor'. What your config is
currently doing is stamping every event with a header 'category' containing the
value 'dataset4'.
Remove the interceptor stuff and try adding these:
#add a new channel and sink for 'dropped' data
agent1.sinks = sink1 sink2
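A multiplexing channel selector along the lines David describes might look like the following. This is a hedged sketch: the agent, source, and channel names (agent1, scribeSrc, channel1, channel2) are assumptions for illustration, not from the thread.

```properties
# Hypothetical sketch of a multiplexing selector on the scribe source.
# The scribe source sets a 'category' header on each event.
agent1.sources.scribeSrc.selector.type = multiplexing
agent1.sources.scribeSrc.selector.header = category
# events whose 'category' header is 'dataset4' go to channel2 (and its sink)
agent1.sources.scribeSrc.selector.mapping.dataset4 = channel2
# everything else is routed to channel1
agent1.sources.scribeSrc.selector.default = channel1
```

The key difference from an interceptor: a selector routes events between channels based on an existing header, while an interceptor modifies events (here, it was stamping every event with the same header value).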
Here is my config file below. It simply ends up dumping all the input from
the scribe source into the files in the sink directory. I have 5 different
scribe categories coming into the scribe source. In this config file I'm
attempting to grab only the incoming data that has the scribe category 'dataset4'.
Why don't you share the config you have so far? Perhaps somebody here can
comment on it.
On Wed, Oct 23, 2013 at 7:48 AM, wrote:
> Ok, the place where I am stuck is trying to understand what the flume
> config file looks like to do this. What does the config for the scribe
> source look like?
The sink is dependent on a header with the key "timestamp" being present in
the event for this to work. What source are you using?
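If the source does not set a timestamp header itself, the usual fix is a timestamp interceptor on the source. A minimal sketch, with assumed agent and source names (tier1, httpSrc):

```properties
# Hypothetical: stamp each event with a 'timestamp' header at the source,
# so sinks can expand date escapes like %d%m%Y in their paths.
tier1.sources.httpSrc.interceptors = ts
tier1.sources.httpSrc.interceptors.ts.type = timestamp
```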
On Wed, Oct 23, 2013 at 11:25 AM, Deepak Subhramanian <
deepak.subhraman...@gmail.com> wrote:
> Hi,
>
> I am trying to store my logs in folders named with the date for
Hi,
I am trying to store my logs in folders named with the date for my file_roll
and HDFS sinks. For some reason, when I pass %d%m%Y in the sink directory it
is not working. Any thoughts?
My Flume source is a simple HTTP Handler extended from HTTPSourceHandler
tier1.sinks.filesink1.type = file_roll
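For reference: the HDFS sink expands date escapes in hdfs.path when a "timestamp" header is present on the event (or when useLocalTimeStamp is enabled), whereas the file_roll sink's directory does not expand escape sequences in Flume 1.x, which may be why %d%m%Y shows up literally. A sketch with assumed sink name and path:

```properties
# Hypothetical HDFS sink writing into per-day folders.
# 'hdfssink1' and the path are illustrative, not from the thread.
tier1.sinks.hdfssink1.type = hdfs
tier1.sinks.hdfssink1.hdfs.path = hdfs://namenode:8020/logs/%d%m%Y
# fall back to the agent's local clock if events carry no 'timestamp' header
tier1.sinks.hdfssink1.hdfs.useLocalTimeStamp = true
```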
Thanks Roshan. I increased the heapsize and it worked fine.
On Fri, Oct 18, 2013 at 11:55 PM, Roshan Naik wrote:
> The error indicates that the source is pumping in data faster than the sink is
> draining the memory channel, causing the channel to fill up. It does not
> appear to be a MemoryChannel issue.
>
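Once the heap is larger, the memory channel itself can also be given more headroom so it absorbs bursts while the sink catches up. A hedged sketch; the names and capacity figures below are examples only, not from the thread:

```properties
# Hypothetical memory channel sized for a bursty source.
# Each event held in the channel consumes heap, so capacity must fit the heap size.
agent.channels.memCh.type = memory
agent.channels.memCh.capacity = 100000
agent.channels.memCh.transactionCapacity = 1000
```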
Ok, the place where I am stuck is trying to understand what the flume config
file looks like to do this. What does the config for the scribe source look
like? I have used the config lines for a scribe source that I found in the
flume docs, but I'm not seeing the scribe source split up any data.
Hi,
I am trying to get events from Log4j 1.x into HDFS through Flume using
the Log4j Flume appender. I created two appenders, FILE and flume. It works
for the FILE appender, but with the flume appender the program just hangs
in Eclipse. Flume works properly; I am able to send messages to the Avro source.
You can set all of the time/size-based rolling policies to zero and set an
idle timeout on the sink. Below has a 15-minute timeout:
agent.sinks.sink.hdfs.fileSuffix = FlumeData.%Y-%m-%d
agent.sinks.sink.hdfs.fileType = DataStream
agent.sinks.sink.hdfs.rollInterval = 0
agent.sinks.sink.hdfs.rollSize = 0
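The quoted config was cut off before the idle-timeout line it refers to. Assuming the standard HDFS sink properties, the remaining lines would look something like this (hdfs.idleTimeout is in seconds, so 15 minutes is 900):

```properties
# Disable count-based rolling as well, then roll only on idleness.
agent.sinks.sink.hdfs.rollCount = 0
# close a file after 900 s (15 min) with no writes
agent.sinks.sink.hdfs.idleTimeout = 900
```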