It would be very helpful if I could get a sample custom interceptor, or some
steps to follow.
-Prajakta
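For reference, a minimal sketch of a custom interceptor; the class name, header key, and defaults below are illustrative placeholders, not from this thread. The pattern is to implement org.apache.flume.interceptor.Interceptor plus a nested Builder, and point the .type property at the fully qualified Builder name:

import java.util.List;
import java.util.Map;

import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.interceptor.Interceptor;

// Illustrative interceptor that stamps a fixed header onto every event.
public class StaticHeaderInterceptor implements Interceptor {
  private final String key;
  private final String value;

  private StaticHeaderInterceptor(String key, String value) {
    this.key = key;
    this.value = value;
  }

  @Override
  public void initialize() {
    // nothing to set up
  }

  @Override
  public Event intercept(Event event) {
    // add (or overwrite) the configured header on a single event
    Map<String, String> headers = event.getHeaders();
    headers.put(key, value);
    return event;
  }

  @Override
  public List<Event> intercept(List<Event> events) {
    for (Event event : events) {
      intercept(event);
    }
    return events;
  }

  @Override
  public void close() {
    // nothing to tear down
  }

  // Flume instantiates interceptors through this nested Builder;
  // configure() receives the .interceptors.<name>.* properties.
  public static class Builder implements Interceptor.Builder {
    private String key;
    private String value;

    @Override
    public void configure(Context context) {
      key = context.getString("key", "static.key");
      value = context.getString("value", "static.value");
    }

    @Override
    public Interceptor build() {
      return new StaticHeaderInterceptor(key, value);
    }
  }
}

Wired up in the agent config (agent and source names are placeholders), with the
jar containing the class on Flume's classpath:

agent1.sources.avro-source1.interceptors = a
agent1.sources.avro-source1.interceptors.a.type = StaticHeaderInterceptor$Builder
agent1.sources.avro-source1.interceptors.a.key = datacenter
agent1.sources.avro-source1.interceptors.a.value = dc1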
I want to see logs in the log file.
I ran this command:
bin/flume-ng agent -c conf -f conf/flume.conf -Dflume.root.logger=DEBUG,LOGGER
-n agent1
and these settings are in the log4j.properties file:
log4j.appender.LOGGER=org.apache.log4j.RollingFileAppender
log4j.appender.LOGGER.MaxFileSize=100MB
log4j.appender.
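Since the appender definition is cut off above, for reference a complete
RollingFileAppender section might look like the following; the file path,
backup count, and pattern here are assumptions. Note that
-Dflume.root.logger=DEBUG,LOGGER overrides the rootLogger setting on the
command line, so the LOGGER appender only needs to be defined in the file:

log4j.appender.LOGGER=org.apache.log4j.RollingFileAppender
log4j.appender.LOGGER.File=./logs/flume.log
log4j.appender.LOGGER.MaxFileSize=100MB
log4j.appender.LOGGER.MaxBackupIndex=10
log4j.appender.LOGGER.layout=org.apache.log4j.PatternLayout
log4j.appender.LOGGER.layout.ConversionPattern=%d{ISO8601} %-5p [%t] (%c) %m%n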
Thanks, Chris!
I've changed "agent.sinks.hdfsSink.hdfs.file.Type" to
"agent.sinks.hdfsSink.hdfs.fileType", and the problem is solved.
Thank you for your help!
Hao Jian
2012-09-13
From: Chris Neal
Date: 2012-09-12 22:27
To: user@flume.apache.org; haojian
Subject: Re: data saved in hdfs by flume
Nevermind, it doesn't look like FILE_ROLL supports batching
On Wed, Sep 12, 2012 at 4:56 PM, Brock Noland wrote:
> It looks like you have a batch size of 1000, which could mean the sink is
> waiting for 1000 entries...
>
> node102.sinks.filesink1.batchSize = 1000
It looks like you have a batch size of 1000, which could mean the sink is
waiting for 1000 entries...
node102.sinks.filesink1.batchSize = 1000
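For a quick test, one option is to force a flush on every event by dropping the
batch size, though as noted above, FILE_ROLL in the versions discussed here
does not appear to honor batching at all:

node102.sinks.filesink1.batchSize = 1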
On Wed, Sep 12, 2012 at 3:12 PM, Cochran, David M (Contractor)
wrote:
Putting a copy of hadoop-core.jar in the lib directory did the trick.. at least
it made the errors go away..
Just trying to sort out why nothing is getting written to the sink's files
now... when I add entries to the file being tailed, nothing makes it to the
sink log file(s). Guess I need t
Yeah that is my fault. FileChannel uses a few hadoop classes for
serialization. I want to get rid of that but it's just not a priority
item. You either need the hadoop command in your path or the
hadoop-core.jar in your lib directory.
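Concretely, the two fixes look like this; the paths are assumptions and depend
on the install:

# Option 1: put the hadoop command on Flume's PATH
export PATH=$PATH:/usr/lib/hadoop/bin

# Option 2: copy the jar into Flume's lib directory
cp /usr/lib/hadoop/hadoop-core.jar /opt/flume/lib/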
On Wed, Sep 12, 2012 at 1:38 PM, Cochran, David M (Contractor) wrote:
Brock,
Thanks for the sample! Starting to see a bit more light and making a little
more sense now...
If you wouldn't mind and have a couple of minutes to spare... I'm getting this
error and I'm not sure how to make it go away. I cannot use hadoop for storage,
just FILE_ROLL instead (ultimately the logs
Thanks Brock!
-----Original Message-----
From: Brock Noland [mailto:br...@cloudera.com]
Sent: Wednesday, September 12, 2012 8:54 PM
To: user@flume.apache.org
Subject: Re: how to use interceptor
Try the following and if you have an error, please post the entire
message including the stacktrace.
agent1.sources.avro-source1.interceptors = a b
agent1.sources.avro-source1.interceptors.a.type = HOST
agent1.sources.avro-source1.interceptors.a.hostHeader = hostname
agent1.sources.avro-source1.i
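Since the config is cut off above, a complete two-interceptor chain might look
like the following; the TIMESTAMP type for interceptor b is an assumption, as
the original message does not show it:

agent1.sources.avro-source1.interceptors = a b
agent1.sources.avro-source1.interceptors.a.type = HOST
agent1.sources.avro-source1.interceptors.a.hostHeader = hostname
agent1.sources.avro-source1.interceptors.b.type = TIMESTAMP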
I am a new user of flume.
Please let me know about this.
I have included the following lines, but I am getting an error: not able to
initialize HostInterceptor$Builder
agent1.sources.avro-source1.interceptors = a b
agent1.sources.avro-source1.interceptors.a.type =
org.apache.flume.interceptor.HostInterceptor$Builder
Hi,
Below is a config I use to test out the FileChannel. See the comments
"##" for how messages are sent from one host to another.
node105.sources = stressSource
node105.channels = fileChannel
node105.sinks = avroSink
node105.sources.stressSource.type = org.apache.flume.source.StressSource
node1
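The config is cut off above; a single-hop sketch of the same idea, with ports
and directories as assumptions, might look like this:

node105.sources = stressSource
node105.channels = fileChannel
node105.sinks = avroSink

## StressSource generates synthetic events for load testing
node105.sources.stressSource.type = org.apache.flume.source.StressSource
node105.sources.stressSource.size = 500
node105.sources.stressSource.maxTotalEvents = 10000
node105.sources.stressSource.channels = fileChannel

## FileChannel persists events to disk between source and sink
node105.channels.fileChannel.type = file
node105.channels.fileChannel.checkpointDir = /tmp/flume/checkpoint
node105.channels.fileChannel.dataDirs = /tmp/flume/data

## avroSink ships events to the next hop over Avro RPC
node105.sinks.avroSink.type = avro
node105.sinks.avroSink.hostname = node102
node105.sinks.avroSink.port = 4545
node105.sinks.avroSink.channel = fileChannel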
Okay folks, after spending the better part of a week reading the docs and
experimenting I'm lost. I have flume 1.3.x working pretty much as expected on
a single host. It tails a log file and writes it to another rolling log file
via flume. No problem there, seems to work flawlessly. Where my
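A minimal sketch of that working single-host setup (names and paths are
assumptions, not the poster's actual config):

agent.sources = tailSrc
agent.channels = memCh
agent.sinks = rollSink

## exec source tails the input log
agent.sources.tailSrc.type = exec
agent.sources.tailSrc.command = tail -F /var/log/app.log
agent.sources.tailSrc.channels = memCh

agent.channels.memCh.type = memory
agent.channels.memCh.capacity = 1000

## file_roll writes events to rolling files in a local directory
agent.sinks.rollSink.type = file_roll
agent.sinks.rollSink.sink.directory = /var/log/flume-out
agent.sinks.rollSink.sink.rollInterval = 300
agent.sinks.rollSink.channel = memCh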
Hi,
The log4j appender doesn't pay any attention to the PatternLayout, at least
up through 1.3.0-SNAPSHOT. That's why you only see %m and nothing else.
As for the garbled text, this property:
agent.sinks.hdfsSink.hdfs.file.Type
should be this:
agent.sinks.hdfsSink.hdfs.fileType
Since
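For context: the HDFS sink defaults to SequenceFile output, which looks garbled
when read as plain text; setting fileType to DataStream writes the raw event
bodies instead. A sketch, with the path as an assumption:

agent.sinks.hdfsSink.type = hdfs
agent.sinks.hdfsSink.hdfs.path = hdfs://namenode/flume/events
agent.sinks.hdfsSink.hdfs.fileType = DataStream
agent.sinks.hdfsSink.hdfs.writeFormat = Text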
Hi,
I'm using flume ng (1.2.0) to collect logs from log4j and save them to hdfs.
There are two problems:
(1) Flume only collects %m from log4j, but not %d, %p, %t ...
(2) The log saved in hdfs is garbled, not plain text.
My log4j configuration is as follows:
The console print is:
20
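For reference, a typical log4j setup for shipping events to Flume uses the
Log4jAppender from the flume-ng-log4jappender artifact; a sketch, with hostname
and port as assumptions (not the poster's actual config):

log4j.rootLogger=INFO, flume
log4j.appender.flume=org.apache.flume.clients.log4jappender.Log4jAppender
log4j.appender.flume.Hostname=localhost
log4j.appender.flume.Port=41414
log4j.appender.flume.layout=org.apache.log4j.PatternLayout
log4j.appender.flume.layout.ConversionPattern=%d %p %t: %m%n

As noted above, the appender ignores the layout at least up through
1.3.0-SNAPSHOT, so only %m reaches Flume.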