[
https://issues.apache.org/jira/browse/FLUME-1379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
jack li resolved FLUME-1379.
----------------------------
Resolution: Fixed
> Avro source, HDFS sink, no events written to HDFS
> --------------------------------------------------
>
> Key: FLUME-1379
> URL: https://issues.apache.org/jira/browse/FLUME-1379
> Project: Flume
> Issue Type: Question
> Components: Sinks+Sources
> Affects Versions: v1.1.0, v1.3.0
> Environment: rhel5.7, jdk1.6, yum-installed flume-ng
> 1.1.0+120-1.cdh4.0.0, mvn install 1.3.
> Reporter: jack li
> Labels: flume
> Original Estimate: 504h
> Remaining Estimate: 504h
>
> Hello everyone!
> 1. When the source's type is exec and the sink's type is hdfs, events are written to
> HDFS. But when I change the source's type to avro, no events are written to HDFS.
> 2. flume.conf:
> #List sources, sinks and channels in the agent
> weblog-agent.sources = s1
> weblog-agent.sinks = avro-forward-sink01
> weblog-agent.channels = jdbc-channel01
> #define the flow
> #webblog-agent sources config
> weblog-agent.sources.s1.channels = jdbc-channel01
> weblog-agent.sources.s1.type = avro
> weblog-agent.sources.s1.port = 41414
> weblog-agent.sources.s1.bind = 0.0.0.0
> #avro sink properties
> weblog-agent.sinks.avro-forward-sink01.channel = jdbc-channel01
> weblog-agent.sinks.avro-forward-sink01.type = hdfs
> weblog-agent.sinks.avro-forward-sink01.hdfs.path = hdfs://hadoop-master.:9000/user/flume/webtest
> #weblog-agent.sinks.avro-forward-sink01.hdfs.filePrefix = nginx_%p
> #weblog-agent.sinks.avro-forward-sink01.hdfs.filePrefix = nginx_web001
> weblog-agent.sinks.avro-forward-sink01.hdfs.kerberosPrincipal = [email protected]
> weblog-agent.sinks.avro-forward-sink01.hdfs.kerberosKeytab = /var/run/flume-ng/flume.keytab
> weblog-agent.sinks.avro-forward-sink01.hdfs.maxOpenFiles = 65535
> weblog-agent.sinks.avro-forward-sink01.hdfs.rollSize = 0
> weblog-agent.sinks.avro-forward-sink01.hdfs.rollCount = 0
> weblog-agent.sinks.avro-forward-sink01.hdfs.batchSize = 30000
> weblog-agent.sinks.avro-forward-sink01.hdfs.txnEventMax = 20000
> weblog-agent.sinks.avro-forward-sink01.hdfs.fileType = DataStream
> #channels config
> weblog-agent.channels.jdbc-channel01.type = memory
> weblog-agent.channels.jdbc-channel01.capacity = 200000
> weblog-agent.channels.jdbc-channel01.transactionCapacity = 20000
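> Note that unlike an exec source, an avro source produces nothing on its own: it only
> receives events that a client delivers to its bind address/port over Avro RPC (for
> example the flume-ng avro-client tool, or an avro sink on an upstream agent). The log
> below shows the source listening on 0.0.0.0:41414 with no client activity, which is
> consistent with no events reaching HDFS. A minimal sketch of driving this source with
> the bundled avro-client (the file path is hypothetical, and the final command assumes
> a local Flume installation, so it is shown commented out):

```shell
# Create a sample event file to send (hypothetical path, one event per line)
echo "sample web log line" > /tmp/flume-test-events.txt

# Send the file's lines as Avro events to the running source on port 41414
# (requires flume-ng on PATH; shown for illustration)
# flume-ng avro-client -H localhost -p 41414 -F /tmp/flume-test-events.txt
```

> If events still do not appear after a client sends them, check the trailing dot in
> the hdfs.path hostname (hdfs://hadoop-master.:9000/...) against the NameNode address.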
> 3. flume.log:
> 2012-07-18 23:53:31,245 INFO lifecycle.LifecycleSupervisor: Starting
> lifecycle supervisor 1
> 2012-07-18 23:53:31,247 INFO node.FlumeNode: Flume node starting -
> weblog-agent
> 2012-07-18 23:53:31,251 INFO nodemanager.DefaultLogicalNodeManager: Node
> manager starting
> 2012-07-18 23:53:31,252 INFO lifecycle.LifecycleSupervisor: Starting
> lifecycle supervisor 10
> 2012-07-18 23:53:31,252 INFO properties.PropertiesFileConfigurationProvider:
> Configuration provider starting
> 2012-07-18 23:53:31,252 DEBUG nodemanager.DefaultLogicalNodeManager: Node
> manager started
> 2012-07-18 23:53:31,254 DEBUG properties.PropertiesFileConfigurationProvider:
> Configuration provider started
> 2012-07-18 23:53:31,254 DEBUG properties.PropertiesFileConfigurationProvider:
> Checking file:flume.conf for changes
> 2012-07-18 23:53:31,254 INFO properties.PropertiesFileConfigurationProvider:
> Reloading configuration file:flume.conf
> 2012-07-18 23:53:31,261 INFO conf.FlumeConfiguration:
> Processing:avro-forward-sink01
> 2012-07-18 23:53:31,261 DEBUG conf.FlumeConfiguration: Created context for
> avro-forward-sink01: hdfs.fileType
> 2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration:
> Processing:avro-forward-sink01
> 2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration:
> Processing:avro-forward-sink01
> 2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration:
> Processing:avro-forward-sink01
> 2012-07-18 23:53:31,262 INFO conf.FlumeConfiguration:
> Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration:
> Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration:
> Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration: Added sinks:
> avro-forward-sink01 Agent: weblog-agent
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration:
> Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration:
> Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration:
> Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 INFO conf.FlumeConfiguration:
> Processing:avro-forward-sink01
> 2012-07-18 23:53:31,263 DEBUG conf.FlumeConfiguration: Starting validation of
> configuration for agent: weblog-agent, initial-configuration:
> AgentConfiguration[weblog-agent]
> SOURCES: {s1={ parameters:{port=41414, channels=jdbc-channel01, type=avro,
> bind=0.0.0.0} }}
> CHANNELS: {jdbc-channel01={ parameters:{transactionCapacity=20000,
> capacity=200000, type=memory} }}
> SINKS: {avro-forward-sink01={ parameters:{hdfs.fileType=DataStream,
> [email protected], hdfs.txnEventMax=20000,
> hdfs.kerberosKeytab=/var/run/flume-ng/flume.keytab,
> hdfs.path=hdfs://hadoop-master.:9000/user/flume/webtest,
> hdfs.batchSize=30000, hdfs.maxOpenFiles=65535, hdfs.rollSize=0, type=hdfs,
> channel=jdbc-channel01, hdfs.rollCount=0} }}
> 2012-07-18 23:53:31,271 DEBUG conf.FlumeConfiguration: Created channel
> jdbc-channel01
> 2012-07-18 23:53:31,289 DEBUG conf.FlumeConfiguration: Creating sink:
> avro-forward-sink01 using HDFS
> 2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Post validation
> configuration for weblog-agent
> AgentConfiguration created without Configuration stubs for which only basic
> syntactical validation was performed[weblog-agent]
> SOURCES: {s1={ parameters:{port=41414, channels=jdbc-channel01, type=avro,
> bind=0.0.0.0} }}
> CHANNELS: {jdbc-channel01={ parameters:{transactionCapacity=20000,
> capacity=200000, type=memory} }}
> SINKS: {avro-forward-sink01={ parameters:{hdfs.fileType=DataStream,
> [email protected], hdfs.txnEventMax=20000,
> hdfs.kerberosKeytab=/var/run/flume-ng/flume.keytab,
> hdfs.path=hdfs://hadoop-master.:9000/user/flume/webtest,
> hdfs.batchSize=30000, hdfs.maxOpenFiles=65535, hdfs.rollSize=0, type=hdfs,
> channel=jdbc-channel01, hdfs.rollCount=0} }}
> 2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Channels:jdbc-channel01
> 2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Sinks
> avro-forward-sink01
> 2012-07-18 23:53:31,292 DEBUG conf.FlumeConfiguration: Sources s1
> 2012-07-18 23:53:31,292 INFO conf.FlumeConfiguration: Post-validation flume
> configuration contains configuration for agents: [weblog-agent]
> 2012-07-18 23:53:31,292 INFO properties.PropertiesFileConfigurationProvider:
> Creating channels
> 2012-07-18 23:53:31,292 DEBUG channel.DefaultChannelFactory: Creating
> instance of channel jdbc-channel01 type memory
> 2012-07-18 23:53:31,365 INFO instrumentation.MonitoredCounterGroup:
> Monitoried counter group for type: CHANNEL, name: jdbc-channel01, registered
> successfully.
> 2012-07-18 23:53:31,365 INFO properties.PropertiesFileConfigurationProvider:
> created channel jdbc-channel01
> 2012-07-18 23:53:31,365 DEBUG source.DefaultSourceFactory: Creating instance
> of source s1, type avro
> 2012-07-18 23:53:31,380 INFO instrumentation.MonitoredCounterGroup:
> Monitoried counter group for type: SOURCE, name: s1, registered successfully.
> 2012-07-18 23:53:31,395 INFO sink.DefaultSinkFactory: Creating instance of
> sink: avro-forward-sink01, type: hdfs
> 2012-07-18 23:53:31,540 DEBUG security.Groups: Creating new Groups object
> 2012-07-18 23:53:31,625 DEBUG security.Groups: Group mapping
> impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping;
> cacheTimeout=300000
> 2012-07-18 23:53:31,640 INFO hdfs.HDFSEventSink: Hadoop Security enabled: true
> 2012-07-18 23:53:31,789 INFO hdfs.HDFSEventSink: { Sink type:HDFSEventSink,
> name:avro-forward-sink01 }: Attempting kerberos login as principal
> ([email protected]) from keytab file (/var/run/flume-ng/flume.keytab)
> 2012-07-18 23:53:31,999 INFO security.UserGroupInformation: Login successful
> for user [email protected] using keytab file /var/run/flume-ng/flume.keytab
> 2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink: Auth method: KERBEROS
> 2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink: User name: [email protected]
> 2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink: Using keytab: true
> 2012-07-18 23:53:32,000 INFO hdfs.HDFSEventSink: Logged in as user
> [email protected]
> 2012-07-18 23:53:32,002 INFO instrumentation.MonitoredCounterGroup:
> Monitoried counter group for type: SINK, name: avro-forward-sink01,
> registered successfully.
> 2012-07-18 23:53:32,004 INFO nodemanager.DefaultLogicalNodeManager: Starting
> new configuration:{ sourceRunners:{s1=EventDrivenSourceRunner: { source:Avro
> source s1: { bindAddress: 0.0.0.0, port: 41414 } }}
> sinkRunners:{avro-forward-sink01=SinkRunner: {
> policy:org.apache.flume.sink.DefaultSinkProcessor@6c1989b5 counterGroup:{
> name:null counters:{} } }}
> channels:{jdbc-channel01=org.apache.flume.channel.MemoryChannel@a00185} }
> 2012-07-18 23:53:32,004 INFO nodemanager.DefaultLogicalNodeManager: Starting
> Channel jdbc-channel01
> 2012-07-18 23:53:32,005 INFO nodemanager.DefaultLogicalNodeManager: Waiting
> for channel: jdbc-channel01 to start. Sleeping for 500 ms
> 2012-07-18 23:53:32,005 INFO instrumentation.MonitoredCounterGroup: Component
> type: CHANNEL, name: jdbc-channel01 started
> 2012-07-18 23:53:32,507 INFO nodemanager.DefaultLogicalNodeManager: Starting
> Sink avro-forward-sink01
> 2012-07-18 23:53:32,508 INFO nodemanager.DefaultLogicalNodeManager: Starting
> Source s1
> 2012-07-18 23:53:32,508 INFO source.AvroSource: Starting Avro source s1: {
> bindAddress: 0.0.0.0, port: 41414 }...
> 2012-07-18 23:53:32,511 INFO instrumentation.MonitoredCounterGroup: Component
> type: SINK, name: avro-forward-sink01 started
> 2012-07-18 23:53:32,514 DEBUG flume.SinkRunner: Polling sink runner starting
> 2012-07-18 23:53:32,941 INFO instrumentation.MonitoredCounterGroup: Component
> type: SOURCE, name: s1 started
> 2012-07-18 23:53:32,941 INFO source.AvroSource: Avro source s1 started.
> 2012-07-18 23:54:02,514 DEBUG properties.PropertiesFileConfigurationProvider:
> Checking file:flume.conf for changes
> 2012-07-18 23:54:32,521 DEBUG properties.PropertiesFileConfigurationProvider:
> Checking file:flume.conf for changes
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators:
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira