[ 
https://issues.apache.org/jira/browse/FLUME-719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13078659#comment-13078659
 ] 

Jonathan Hsieh commented on FLUME-719:
--------------------------------------

Quoting Damien Hardy 

{quote}
Flume was working well with CDH3 but now, after the upgrade, it fails on decorators 
every time I try to inject data.
Here is my conf for the node:
iflume02.int.adencf.local       Wed Jul 13 11:09:37 CEST 2011   default-flow    
collectorSource digest("MD5", "digest",base64=true) haproxyLogExtractor() [ 
hbase("log", "%Y%m%d%H%M%S.%{nanos}-%{digest}", "default", "body", 
"%{body}",writeBufferSize=50000), amqpShuntingSink(), console]

haproxyLogExtractor is a tiny decorator which extracts values from haproxy logs.
hbase is a tiny sink which writes data to HBase.
amqpShuntingSink is a tiny sink which writes data to an AMQP message broker.
This is the log of the flume-node (failing on the built-in Digest decorator), but if 
I remove the digest decorator it fails on haproxyLogExtractor() with the 
same exception on append().

2011-07-26 12:12:42,916 INFO org.apache.zookeeper.ZooKeeper: Initiating client 
connection, 
connectString=hadoop002.back.adencf.local:2181,hadoop001.back.adencf.local:2181 
sessionTimeout=180000 watcher=hconnection
2011-07-26 12:12:42,926 INFO org.apache.zookeeper.ClientCnxn: Opening socket 
connection to server hadoop002.back.adencf.local/192.168.130.93:2181
2011-07-26 12:12:42,931 INFO org.apache.zookeeper.ClientCnxn: Socket connection 
established to hadoop002.back.adencf.local/192.168.130.93:2181, initiating 
session
2011-07-26 12:12:42,951 INFO org.apache.zookeeper.ClientCnxn: Session 
establishment complete on server 
hadoop002.back.adencf.local/192.168.130.93:2181, sessionid = 0x23165e59a790000, 
negotiated timeout = 40000
2011-07-26 12:12:43,052 INFO com.figarocms.flume.HBaseSink: HBase sink 
successfully opened
2011-07-26 12:12:43,070 INFO 
org.springframework.context.support.ClassPathXmlApplicationContext: Refreshing 
org.springframework.context.support.ClassPathXmlApplicationContext@5b6b9e62: 
startup date [Tue Jul 26 12:12:43 CEST 2011]; root of context hierarchy
2011-07-26 12:12:43,103 INFO 
org.springframework.beans.factory.xml.XmlBeanDefinitionReader: Loading XML bean 
definitions from class path resource [router.configuration.xml]
2011-07-26 12:12:43,184 INFO 
org.springframework.beans.factory.xml.XmlBeanDefinitionReader: Loading XML bean 
definitions from class path resource [routes.configuration.xml]
2011-07-26 12:12:43,234 INFO 
org.springframework.beans.factory.support.DefaultListableBeanFactory: 
Pre-instantiating singletons in 
org.springframework.beans.factory.support.DefaultListableBeanFactory@303ec561: 
defining beans 
[amqp,routes,rabbitConnectionFactory,JsonMessageConverter,rabbit.template,amqpAdmin,router];
 root of factory hierarchy
2011-07-26 12:12:43,274 INFO 
org.springframework.beans.factory.config.PropertiesFactoryBean: Loading 
properties file from class path resource [amqp.properties]
2011-07-26 12:12:43,354 INFO 
com.cloudera.flume.handlers.debug.ConsoleEventSink: ConsoleEventSink( debug ) 
opened
2011-07-26 12:14:11,981 ERROR com.cloudera.flume.core.connector.DirectDriver: 
Closing down due to exception during append calls
java.lang.UnsupportedOperationException
at java.util.Collections$UnmodifiableMap.put(Collections.java:1285)
at com.cloudera.flume.core.EventBaseImpl.set(EventBaseImpl.java:65)
at com.cloudera.flume.core.DigestDecorator.append(DigestDecorator.java:59)
at 
com.cloudera.flume.core.connector.DirectDriver$PumperThread.run(DirectDriver.java:110)
2011-07-26 12:14:11,982 INFO com.cloudera.flume.core.connector.DirectDriver: 
Connector logicalNode iflume01.int.adencf.local-19 exited with error: null
java.lang.UnsupportedOperationException
at java.util.Collections$UnmodifiableMap.put(Collections.java:1285)
at com.cloudera.flume.core.EventBaseImpl.set(EventBaseImpl.java:65)
at com.cloudera.flume.core.DigestDecorator.append(DigestDecorator.java:59)
at 
com.cloudera.flume.core.connector.DirectDriver$PumperThread.run(DirectDriver.java:110)
2011-07-26 12:14:11,982 INFO com.cloudera.flume.collector.CollectorSource: 
closed
2011-07-26 12:14:12,982 INFO 
com.cloudera.flume.handlers.thrift.ThriftEventSource: Closed server on port 
35853...
2011-07-26 12:14:12,982 INFO 
com.cloudera.flume.handlers.thrift.ThriftEventSource: Queue still has 0 
elements ...
2011-07-26 12:14:12,983 INFO com.figarocms.flume.HBaseSink: HBase sink 
successfully closed
{quote}
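
The stack trace shows EventBaseImpl.set() failing because the event's attribute map, as built by the Avro/Thrift conversion utilities, is wrapped in Collections.unmodifiableMap, so any decorator that writes an attribute (Digest, the custom extractor) blows up on append(). The intended fix can be sketched as below; this is a minimal illustration, not the actual FLUME-719 patch, and the class and method names here are hypothetical:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the conversion fix: copy incoming attributes into
// a mutable HashMap instead of handing downstream code an unmodifiable view.
public class ConvertUtilSketch {

    // Pre-fix behavior: put() on this map throws
    // UnsupportedOperationException, matching the stack trace above.
    static Map<String, byte[]> immutableAttrs(Map<String, byte[]> attrs) {
        return Collections.unmodifiableMap(attrs);
    }

    // Post-fix behavior: a defensive mutable copy, so decorators can
    // call set()/put() on the event's attribute map safely.
    static Map<String, byte[]> mutableAttrs(Map<String, byte[]> attrs) {
        return new HashMap<>(attrs);
    }
}
```

With the mutable copy, DigestDecorator can add its "digest" attribute without touching the caller's original map.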

> Flume attribute field map from Avro|ThriftEventConvertUtil.toFlumeEvent() 
> should be mutable
> -------------------------------------------------------------------------------------------
>
>                 Key: FLUME-719
>                 URL: https://issues.apache.org/jira/browse/FLUME-719
>             Project: Flume
>          Issue Type: Bug
>          Components: Sinks+Sources
>    Affects Versions: v0.9.4
>            Reporter: Jonathan Hsieh
>            Priority: Critical
>             Fix For: v0.9.5
>
>
> This fixes the root cause of the bug created by the FLUME-620 fix.  The 
> problem was that the getAttrs methods in both converters returned immutable 
> hashtables.  This change makes the conversion methods return mutable maps, 
> and adds tests for these cases.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

