Hi Chhaya,

First, it looks like one agent should be enough. Don't run agents on the Hadoop cluster itself (i.e. not on the data nodes). You can give the agent its own machine, share it with other "edge node" services (like Hue), or install it on the MQ machine (if that machine is not too busy).
Second, "destinationName" should probably have been called "source" — it is the queue or topic in JMS that holds the data to be read, and "destinationType" says whether that destination is a QUEUE or a TOPIC. There is a nice example in the docs:

a1.sources = r1
a1.channels = c1
a1.sources.r1.type = jms
a1.sources.r1.channels = c1
a1.sources.r1.initialContextFactory = org.apache.activemq.jndi.ActiveMQInitialContextFactory
a1.sources.r1.connectionFactory = GenericConnectionFactory
a1.sources.r1.providerURL = tcp://mqserver:61616
a1.sources.r1.destinationName = BUSINESS_DATA
a1.sources.r1.destinationType = QUEUE

On Thu, May 7, 2015 at 1:50 AM, Vishwakarma, Chhaya <[email protected]> wrote:

> Hi All,
>
> I want to read data from IBM MQ and put it into HDFS.
>
> I looked into the JMS source of Flume; it seems it can connect to IBM MQ,
> but I'm not understanding what "destinationType" and "destinationName"
> mean in the list of required properties. Can someone please explain?
>
> Also, how should I be configuring my Flume agents?
>
> flumeAgent1 (runs on the same machine as MQ) reads MQ data --->
> flumeAgent2 (runs on the Hadoop cluster) writes into HDFS
>
> OR is only one agent on the Hadoop cluster enough?
>
> Can someone help me understand how MQ can be integrated with Flume?
>
> Thanks,
> Chhaya
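To make the single-agent setup concrete, here is a sketch of what a full config (JMS source -> file channel -> HDFS sink) might look like for IBM MQ. Note the assumptions: IBM MQ is usually looked up over JNDI via a .bindings file generated with IBM's JMSAdmin tool, so the initialContextFactory, providerURL path, connection factory name, queue name, and all directories below are placeholders you would replace with your own values.

```
# One agent: JMS source (IBM MQ) -> file channel -> HDFS sink.
# Sketch only -- names and paths below are assumptions, not working values.
a1.sources = r1
a1.channels = c1
a1.sinks = k1

# JMS source: IBM MQ JNDI objects are typically read from a .bindings file
# created with JMSAdmin, via the filesystem JNDI context factory.
a1.sources.r1.type = jms
a1.sources.r1.channels = c1
a1.sources.r1.initialContextFactory = com.sun.jndi.fscontext.RefFSContextFactory
a1.sources.r1.providerURL = file:///opt/mq/jndi
a1.sources.r1.connectionFactory = MyConnectionFactory
a1.sources.r1.destinationType = QUEUE
a1.sources.r1.destinationName = BUSINESS_DATA

# File channel: durable buffering between the source and the sink.
a1.channels.c1.type = file
a1.channels.c1.checkpointDir = /var/flume/checkpoint
a1.channels.c1.dataDirs = /var/flume/data

# HDFS sink: writes events into date-partitioned directories.
a1.sinks.k1.type = hdfs
a1.sinks.k1.channel = c1
a1.sinks.k1.hdfs.path = /flume/mq-data/%Y/%m/%d
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.rollInterval = 300
a1.sinks.k1.hdfs.useLocalTimeStamp = true
```

You would also need the IBM MQ client jars on the Flume agent's classpath for the JNDI lookup and connection to work; then start the agent with something like: flume-ng agent -n a1 -c conf -f mq-to-hdfs.conf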
