I am facing this again with the latest changes:

This time I am setting HADOOP_HOME=~/hadoop-0.20.2-cdh3u2
and starting the Flume agent. It picks up all the Hadoop jars, and the logs
look fine.

However, telnet localhost <port> fails, and so does the client. Could this
be related to any recent changes?
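One quick sanity check is whether the Avro source actually bound its port at all. A minimal sketch (uses bash's /dev/tcp redirection; 41414 is the port from the config quoted further down, substitute your own):

```shell
# Report whether anything is listening on host:port, without needing
# telnet or netcat installed. Relies on bash's /dev/tcp pseudo-device.
check_port() {
  host="$1"; port="$2"
  if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}

check_port localhost 41414
```

If this prints "closed" while the agent log claims a clean startup, the source never bound, which points at the agent config rather than the client.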

- inder


On Wed, Apr 18, 2012 at 8:40 PM, Inder Pall <[email protected]> wrote:

> Agreed. Why isn't there any error? That would definitely ease debugging.
>
> - inder
>
>
> On Wed, Apr 18, 2012 at 2:09 AM, Mike Percy <[email protected]> wrote:
>
>> Hmm yeah, we should definitely fail fast in this case.
>>
>> The correct way to enable support for the HDFS sink needs to be documented
>> somewhere... but the right way to include the jars is to set HADOOP_HOME,
>> or to add the Hadoop bin directory to your PATH, so that the flume-ng
>> script can find the right libs for you. The problem is that essentially no
>> version of HDFS is protocol-compatible with any other version of Hadoop,
>> but the libs are (mostly) binary-compatible. Hence the technique you can
>> see in flume-ng of appending the output of "hadoop classpath" to the Flume
>> classpath at startup, which seems to be the lesser of many evils. Of
>> course, if you don't use the HDFS sink, you never have to worry about this.
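The technique Mike describes can be sketched roughly like this (variable names and the /opt/flume path are illustrative, not the actual flume-ng script contents):

```shell
# Append the local Hadoop install's jars to Flume's classpath at startup.
build_flume_classpath() {
  cp_base="$1"   # Flume's own lib classpath
  hadoop_bin=""
  # Prefer HADOOP_HOME; otherwise fall back to a `hadoop` binary on the PATH.
  if [ -n "${HADOOP_HOME:-}" ] && [ -x "${HADOOP_HOME}/bin/hadoop" ]; then
    hadoop_bin="${HADOOP_HOME}/bin/hadoop"
  elif command -v hadoop >/dev/null 2>&1; then
    hadoop_bin="hadoop"
  fi
  if [ -n "${hadoop_bin}" ]; then
    # "hadoop classpath" prints the colon-separated jar list of the local
    # Hadoop install, so Flume loads libs matching that exact HDFS version.
    printf '%s:%s\n' "${cp_base}" "$("${hadoop_bin}" classpath)"
  else
    # No Hadoop found: Flume runs, but the HDFS sink will fail at runtime.
    printf '%s\n' "${cp_base}"
  fi
}

build_flume_classpath "/opt/flume/lib/*"
```

The silent fall-through in the else branch is exactly the failure mode discussed in this thread: without HADOOP_HOME or hadoop on the PATH, nothing complains until the sink is used.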
>>
>> Best,
>> Mike
>>
>> On Apr 17, 2012, at 9:28 AM, Inder Pall wrote:
>>
>> > Mike & Brock,
>> >
>> > Yes, FLUME-1093 is the issue. I checked out the latest source, and
>> > hadoop-core*.jar is missing from flume-ng-dist/target/flume-ng-1.2*/lib/
>> > after the build.
>> >
>> > After adding hadoop-core.jar, it works.
>> >
>> > Thanks,
>> > - Inder
>> >
>> > On Tue, Apr 17, 2012 at 9:18 PM, Brock Noland <[email protected]>
>> wrote:
>> >
>> >> I wonder if the hadoop jars are not being added to your classpath:
>> >>
>> >> https://issues.apache.org/jira/browse/FLUME-1093
>> >>
>> >> Brock
>> >>
>> >> On Tue, Apr 17, 2012 at 3:23 PM, Inder Pall <[email protected]>
>> wrote:
>> >>> Thanks, I didn't know that. Here is the log:
>> >>> http://pastebin.com/TQQc9TWU
>> >>>
>> >>> Thanks,
>> >>> - Inder
>> >>>
>> >>> On Tue, Apr 17, 2012 at 8:37 PM, Brock Noland <[email protected]>
>> >> wrote:
>> >>>
>> >>>> The mailing list doesn't like attachments; yours was not included. I
>> >>>> would use pastebin or something similar.
>> >>>>
>> >>>> On Tue, Apr 17, 2012 at 3:04 PM, Inder Pall <[email protected]>
>> >> wrote:
>> >>>>> Brock,
>> >>>>>
>> >>>>> Looking at the code and from past usage, that wasn't an issue.
>> >>>>> However, I tried the suggested change, i.e.
>> >>>>> agent1.sinks.log-sink1.hdfs.path = /tmp/flume-data/, with no luck.
>> >>>>>
>> >>>>> I am attaching flume.log to this email for reference. The server does
>> >>>>> start up, but telnet/connection fails. Oddly, if I change the sink
>> >>>>> type to logger, it does work.
>> >>>>>
>> >>>>> Not sure if I am missing something obvious here.
>> >>>>>
>> >>>>> Thanks,
>> >>>>> - inder
>> >>>>>
>> >>>>>
>> >>>>>
>> >>>>> On Tue, Apr 17, 2012 at 8:17 PM, Brock Noland <[email protected]>
>> >>>> wrote:
>> >>>>>>
>> >>>>>> Hi,
>> >>>>>>
>> >>>>>> inline
>> >>>>>>
>> >>>>>> On Tue, Apr 17, 2012 at 2:41 PM, Inder Pall <[email protected]>
>> >>>> wrote:
>> >>>>>>> folks,
>> >>>>>>>
>> >>>>>>> I am not sure whether anyone else is facing this, or whether I am
>> >>>>>>> missing something here. Here are some observations:
>> >>>>>>>
>> >>>>>>> 1. checkout flume trunk
>> >>>>>>> 2. provide the following config
>> >>>>>>>
>> >>>>>>> # Define a memory channel called ch1 on agent1
>> >>>>>>> agent1.channels.ch1.type = memory
>> >>>>>>>
>> >>>>>>> # Define an Avro source called avro-source1 on agent1 and tell it
>> >>>>>>> # to bind to 0.0.0.0:41414. Connect it to channel ch1.
>> >>>>>>> agent1.sources.avro-source1.type = avro
>> >>>>>>> agent1.sources.avro-source1.bind = 0.0.0.0
>> >>>>>>> agent1.sources.avro-source1.port = 41414
>> >>>>>>
>> >>>>>> 1) Does localhost work?
>> >>>>>> 2) Can you paste the output of the following?
>> >>>>>>
>> >>>>>> sudo /usr/sbin/lsof -i tcp | grep 41414
>> >>>>>>
>> >>>>>>> agent1.sources.avro-source1.channels = ch1
>> >>>>>>>
>> >>>>>>> # Define a logger sink that simply logs all events it receives
>> >>>>>>> # and connect it to the other end of the same channel.
>> >>>>>>> agent1.sinks.log-sink1.type = hdfs
>> >>>>>>> agent1.sinks.log-sink1.channel = ch1
>> >>>>>>> agent1.sinks.log-sink1.hdfs.path = hdfs://localhost
>> >>>>>>
>> >>>>>> I think this should be something like /user/noland/
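For the record, a fully qualified HDFS sink path usually includes the namenode host and port as well as a writable directory. A hypothetical fragment (host, port, and directory are placeholders for your own cluster, not values from this thread):

```
agent1.sinks.log-sink1.hdfs.path = hdfs://namenode-host:8020/user/flume/events
```

A bare `hdfs://localhost` as in the config above names a filesystem but no directory, which is a common source of sink startup trouble.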
>> >>>>>>
>> >>>>>>>
>> >>>>>>>
>> >>>>>>> # Finally, now that we've defined all of our components, tell
>> >>>>>>> # agent1 which ones we want to activate.
>> >>>>>>> agent1.sources = avro-source1
>> >>>>>>> agent1.sinks = log-sink1
>> >>>>>>> agent1.channels = ch1
>> >>>>>>> (end of flume.cfg)
>> >>>>>>>
>> >>>>>>> 3. start the agent: ./flume-ng agent --conf ../conf/ -f flume.cfg
>> >>>>>>> -n agent1
>> >>>>>>
>> >>>>>> Does flume.log tell you it's getting started up?
>> >>>>>>
>> >>>>>> --
>> >>>>>> Apache MRUnit - Unit testing MapReduce -
>> >>>>>> http://incubator.apache.org/mrunit/
>> >>>>>
>> >>>>>
>> >>>>>
>> >>>>>
>> >>>>> --
>> >>>>> Thanks,
>> >>>>> - Inder
>> >>>>>  Tech Platforms @Inmobi
>> >>>>>  Linkedin - http://goo.gl/eR4Ub
>> >>>>
>> >>>>
>> >>>>
>> >>>> --
>> >>>> Apache MRUnit - Unit testing MapReduce -
>> >>>> http://incubator.apache.org/mrunit/
>> >>>>
>> >>>
>> >>>
>> >>>
>> >>> --
>> >>> Thanks,
>> >>> - Inder
>> >>> Tech Platforms @Inmobi
>> >>> Linkedin - http://goo.gl/eR4Ub
>> >>
>> >>
>> >>
>> >> --
>> >> Apache MRUnit - Unit testing MapReduce -
>> >> http://incubator.apache.org/mrunit/
>> >>
>> >
>> >
>> >
>> > --
>> > Thanks,
>> > - Inder
>> >  Tech Platforms @Inmobi
>> >  Linkedin - http://goo.gl/eR4Ub
>>
>>
>
>
> --
> Thanks,
> - Inder
>   Tech Platforms @Inmobi
>   Linkedin - http://goo.gl/eR4Ub
>



-- 
Thanks,
- Inder
  Tech Platforms @Inmobi
  Linkedin - http://goo.gl/eR4Ub
