IncompatibleClassChangeError

2015-05-28 Thread Scott Chapman
Hi, we are getting the following error on one of our producers, does the
stack trace ring any bells for anyone?

2015-05-28 20:27:44 GMT - Failed to send messages
java.lang.IncompatibleClassChangeError
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:33)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:233)
at scala.collection.mutable.HashMap.map(HashMap.scala:45)
at kafka.producer.async.DefaultEventHandler.kafka$producer$async$DefaultEventHandler$$groupMessagesToSet(DefaultEventHandler.scala:301)
at kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(DefaultEventHandler.scala:104)
at kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(DefaultEventHandler.scala:100)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:95)
at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:95)
at scala.collection.Iterator$class.foreach(Iterator.scala:772)
at scala.collection.mutable.HashTable$$anon$1.foreach(HashTable.scala:157)
at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:190)
at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:45)
at scala.collection.mutable.HashMap.foreach(HashMap.scala:95)
at kafka.producer.async.DefaultEventHandler.dispatchSerializedData(DefaultEventHandler.scala:100)
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:72)
at kafka.producer.Producer.send(Producer.scala:76)
at kafka.javaapi.producer.Producer.send(Producer.scala:33)
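
A java.lang.IncompatibleClassChangeError thrown from scala.collection internals is a classic symptom of a Scala binary-compatibility mismatch — e.g. a kafka jar built for one Scala major version running against a different scala-library. As a hedged diagnostic sketch (the lib/ path and jar names are assumptions about your deployment), check that the Scala version embedded in the kafka artifact name matches the scala-library jar:

```shell
# The Scala major version in the kafka artifact name
# (e.g. kafka_2.11-0.8.2.1.jar) should match scala-library-<version>.jar
# (e.g. scala-library-2.11.5.jar). A 2.9/2.10 vs 2.11 mix can produce
# IncompatibleClassChangeError at runtime.
ls lib/ | grep -Ei 'kafka_|scala-library'
```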


Re: Could this be happening?

2015-05-12 Thread Scott Chapman
We are using the Java producer API (0.8.2.1 if I am not mistaken). We are
using the sync producer type, though.

On Tue, May 12, 2015 at 3:50 PM Magnus Edenhill  wrote:

> Hi Scott,
>
> what producer client are you using?
>
> Reordering is possible in async producers in the case of temporary broker
> failures
> and the combination of request.required.acks != 0 and retries > 0.
>
> Consider the case where a producer has 20 messages in-flight to the broker.
> Out of those, messages #1-10 fail due to some temporary failure on the
> broker side, but messages #11-20 are accepted.
> When the producer receives error results from the broker for messages #1-10,
> it will try to resend these 10 failed messages; they are now accepted,
> causing them to end up after message #20 in the log - thus reordered.
>
> This failure scenario should be rather rare though.
>
>
> Regards,
> Magnus
>
> 2015-05-12 20:18 GMT+02:00 Scott Chapman :
>
> > We are basically using kafka as a transport mechanism for multi-line log
> > files.
> >
> > So, for this we are using single partition topics (with a replica for
> good
> > measure) writing to a multi-broker cluster.
> >
> > Our producer basically reads a file line-by-line (as it is being written
> > to) and publishes each line as a message to the topic. We are also
> writing
> > as quickly as we can (not waiting for ACK).
> >
> > What I am seeing is occasionally the messages in the topic appear to be
> > slightly out of order when compared to the source file they were based
> on.
> >
> > I am wondering if this might happen when the producer switches brokers
> because
> > we are not waiting for the ACK before continuing to write.
> >
> > Does this make any sense??
> >
> > Thanks in advance!
> >
> > -Scott
> >
>
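
Magnus's scenario can be sketched as a toy simulation — illustrative Python only, not Kafka code; the broker log is modeled as a list and the retry as a second append:

```python
# Toy model of async-producer retry reordering (not real Kafka code).
def broker_append(log, batch, fail_first_n):
    """Append a batch; the first fail_first_n messages fail, the rest are accepted."""
    failed = batch[:fail_first_n]
    log.extend(batch[fail_first_n:])   # the accepted suffix lands in the log
    return failed                      # the failed prefix comes back as errors

log = []
batch = list(range(1, 21))             # 20 in-flight messages
failed = broker_append(log, batch, fail_first_n=10)   # messages 1-10 fail

# The producer retries the failed messages; this time they all succeed,
# but they are appended *after* the previously accepted ones.
broker_append(log, failed, fail_first_n=0)

print(log)  # [11, ..., 20, 1, ..., 10] -- reordered relative to send order
```

With the newer Java producer this window can be closed by capping in-flight requests (or disabling retries), at a throughput cost — whether those settings apply to Scott's 0.8.x client is an assumption worth checking against its docs.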


Could this be happening?

2015-05-12 Thread Scott Chapman
We are basically using kafka as a transport mechanism for multi-line log
files.

So, for this we are using single partition topics (with a replica for good
measure) writing to a multi-broker cluster.

Our producer basically reads a file line-by-line (as it is being written
to) and publishes each line as a message to the topic. We are also writing
as quickly as we can (not waiting for ACK).

What I am seeing is occasionally the messages in the topic appear to be
slightly out of order when compared to the source file they were based on.

I am wondering if this might happen when the producer switches brokers because
we are not waiting for the ACK before continuing to write.

Does this make any sense??

Thanks in advance!

-Scott


Re: Anyone using log4j Appender for Kafka?

2015-02-24 Thread Scott Chapman
Nah, it is expected behavior for a synchronous call: it waits and times out.
Sorry, should have been more specific.

I was really looking for async.

On Tue Feb 24 2015 at 3:56:19 PM Joe Stein  wrote:

> Sounds like https://issues.apache.org/jira/browse/KAFKA-1788 maybe
> On Feb 24, 2015 2:28 PM, "Scott Chapman"  wrote:
>
> > Yea, however I don't get async behavior. When kafka is down the log
> blocks,
> > which is kinda nasty to my app.
> >
> > On Tue Feb 24 2015 at 2:27:09 PM Joe Stein  wrote:
> >
> > > Producer type isn't needed anymore with the new producer so in the
> > > logger properties just leave that out in 0.8.2 and it should work.
> > >
> > > On Tue, Feb 24, 2015 at 2:24 PM, Joe Stein 
> wrote:
> > >
> > > > Interesting, looks like a breaking change from 0.8.1
> > > > https://github.com/apache/kafka/blob/0.8.1/core/src/
> > > main/scala/kafka/producer/KafkaLog4jAppender.scala
> > > > to 0.8.2
> > > > https://github.com/apache/kafka/blob/0.8.2/core/src/
> > > main/scala/kafka/producer/KafkaLog4jAppender.scala
> > > >
> > > > On Tue, Feb 24, 2015 at 2:21 PM, Joe Stein 
> > wrote:
> > > >
> > > >> and kafka too :)
> > > >>
> > > >> On Tue, Feb 24, 2015 at 2:21 PM, Joe Stein 
> > > wrote:
> > > >>
> > > >>> are you including
> > > >>>
> > https://github.com/stealthly/scala-kafka/blob/master/build.gradle#L122
> > > >>> in your project?
> > > >>>
> > > >>> ~ Joe Stein
> > > >>> - - - - - - - - - - - - - - - - -
> > > >>>
> > > >>>   http://www.stealth.ly
> > > >>> - - - - - - - - - - - - - - - - -
> > > >>>
> > > >>> On Tue, Feb 24, 2015 at 2:02 PM, Scott Chapman <
> sc...@woofplanet.com
> > >
> > > >>> wrote:
> > > >>>
> > > >>>> Yea, when I try to set type to async (exactly like the example) I
> > get:
> > > >>>> log4j:WARN No such property [producerType] in
> > > >>>> kafka.producer.KafkaLog4jAppender.
> > > >>>>
> > > >>>> On Tue Feb 24 2015 at 1:35:54 PM Joe Stein 
> > > >>>> wrote:
> > > >>>>
> > > >>>> > Here is sample log4j.properties
> > > >>>> > https://github.com/stealthly/scala-kafka/blob/master/src/tes
> > > >>>> > t/resources/log4j.properties#L54-L67
> > > >>>> >
> > > >>>> > I _almost_ have always pulled the class
> > > >>>> > https://github.com/apache/kafka/blob/0.8.2/core/src/main/
> > > >>>> > scala/kafka/producer/KafkaLog4jAppender.scala
> > > >>>> > internal
> > > >>>> > to private repo and changed it as things came up... e.g.
> > > setSource(),
> > > >>>> > setTags() blah blah...
> > > >>>> >
> > > >>>> > Paul Otto has an open source version
> > > >>>> > https://github.com/potto007/kafka-appender-layout that you
> could
> > > try
> > > >>>> out
> > > >>>> > too that he built to tackle some of the layout things.
> > > >>>> >
> > > >>>> > ~ Joe Stein
> > > >>>> > - - - - - - - - - - - - - - - - -
> > > >>>> >
> > > >>>> >   http://www.stealth.ly
> > > >>>> > - - - - - - - - - - - - - - - - -
> > > >>>> >
> > > >>>> > On Mon, Feb 23, 2015 at 4:42 PM, Alex Melville <
> > amelvi...@g.hmc.edu
> > > >
> > > >>>> > wrote:
> > > >>>> >
> > > >>>> > > ^^ I would really appreciate this as well. It's unclear how to
> > get
> > > >>>> log4j
> > > >>>> > > working with Kafka when you have no prior experience with
> log4j.
> > > >>>> > >
> > > >>>> > > On Mon, Feb 23, 2015 at 4:39 AM, Scott Chapman <
> > > >>>> sc...@woofplanet.com>
> > > >>>> > > wrote:
> > > >>>> > >
> > > >>>> > > > Thanks. But we're using log

Re: Anyone using log4j Appender for Kafka?

2015-02-24 Thread Scott Chapman
Yea, however I don't get async behavior. When kafka is down the log blocks,
which is kinda nasty to my app.

On Tue Feb 24 2015 at 2:27:09 PM Joe Stein  wrote:

> Producer type isn't needed anymore with the new producer so in the
> logger properties just leave that out in 0.8.2 and it should work.
>
> On Tue, Feb 24, 2015 at 2:24 PM, Joe Stein  wrote:
>
> > Interesting, looks like a breaking change from 0.8.1
> > https://github.com/apache/kafka/blob/0.8.1/core/src/
> main/scala/kafka/producer/KafkaLog4jAppender.scala
> > to 0.8.2
> > https://github.com/apache/kafka/blob/0.8.2/core/src/
> main/scala/kafka/producer/KafkaLog4jAppender.scala
> >
> > On Tue, Feb 24, 2015 at 2:21 PM, Joe Stein  wrote:
> >
> >> and kafka too :)
> >>
> >> On Tue, Feb 24, 2015 at 2:21 PM, Joe Stein 
> wrote:
> >>
> >>> are you including
> >>> https://github.com/stealthly/scala-kafka/blob/master/build.gradle#L122
> >>> in your project?
> >>>
> >>> ~ Joe Stein
> >>> - - - - - - - - - - - - - - - - -
> >>>
> >>>   http://www.stealth.ly
> >>> - - - - - - - - - - - - - - - - -
> >>>
> >>> On Tue, Feb 24, 2015 at 2:02 PM, Scott Chapman 
> >>> wrote:
> >>>
> >>>> Yea, when I try to set type to async (exactly like the example) I get:
> >>>> log4j:WARN No such property [producerType] in
> >>>> kafka.producer.KafkaLog4jAppender.
> >>>>
> >>>> On Tue Feb 24 2015 at 1:35:54 PM Joe Stein 
> >>>> wrote:
> >>>>
> >>>> > Here is sample log4j.properties
> >>>> > https://github.com/stealthly/scala-kafka/blob/master/src/tes
> >>>> > t/resources/log4j.properties#L54-L67
> >>>> >
> >>>> > I _almost_ have always pulled the class
> >>>> > https://github.com/apache/kafka/blob/0.8.2/core/src/main/
> >>>> > scala/kafka/producer/KafkaLog4jAppender.scala
> >>>> > internal
> >>>> > to private repo and changed it as things came up... e.g.
> setSource(),
> >>>> > setTags() blah blah...
> >>>> >
> >>>> > Paul Otto has an open source version
> >>>> > https://github.com/potto007/kafka-appender-layout that you could
> try
> >>>> out
> >>>> > too that he built to tackle some of the layout things.
> >>>> >
> >>>> > ~ Joe Stein
> >>>> > - - - - - - - - - - - - - - - - -
> >>>> >
> >>>> >   http://www.stealth.ly
> >>>> > - - - - - - - - - - - - - - - - -
> >>>> >
> >>>> > On Mon, Feb 23, 2015 at 4:42 PM, Alex Melville  >
> >>>> > wrote:
> >>>> >
> >>>> > > ^^ I would really appreciate this as well. It's unclear how to get
> >>>> log4j
> >>>> > > working with Kafka when you have no prior experience with log4j.
> >>>> > >
> >>>> > > On Mon, Feb 23, 2015 at 4:39 AM, Scott Chapman <
> >>>> sc...@woofplanet.com>
> >>>> > > wrote:
> >>>> > >
> >>>> > > > Thanks. But we're using log4j. I tried setting the type to async
> >>>> but it
> >>>> > > > generated a warning of no such field. Is there any real
> >>>> documentation
> >>>> > on
> >>>> > > > the log4j appender?
> >>>> > > >
> >>>> > > > On Mon Feb 23 2015 at 2:58:54 AM Steven Schlansker <
> >>>> > > > sschlans...@opentable.com> wrote:
> >>>> > > >
> >>>> > > > > We just configure our logback.xml to have two Appenders, an
> >>>> > > AsyncAppender
> >>>> > > > > -> KafkaAppender, and FileAppender (or ConsoleAppender as
> >>>> > appropriate).
> >>>> > > > >
> >>>> > > > > AsyncAppender removes more failure cases too, e.g. a health
> >>>> check
> >>>> > > hanging
> > > >>>> > > > > rather than returning rapidly could block your application.
> >>>> > > > >
> >>>> > > > > On Feb 22, 2015, at 11:26 PM, anthony musyoki <
> >>>> > > anthony.mu

Re: Anyone using log4j Appender for Kafka?

2015-02-24 Thread Scott Chapman
I'm including log4j-1.2.17, slf4j-api-1.7.6, slf4j-log4j12-1.6.1,
kafka-clients-0.8.2.0, scala-library-2.11.5, and kafka_2.11-0.8.2.0

(java app)

On Tue Feb 24 2015 at 2:23:40 PM Joe Stein  wrote:

> are you including
> https://github.com/stealthly/scala-kafka/blob/master/build.gradle#L122 in
> your project?
>
> ~ Joe Stein
> - - - - - - - - - - - - - - - - -
>
>   http://www.stealth.ly
> - - - - - - - - - - - - - - - - -
>
> On Tue, Feb 24, 2015 at 2:02 PM, Scott Chapman 
> wrote:
>
> > Yea, when I try to set type to async (exactly like the example) I get:
> > log4j:WARN No such property [producerType] in
> > kafka.producer.KafkaLog4jAppender.
> >
> > On Tue Feb 24 2015 at 1:35:54 PM Joe Stein  wrote:
> >
> > > Here is sample log4j.properties
> > > https://github.com/stealthly/scala-kafka/blob/master/src/tes
> > > t/resources/log4j.properties#L54-L67
> > >
> > > I _almost_ have always pulled the class
> > > https://github.com/apache/kafka/blob/0.8.2/core/src/main/
> > > scala/kafka/producer/KafkaLog4jAppender.scala
> > > internal
> > > to private repo and changed it as things came up... e.g. setSource(),
> > > setTags() blah blah...
> > >
> > > Paul Otto has an open source version
> > > https://github.com/potto007/kafka-appender-layout that you could try
> out
> > > too that he built to tackle some of the layout things.
> > >
> > > ~ Joe Stein
> > > - - - - - - - - - - - - - - - - -
> > >
> > >   http://www.stealth.ly
> > > - - - - - - - - - - - - - - - - -
> > >
> > > On Mon, Feb 23, 2015 at 4:42 PM, Alex Melville 
> > > wrote:
> > >
> > > > ^^ I would really appreciate this as well. It's unclear how to get
> > log4j
> > > > working with Kafka when you have no prior experience with log4j.
> > > >
> > > > On Mon, Feb 23, 2015 at 4:39 AM, Scott Chapman  >
> > > > wrote:
> > > >
> > > > > Thanks. But we're using log4j. I tried setting the type to async
> but
> > it
> > > > > generated a warning of no such field. Is there any real
> documentation
> > > on
> > > > > the log4j appender?
> > > > >
> > > > > On Mon Feb 23 2015 at 2:58:54 AM Steven Schlansker <
> > > > > sschlans...@opentable.com> wrote:
> > > > >
> > > > > > We just configure our logback.xml to have two Appenders, an
> > > > AsyncAppender
> > > > > > -> KafkaAppender, and FileAppender (or ConsoleAppender as
> > > appropriate).
> > > > > >
> > > > > > AsyncAppender removes more failure cases too, e.g. a health check
> > > > hanging
> > > > > > rather than returning rapidly could block your application.
> > > > > >
> > > > > > On Feb 22, 2015, at 11:26 PM, anthony musyoki <
> > > > anthony.musy...@gmail.com
> > > > > >
> > > > > > wrote:
> > > > > >
> > > > > > > There's also another one here.
> > > > > > >
> > > > > > > https://github.com/danielwegener/logback-kafka-appender.
> > > > > > >
> > > > > > > It has a fallback appender which might address the issue of
> Kafka
> > > > being
> > > > > > > un-available.
> > > > > > >
> > > > > > >
> > > > > > > On Mon, Feb 23, 2015 at 9:45 AM, Steven Schlansker <
> > > > > > > sschlans...@opentable.com> wrote:
> > > > > > >
> > > > > > >> Here’s my attempt at a Logback version, should be fairly
> easily
> > > > > ported:
> > > > > > >>
> > > > > > >> https://github.com/opentable/otj-logging/blob/master/kafka/
> > > > > > src/main/java/com/opentable/logging/KafkaAppender.java
> > > > > > >>
> > > > > > >> On Feb 22, 2015, at 1:36 PM, Scott Chapman <
> > sc...@woofplanet.com>
> > > > > > wrote:
> > > > > > >>
> > > > > > >>> I am just starting to use it and could use a little
> guidance. I
> > > was
> > > > > > able
> > > > > > >> to
> > > > > > >>> get it working with 0.8.2 but am not clear on best practices
> > for
> > > > > using
> > > > > > >> it.
> > > > > > >>>
> > > > > > >>> Anyone willing to help me out a bit? Got a few questions,
> like
> > > how
> > > > to
> > > > > > >>> protect applications from when kafka is down or unreachable.
> > > > > > >>>
> > > > > > >>> It seems like a great idea for being able to get logs from
> > > existing
> > > > > > >>> applications to be collected by kafka.
> > > > > > >>>
> > > > > > >>> Thanks in advance!
> > > > > > >>
> > > > > > >>
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>


Re: Anyone using log4j Appender for Kafka?

2015-02-24 Thread Scott Chapman
Yea, when I try to set type to async (exactly like the example) I get:
log4j:WARN No such property [producerType] in
kafka.producer.KafkaLog4jAppender.

On Tue Feb 24 2015 at 1:35:54 PM Joe Stein  wrote:

> Here is sample log4j.properties
> https://github.com/stealthly/scala-kafka/blob/master/src/tes
> t/resources/log4j.properties#L54-L67
>
> I _almost_ have always pulled the class
> https://github.com/apache/kafka/blob/0.8.2/core/src/main/
> scala/kafka/producer/KafkaLog4jAppender.scala
> internal
> to private repo and changed it as things came up... e.g. setSource(),
> setTags() blah blah...
>
> Paul Otto has an open source version
> https://github.com/potto007/kafka-appender-layout that you could try out
> too that he built to tackle some of the layout things.
>
> ~ Joe Stein
> - - - - - - - - - - - - - - - - -
>
>   http://www.stealth.ly
> - - - - - - - - - - - - - - - - -
>
> On Mon, Feb 23, 2015 at 4:42 PM, Alex Melville 
> wrote:
>
> > ^^ I would really appreciate this as well. It's unclear how to get log4j
> > working with Kafka when you have no prior experience with log4j.
> >
> > On Mon, Feb 23, 2015 at 4:39 AM, Scott Chapman 
> > wrote:
> >
> > > Thanks. But we're using log4j. I tried setting the type to async but it
> > > generated a warning of no such field. Is there any real documentation
> on
> > > the log4j appender?
> > >
> > > On Mon Feb 23 2015 at 2:58:54 AM Steven Schlansker <
> > > sschlans...@opentable.com> wrote:
> > >
> > > > We just configure our logback.xml to have two Appenders, an
> > AsyncAppender
> > > > -> KafkaAppender, and FileAppender (or ConsoleAppender as
> appropriate).
> > > >
> > > > AsyncAppender removes more failure cases too, e.g. a health check
> > hanging
> > > > rather than returning rapidly could block your application.
> > > >
> > > > On Feb 22, 2015, at 11:26 PM, anthony musyoki <
> > anthony.musy...@gmail.com
> > > >
> > > > wrote:
> > > >
> > > > > There's also another one here.
> > > > >
> > > > > https://github.com/danielwegener/logback-kafka-appender.
> > > > >
> > > > > It has a fallback appender which might address the issue of Kafka
> > being
> > > > > un-available.
> > > > >
> > > > >
> > > > > On Mon, Feb 23, 2015 at 9:45 AM, Steven Schlansker <
> > > > > sschlans...@opentable.com> wrote:
> > > > >
> > > > >> Here’s my attempt at a Logback version, should be fairly easily
> > > ported:
> > > > >>
> > > > >> https://github.com/opentable/otj-logging/blob/master/kafka/
> > > > src/main/java/com/opentable/logging/KafkaAppender.java
> > > > >>
> > > > >> On Feb 22, 2015, at 1:36 PM, Scott Chapman 
> > > > wrote:
> > > > >>
> > > > >>> I am just starting to use it and could use a little guidance. I
> was
> > > > able
> > > > >> to
> > > > >>> get it working with 0.8.2 but am not clear on best practices for
> > > using
> > > > >> it.
> > > > >>>
> > > > >>> Anyone willing to help me out a bit? Got a few questions, like
> how
> > to
> > > > >>> protect applications from when kafka is down or unreachable.
> > > > >>>
> > > > >>> It seems like a great idea for being able to get logs from
> existing
> > > > >>> applications to be collected by kafka.
> > > > >>>
> > > > >>> Thanks in advance!
> > > > >>
> > > > >>
> > > >
> > > >
> > >
> >
>


Re: Anyone using log4j Appender for Kafka?

2015-02-23 Thread Scott Chapman
Thanks. But we're using log4j. I tried setting the type to async but it
generated a warning of no such field. Is there any real documentation on
the log4j appender?

On Mon Feb 23 2015 at 2:58:54 AM Steven Schlansker <
sschlans...@opentable.com> wrote:

> We just configure our logback.xml to have two Appenders, an AsyncAppender
> -> KafkaAppender, and FileAppender (or ConsoleAppender as appropriate).
>
> AsyncAppender removes more failure cases too, e.g. a health check hanging
> rather than returning rapidly could block your application.
>
> On Feb 22, 2015, at 11:26 PM, anthony musyoki 
> wrote:
>
> > There's also another one here.
> >
> > https://github.com/danielwegener/logback-kafka-appender.
> >
> > It has a fallback appender which might address the issue of Kafka being
> > un-available.
> >
> >
> > On Mon, Feb 23, 2015 at 9:45 AM, Steven Schlansker <
> > sschlans...@opentable.com> wrote:
> >
> >> Here’s my attempt at a Logback version, should be fairly easily ported:
> >>
> >> https://github.com/opentable/otj-logging/blob/master/kafka/
> src/main/java/com/opentable/logging/KafkaAppender.java
> >>
> >> On Feb 22, 2015, at 1:36 PM, Scott Chapman 
> wrote:
> >>
> >>> I am just starting to use it and could use a little guidance. I was
> able
> >> to
> >>> get it working with 0.8.2 but am not clear on best practices for using
> >> it.
> >>>
> >>> Anyone willing to help me out a bit? Got a few questions, like how to
> >>> protect applications from when kafka is down or unreachable.
> >>>
> >>> It seems like a great idea for being able to get logs from existing
> >>> applications to be collected by kafka.
> >>>
> >>> Thanks in advance!
> >>
> >>
>
>


Anyone using log4j Appender for Kafka?

2015-02-22 Thread Scott Chapman
I am just starting to use it and could use a little guidance. I was able to
get it working with 0.8.2 but am not clear on best practices for using it.

Anyone willing to help me out a bit? Got a few questions, like how to
protect applications from when kafka is down or unreachable.

It seems like a great idea for being able to get logs from existing
applications to be collected by kafka.

Thanks in advance!
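
For anyone landing here later, a minimal log4j.properties sketch for the 0.8.2 appender might look like the following. The property names (brokerList, topic) are taken from kafka.producer.KafkaLog4jAppender as of 0.8.2 but should be verified against the class for your version; note that producerType is no longer a property in 0.8.2, which explains the "No such property" warning discussed elsewhere in this thread. The broker address and topic name are assumptions.

```properties
# Hedged sketch -- verify property names against KafkaLog4jAppender
# in your Kafka version before relying on this.
log4j.rootLogger=INFO, stdout, KAFKA

log4j.appender.KAFKA=kafka.producer.KafkaLog4jAppender
log4j.appender.KAFKA.brokerList=localhost:9092
log4j.appender.KAFKA.topic=app-logs
log4j.appender.KAFKA.layout=org.apache.log4j.PatternLayout
log4j.appender.KAFKA.layout.ConversionPattern=%d{ISO8601} %p %c - %m%n
# producerType is gone in 0.8.2 -- setting it only produces a log4j warning.

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ISO8601} %p %c - %m%n
```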


Need help understanding consumer group management.

2015-02-17 Thread Scott Chapman
We're running 0.8.2 at the moment, and now I think I understand the concept
of consumer groups and how to see their offsets.

It does appear that consumer groups periodically get deleted (not sure why).

My question is, what's the general lifecycle of a consumer group? I would
assume they hang around until someone deletes them.

And is there a way to get a list of consumer groups? What's the proper way
to delete one, and what's the proper way to reset one?
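
For the ZooKeeper-based high-level consumer in 0.8.x, group state lives under /consumers in ZooKeeper, so listing, deleting, and resetting can be done with the ZooKeeper CLI. A sketch — the host, port, and group name are assumptions, and note that with offsets.storage=kafka in 0.8.2 committed offsets live in the __consumer_offsets topic instead:

```shell
# List consumer groups registered by the high-level consumer.
bin/zookeeper-shell.sh localhost:2181 ls /consumers

# Inspect one group's committed offsets per topic.
bin/zookeeper-shell.sh localhost:2181 ls /consumers/my-group/offsets

# Delete (and thereby reset) a group -- do this only while none of its
# consumers are running.
bin/zookeeper-shell.sh localhost:2181 rmr /consumers/my-group
```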


Re: Need to understand consumer groups.

2015-02-17 Thread Scott Chapman
I think I found my answer reading the 0.8.2 release notes about the changes
to consumer groups; basically the offset key is a triple (group, topic,
partition). That's all I needed to know.

Thanks!

(but got a different follow up question coming!)
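
That keying can be modeled as a plain map — an illustrative sketch, not the actual storage format: because the group is part of the key, two groups consuming the same topic-partition keep independent committed positions.

```python
# Toy model: committed offsets are keyed by (group, topic, partition).
offsets: dict[tuple[str, str, int], int] = {}

def commit(group: str, topic: str, partition: int, offset: int) -> None:
    """Record the latest committed offset for one (group, topic, partition)."""
    offsets[(group, topic, partition)] = offset

# Two console consumers in different groups reading the same partition
# advance independently.
commit("console-a", "logs", 0, 42)
commit("console-b", "logs", 0, 7)

print(offsets[("console-a", "logs", 0)])  # 42
print(offsets[("console-b", "logs", 0)])  # 7
```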

On Tue Feb 17 2015 at 9:34:29 AM Todd Palino  wrote:

> I'm assuming from your description here that all of these topics are being
> consumed by a single consumer (i.e. a single process that does something
> different with each topic it sees). In general, you're going to get more
> efficiency out of a single consumer instance that consumes multiple topics
> than you will out of multiple consumers that each consume a single topic.
> Which means that you should go with a single consumer group to describe the
> topics consumed by a single consumer.
>
> If, on the other hand, you have separate processes/threads/components that
> consume each topic, you'll find that it doesn't matter much either way. In
> that case I would probably go with individual groups for isolation.
>
> -Todd
>
>
> On Mon, Feb 16, 2015 at 3:30 PM, Scott Chapman 
> wrote:
>
> > We have several dozen topics, each with only one partition (replication
> > factor of 2).
> >
> > We are wanting to launch console-consumer for these in a manner that will
> > support saving offsets (so they can resume where they left off if they
> need
> > to be restarted). And I know consumer groups is the mechanism for doing
> > that.
> >
> > My question is, should we use a single consumer-group for all the console
> > consumers (we are launching one for each topic) or should we be
> > generating
> > topic-specific consumer groups?
> >
> > Thanks in advance!
> >
> > -Scott.
> >
>


Need to understand consumer groups.

2015-02-16 Thread Scott Chapman
We have several dozen topics, each with only one partition (replication
factor of 2).

We are wanting to launch console-consumer for these in a manner that will
support saving offsets (so they can resume where they left off if they need
to be restarted). And I know consumer groups is the mechanism for doing
that.

My question is, should we use a single consumer-group for all the console
consumers (we are launching one for each topic) or should we be generating
topic-specific consumer groups?

Thanks in advance!

-Scott.


Re: Handling multi-line messages?

2015-02-09 Thread Scott Chapman
Yea, I think I figured it out. Didn't realize the person doing the test
created the message using the console-producer, so I think the newline was
escaped.

On Mon Feb 09 2015 at 11:59:57 AM Gwen Shapira 
wrote:

> Since the console-consumer seems to display strings correctly, it sounds
> like an issue with LogStash parser. Perhaps you'll have better luck asking
> on LogStash mailing list?
>
> Kafka just stores the bytes you put in and gives the same bytes out when
> you read messages. There's no parsing or encoding done in Kafka itself
> (other than the encoder/decoder you use in producer / consumer)
>
> Gwen
>
> On Mon, Feb 9, 2015 at 6:23 AM, Scott Chapman 
> wrote:
>
> > So, avoiding a bit of a long explanation on why I'm doing it this way...
> >
> > But essentially, I am trying to put multi-line messages into kafka and
> then
> > parse them in logstash.
> >
> > What I think I am seeing in kafka (using console-consumer) is this:
> >  "line 1 \nline 2 \nline 3\n"
> >
> > Then when I get it into logstash I am seeing it as:
> >{
> > "message" => "line 1 \\nline 2 \\nline \n",
> > "@version" => "1",
> > "@timestamp" => "2015-02-09T13:55:36.566Z",
> >   }
> >
> > My question is, is this what I should expect? I think I can probably
> > figure out how to take the single line and break it apart in logstash.
> > But do I need to?
> >
> > Any thoughts?
> >
>


Handling multi-line messages?

2015-02-09 Thread Scott Chapman
So, avoiding a bit of a long explanation on why I'm doing it this way...

But essentially, I am trying to put multi-line messages into kafka and then
parse them in logstash.

What I think I am seeing in kafka (using console-consumer) is this:
 "line 1 \nline 2 \nline 3\n"

Then when I get it into logstash I am seeing it as:
   {
"message" => "line 1 \\nline 2 \\nline \n",
"@version" => "1",
"@timestamp" => "2015-02-09T13:55:36.566Z",
  }

My question is, is this what I should expect? I think I can probably figure
out how to take the single line and break it apart in logstash. But do I
need to?

Any thoughts?
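
What Scott is seeing is consistent with the newlines having been escaped into literal backslash-n sequences before the message was produced, rather than anything Kafka does (as Gwen notes, Kafka stores bytes verbatim). A small illustration, independent of Kafka:

```python
# A message with real newlines (what the application intended to send).
real = "line 1\nline 2\nline 3\n"

# What an escaping producer emits: each newline byte replaced by the two
# characters backslash and "n" -- one physical line, no line breaks left.
escaped = real.replace("\n", "\\n")

print(real.count("\n"))     # three actual line breaks
print(escaped.count("\n"))  # zero -- logstash then sees literal "\n" text
```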


Re: using the new logstash-kafka plugin

2015-01-22 Thread Scott Chapman
Hey Joe, with other input types (like file) one can reference things like
the path in the filter section.

Is it possible to refer to the topic_id in the filter section? I tried and
nothing obvious worked.

We are encoding a few things (like host name, and type) in the name of the
topic, and would like to grok those values out.

Let me know if anything comes to mind.

Thanks!

-Scott

On Thu Jan 22 2015 at 10:00:21 AM Joseph Lawson  wrote:

> Just trying to get everything in prior to the 1.5 release.
>
> 
> From: Scott Chapman 
> Sent: Thursday, January 22, 2015 9:32 AM
> To: users@kafka.apache.org
> Subject: Re: using the new logstash-kafka plugin
>
> Awesome, what release are you targeting? Or are you able to make updates to
> the plugin outside of kafka?
>
> On Thu Jan 22 2015 at 9:31:26 AM Joseph Lawson 
> wrote:
>
> > Scott you will have to do just one topic per input right now but multiple
> > topics per group, whitelisting and blacklisting just got merged into
> > jruby-kafka and I'm working them up the chain to my logstash-kafka and
> then
> > pass it to the logstash-input/output/-kafka plugin.
> >
> > 
> > From: Scott Chapman 
> > Sent: Wednesday, January 21, 2015 8:32 PM
> > To: users@kafka.apache.org
> > Subject: using the new logstash-kafka plugin
> >
> > We are starting to use the new logstash-kafka plugin, and I am wondering
> if
> > it is possible to read multiple topics? Or do you need to create separate
> > logstashes for each topic to parse?
> >
> > We are consuming multi-line logs from a service running on a bunch of
> > different hosts, so we address that by creating single partition topics
> for
> > our producers.
> >
> > We then want to have logstash consume them for ELK.
> >
> > Thanks in advance!
> >
>


Re: using the new logstash-kafka plugin

2015-01-22 Thread Scott Chapman
Awesome, what release are you targeting? Or are you able to make updates to
the plugin outside of kafka?

On Thu Jan 22 2015 at 9:31:26 AM Joseph Lawson  wrote:

> Scott you will have to do just one topic per input right now but multiple
> topics per group, whitelisting and blacklisting just got merged into
> jruby-kafka and I'm working them up the chain to my logstash-kafka and then
> pass it to the logstash-input/output/-kafka plugin.
>
> ________
> From: Scott Chapman 
> Sent: Wednesday, January 21, 2015 8:32 PM
> To: users@kafka.apache.org
> Subject: using the new logstash-kafka plugin
>
> We are starting to use the new logstash-kafka plugin, and I am wondering if
> it is possible to read multiple topics? Or do you need to create separate
> logstashes for each topic to parse?
>
> We are consuming multi-line logs from a service running on a bunch of
> different hosts, so we address that by creating single partition topics for
> our producers.
>
> We then want to have logstash consume them for ELK.
>
> Thanks in advance!
>


using the new logstash-kafka plugin

2015-01-21 Thread Scott Chapman
We are starting to use the new logstash-kafka plugin, and I am wondering if
it is possible to read multiple topics? Or do you need to create separate
logstashes for each topic to parse?

We are consuming multi-line logs from a service running on a bunch of
different hosts, so we address that by creating single partition topics for
our producers.

We then want to have logstash consume them for ELK.

Thanks in advance!
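
Until multi-topic support lands, the workaround is one kafka input block per topic inside a single Logstash process. A sketch — the parameter names (zk_connect, topic_id, group_id) follow the early logstash-kafka plugin and the topic names are made up, so check both against your plugin version:

```text
input {
  kafka {
    zk_connect => "localhost:2181"
    topic_id   => "app1-host1-log"
    group_id   => "logstash"
  }
  kafka {
    zk_connect => "localhost:2181"
    topic_id   => "app2-host1-log"
    group_id   => "logstash"
  }
}
```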


Re: dumping JMX data

2015-01-17 Thread Scott Chapman
So, related question.

If I query for a specific object name, I always seem to get UNIX time:
./bin/kafka-run-class.sh kafka.tools.JmxTool --object-name
'"kafka.server":name="UnderReplicatedPartitions",type="ReplicaManager"'
--jmx-url service:jmx:rmi:///jndi/rmi://localhost:/jmxrmi

always returns:
1421543777895
1421543779895
1421543781895
1421543783896
1421543785896

What am I missing?
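
For what it's worth, the first column JmxTool prints is the sample timestamp in milliseconds since the epoch; when none of the requested attributes resolve (the behavior tracked in the KAFKA-1679/1680 JIRAs linked elsewhere in this thread), that column is all you see. A hedged sketch that names the gauge attribute explicitly — the --attributes and --reporting-interval options exist in the 0.8.x JmxTool but check --help for your version, and the JMX port 9999 is an assumption:

```shell
./bin/kafka-run-class.sh kafka.tools.JmxTool \
  --object-name '"kafka.server":name="UnderReplicatedPartitions",type="ReplicaManager"' \
  --attributes Value \
  --reporting-interval 2000 \
  --jmx-url service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi
```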

On Sat Jan 17 2015 at 8:11:38 PM Scott Chapman  wrote:

> Thanks, that second one might be material. I find that if I run without
> any arguments I get no output and it just keeps running. *sigh*
>
> On Sat Jan 17 2015 at 7:58:52 PM Manikumar Reddy 
> wrote:
>
>> JIRAs related to the issue are
>>
>> https://issues.apache.org/jira/browse/KAFKA-1680
>> https://issues.apache.org/jira/browse/KAFKA-1679
>>
>> On Sun, Jan 18, 2015 at 3:12 AM, Scott Chapman 
>> wrote:
>>
>> > While I appreciate all the suggestions on other JMX related tools, my
>> > question is really about the JMXTool included in and documented in Kafka
>> > and how to use it to dump all the JMX data. I can get it to dump some
>> > mbeans, so i know my config is working. But what I can't seem to do
>> (which
>> > is described in the documentation) is to dump all attributes of all
>> > objects.
>> >
>> > Please, does anyone using it have any experience that might be able to
>> > help me?
>> >
>> > Thanks in advance!
>> >
>> > On Sat Jan 17 2015 at 12:39:56 PM Albert Strasheim 
>> > wrote:
>> >
>> > > On Fri, Jan 16, 2015 at 5:52 PM, Joe Stein 
>> wrote:
>> > > > Here are some more tools for that
>> > > > https://cwiki.apache.org/confluence/display/KAFKA/JMX+Reporters
>> > > depending
>> > > > on what you have in place and what you are trying todo different
>> > options
>> > > > exist.
>> > > >
>> > > > A lot of folks like JMX Trans.
>> > >
>> > > We tried JMX Trans for a while, but didn't like it very much.
>> > >
>> > > Jolokia looks promising. Trying that now.
>> > >
>> > > http://www.jolokia.org/
>> > >
>> >
>>
>


Re: dumping JMX data

2015-01-17 Thread Scott Chapman
Thanks, that second one might be material. I find that if I run without any
arguments I get no output and it just keeps running. *sigh*

On Sat Jan 17 2015 at 7:58:52 PM Manikumar Reddy 
wrote:

> JIRAs related to the issue are
>
> https://issues.apache.org/jira/browse/KAFKA-1680
> https://issues.apache.org/jira/browse/KAFKA-1679
>
> On Sun, Jan 18, 2015 at 3:12 AM, Scott Chapman 
> wrote:
>
> > While I appreciate all the suggestions on other JMX-related tools, my
> > question is really about the JMXTool included with and documented in Kafka,
> > and how to use it to dump all the JMX data. I can get it to dump some
> > MBeans, so I know my config is working. But what I can't seem to do
> > (which is described in the documentation) is to dump all attributes of all
> > objects.
> >
> > Does anyone using it have experience that might be able to help me?
> >
> > Thanks in advance!
> >
> > On Sat Jan 17 2015 at 12:39:56 PM Albert Strasheim 
> > wrote:
> >
> > > On Fri, Jan 16, 2015 at 5:52 PM, Joe Stein 
> wrote:
> > > > Here are some more tools for that
> > > > https://cwiki.apache.org/confluence/display/KAFKA/JMX+Reporters
> > > depending
> > > > on what you have in place and what you are trying to do, different
> > > > options exist.
> > > >
> > > > A lot of folks like JMX Trans.
> > >
> > > We tried JMX Trans for a while, but didn't like it very much.
> > >
> > > Jolokia looks promising. Trying that now.
> > >
> > > http://www.jolokia.org/
> > >
> >
>


Re: dumping JMX data

2015-01-17 Thread Scott Chapman
While I appreciate all the suggestions on other JMX-related tools, my
question is really about the JMXTool included with and documented in Kafka,
and how to use it to dump all the JMX data. I can get it to dump some
MBeans, so I know my config is working. But what I can't seem to do (which
is described in the documentation) is to dump all attributes of all objects.

Does anyone using it have experience that might be able to help me?

Thanks in advance!

On Sat Jan 17 2015 at 12:39:56 PM Albert Strasheim 
wrote:

> On Fri, Jan 16, 2015 at 5:52 PM, Joe Stein  wrote:
> > Here are some more tools for that
> > https://cwiki.apache.org/confluence/display/KAFKA/JMX+Reporters
> depending
> > on what you have in place and what you are trying to do, different options
> > exist.
> >
> > A lot of folks like JMX Trans.
>
> We tried JMX Trans for a while, but didn't like it very much.
>
> Jolokia looks promising. Trying that now.
>
> http://www.jolokia.org/
>


Re: dumping JMX data

2015-01-16 Thread Scott Chapman
Thanks, I actually ran into those already. I was hoping just to be able to
dump the JMX data plain and simple. I can consume it with other tools but I
am mostly just trying to get the metrics in some format... any format.

I have some limitations on what I can build/run, so hoping I can just
leverage what is already there...

On Fri Jan 16 2015 at 8:54:38 PM Joe Stein  wrote:

> Here are some more tools for that
> https://cwiki.apache.org/confluence/display/KAFKA/JMX+Reporters depending
> on what you have in place and what you are trying to do, different options
> exist.
>
> A lot of folks like JMX Trans.
>
> My favorite quick out of the box is using
> https://github.com/airbnb/kafka-statsd-metrics2 and sending to
> https://github.com/kamon-io/docker-grafana-graphite you can quickly chart
> and see everything going on.
>
> There are also software as a service options too.
>
> /***
> Joe Stein
> Founder, Principal Consultant
> Big Data Open Source Security LLC
> http://www.stealth.ly
> Twitter: @allthingshadoop
> ********/
> On Jan 16, 2015 8:42 PM, "Scott Chapman"  wrote:
>
> > I apologize in advance for a noob question; I'm just getting started with
> > Kafka and trying to get JMX data from it.
> >
> > So, I had thought that running the JMXTool with no arguments would dump
> > all the data, but it doesn't seem to return.
> >
> > I do know that querying for a specific MBean name seems to work. But I was
> > hoping to dump everything.
> >
> > I had a hard time finding any examples of using JMXTool, hoping someone
> > with some experience might be able to point me in the right direction.
> >
> > Thanks in advance!
> >
>


dumping JMX data

2015-01-16 Thread Scott Chapman
I apologize in advance for a noob question; I'm just getting started with
Kafka and trying to get JMX data from it.

So, I had thought that running the JMXTool with no arguments would dump all
the data, but it doesn't seem to return.

I do know that querying for a specific MBean name seems to work. But I was
hoping to dump everything.

I had a hard time finding any examples of using JMXTool, hoping someone
with some experience might be able to point me in the right direction.

Thanks in advance!
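[Editor's note: the `--jmx-url` argument JMXTool takes is a standard JMX service URL. The sketch below shows that connection path end to end; the in-process connector server and port 9901 are stand-ins, purely so the client side has something to connect to — against a broker you would use the broker's actual JMX host and port:]

```java
import java.lang.management.ManagementFactory;
import java.rmi.registry.LocateRegistry;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXConnectorServer;
import javax.management.remote.JMXConnectorServerFactory;
import javax.management.remote.JMXServiceURL;

public class JmxUrlDemo {
    public static void main(String[] args) throws Exception {
        // Stand-in for a broker's JMX endpoint: an RMI registry plus an
        // in-process connector server exposing the platform MBean server.
        LocateRegistry.createRegistry(9901);
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        JMXServiceURL url =
            new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:9901/jmxrmi");
        JMXConnectorServer server =
            JMXConnectorServerFactory.newJMXConnectorServer(url, null, mbs);
        server.start();

        // Client side: this is the same kind of URL JMXTool receives
        // via --jmx-url.
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            for (ObjectName name :
                    connector.getMBeanServerConnection().queryNames(null, null)) {
                System.out.println(name);
            }
        } finally {
            server.stop();
        }
    }
}
```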