Hi, we are getting the following error on one of our producers, does the
stack trace ring any bells for anyone?
2015-05-28 20:27:44 GMT - Failed to send messages
java.lang.IncompatibleClassChangeError
at
scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
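An `IncompatibleClassChangeError` inside `scala.collection` classes usually means binary-incompatible jars are mixed on the classpath, most often a kafka jar built against one Scala version running next to a different `scala-library`. A hedged Gradle fragment showing matching versions (the versions here are illustrative assumptions, not taken from this thread):

```groovy
dependencies {
    // The Scala binary versions must agree: kafka_2.10 needs a 2.10.x scala-library.
    compile 'org.scala-lang:scala-library:2.10.4'
    compile 'org.apache.kafka:kafka_2.10:0.8.2.1'
}
```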
rio should be rather rare though.
>
>
> Regards,
> Magnus
>
> 2015-05-12 20:18 GMT+02:00 Scott Chapman :
>

We are basically using kafka as a transport mechanism for multi-line log
files.
So, for this we are using single partition topics (with a replica for good
measure) writing to a multi-broker cluster.
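A topic shaped like that might be created with the 0.8.x tooling along these lines (the ZooKeeper address and topic name are placeholders):

```shell
bin/kafka-topics.sh --create --zookeeper localhost:2181 \
  --topic app-log.host1 --partitions 1 --replication-factor 2
```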
Our producer basically reads a file line-by-line (as it is being written
to) and publishes each li
:28 PM, "Scott Chapman" wrote:
>
> > Yea, however I don't get async behavior. When kafka is down the log
> blocks,
> > which is kinda nasty to my app.
> >
> > On Tue Feb 24 2015 at 2:27:09 PM Joe Stein wrote:
> >
> > > Producer type isn
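For reference, the old (0.8.x) Scala producer's async behavior is driven by a handful of settings; a sketch, with illustrative values (the broker address is a placeholder):

```properties
metadata.broker.list=localhost:9092
producer.type=async
# How long / how many messages to buffer before a batch is sent.
queue.buffering.max.ms=5000
queue.buffering.max.messages=10000
# 0 = drop messages immediately when the buffer is full instead of blocking;
# -1 (the default) blocks the caller, which is the behavior described above.
queue.enqueue.timeout.ms=0
```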
> >>
> >>> are you including
> >>> https://github.com/stealthly/scala-kafka/blob/master/build.gradle#L122
> >>> in your project?
> >>>
> >>> ~ Joe Stein
> >>> - - - - - - - - - - - - - - - - -
> >>>
>
>
> http://www.stealth.ly
> - - - - - - - - - - - - - - - - -
>
> On Tue, Feb 24, 2015 at 2:02 PM, Scott Chapman
> wrote:
>
> > Yea, when I try to set type to async (exactly like t
>
> On Mon, Feb 23, 2015 at 4:42 PM, Alex Melville
> wrote:
>
> > ^^ I would really appreciate this as well. It's unclear how to get log4j
> > working with Kafka when you have no prior experience with log4j.
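A minimal log4j.properties wiring, assuming the appender that shipped with Kafka 0.8.x; the broker address and topic are placeholders:

```properties
log4j.rootLogger=INFO, KAFKA
log4j.appender.KAFKA=kafka.producer.KafkaLog4jAppender
log4j.appender.KAFKA.brokerList=localhost:9092
log4j.appender.KAFKA.topic=app-log
log4j.appender.KAFKA.layout=org.apache.log4j.PatternLayout
log4j.appender.KAFKA.layout.ConversionPattern=%d %p %c - %m%n
```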
>
> > On Mon, Feb 23, 2015 at 9:45 AM, Steven Schlansker <
> > sschlans...@opentable.com> wrote:
> >
> >> Here’s my attempt at a Logback version, should be fairly easily ported:
> >>
> >> https://github.com/opentable/otj-logging/blob/master/kafka/
I am just starting to use it and could use a little guidance. I was able to
get it working with 0.8.2 but am not clear on best practices for using it.
Anyone willing to help me out a bit? Got a few questions, like how to
protect applications from when kafka is down or unreachable.
It seems like a
We're running 0.8.2 at the moment, and now I think I understand the concept
of consumer groups and how to see their offsets.
It does appear that consumer groups periodically get deleted (not sure why).
My question is, what's the general lifecycle of a consumer group? I would
assume they hang aro
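One broker-side setting that could explain groups seeming to disappear (an assumption on my part; the thread doesn't confirm it) is the offset retention window used with Kafka-based offset storage in 0.8.2. Committed offsets for an inactive group are discarded after this window, which makes the group look deleted:

```properties
# Broker config; the value shown is the 0.8.2 default (one day).
offsets.retention.minutes=1440
```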
ses/threads/components that
> consume each topic, you'll find that it doesn't matter much either way. In
> that case I would probably go with individual groups for isolation.
>
> -Todd
>
>
> On Mon, Feb 16, 2015 at 3:30 PM, Scott Chapman
> wrote:
>
> >
We have several dozen topics, each with only one partition (replication
factor of 2).
We are wanting to launch console-consumer for these in a manner that will
support saving offsets (so they can resume where they left off if they need
to be restarted). And I know consumer groups is the mechanism for
encoder/decoder you use in producer / consumer)
>
> Gwen
>
> On Mon, Feb 9, 2015 at 6:23 AM, Scott Chapman
> wrote:
>
> > So, avoiding a bit of a long explanation on why I'm doing it this way...
> >
> > But essentially, I am trying to put multi-line message
So, avoiding a bit of a long explanation on why I'm doing it this way...
But essentially, I am trying to put multi-line messages into kafka and then
parse them in logstash.
What I think I am seeing in kafka (using console-consumer) is this:
"line 1 \nline 2 \nline 3\n"
Then when I get it into l
like to grok those values out.
Let me know if anything comes to mind.
Thanks!
-Scott
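If each Kafka message carries one multi-line log entry, as in the quoted payload, the consumer side can split it back into lines before the grok stage. A minimal stdlib sketch (the class and method names are my own, not from any Kafka or logstash API):

```java
public class MultiLineMessage {
    // Split one Kafka message payload back into individual log lines.
    // Java's split() drops the trailing empty string a final '\n' would produce.
    public static String[] toLines(String payload) {
        return payload.split("\n");
    }

    public static void main(String[] args) {
        String message = "line 1 \nline 2 \nline 3\n";
        for (String line : toLines(message)) {
            System.out.println("[" + line.trim() + "]");
        }
    }
}
```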
On Thu Jan 22 2015 at 10:00:21 AM Joseph Lawson wrote:
> Just trying to get everything in prior to the 1.5 release.
>
>
> From: Scott Chapman
>
st got merged into
> jruby-kafka and I'm working them up the chain to my logstash-kafka and then
> pass it to the logstash-input/output/-kafka plugin.
>
> ________
> From: Scott Chapman
> Sent: Wednesday, January 21, 2015 8:32 PM
> To: users@k
We are starting to use the new logstash-kafka plugin, and I am wondering if
it is possible to read multiple topics? Or do you need to create separate
logstashes for each topic to parse?
We are consuming multi-line logs from a service running on a bunch of
different hosts, so we address that by cre
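If the older logstash kafka input supports it (I believe the 0.8-era plugin did, but treat this as an assumption), a single input can subscribe to several topics via a regex `white_list` instead of one `topic_id`; the ZooKeeper address and pattern here are placeholders:

```
input {
  kafka {
    zk_connect => "localhost:2181"
    white_list => "app-log\..*"
    group_id   => "logstash"
  }
}
```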
jndi/rmi://localhost:/jmxrmi
always returns:
1421543777895
1421543779895
1421543781895
1421543783896
1421543785896
What am I missing?
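The repeating numbers look like the timestamp column JmxTool prints once per reporting interval (note they are ~2000 ms apart). Decoding one as epoch milliseconds, a quick stdlib check, puts it in January 2015, consistent with the thread's dates:

```java
import java.time.Instant;

public class JmxTimeColumn {
    public static void main(String[] args) {
        // First value from the output above, read as milliseconds since the epoch.
        long ms = 1421543777895L;
        Instant t = Instant.ofEpochMilli(ms);
        System.out.println(t); // 2015-01-18T01:16:17.895Z
    }
}
```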
On Sat Jan 17 2015 at 8:11:38 PM Scott Chapman wrote:
> Thanks, that second one might be material. I find that if I run without
> any arguments I get no o
0
> https://issues.apache.org/jira/browse/KAFKA-1679
>
> On Sun, Jan 18, 2015 at 3:12 AM, Scott Chapman
> wrote:
>
While I appreciate all the suggestions on other JMX related tools, my
question is really about the JMXTool included in and documented in Kafka
and how to use it to dump all the JMX data. I can get it to dump some
mbeans, so i know my config is working. But what I can't seem to do (which
is describe
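A full-dump invocation might look like the following sketch (0.8.x; the JMX port 9999 is a placeholder and must match the broker's `JMX_PORT`). With no `--object-name` given, JmxTool queries all MBeans:

```shell
bin/kafka-run-class.sh kafka.tools.JmxTool \
  --jmx-url service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi \
  --reporting-interval 2000
```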
also software as a service options too.
>
> /***
> Joe Stein
> Founder, Principal Consultant
> Big Data Open Source Security LLC
> http://www.stealth.ly
> Twitter: @allthingshadoop
> ********/
> On Jan 16, 2015 8:
I apologize in advance for a noob question, just getting started with
kafka, and trying to get JMX data from it.
So, I had thought that running the JMXTool with no arguments would dump all
the data, but it doesn't seem to return.
I do know that querying for a specific MBean name seems to work. But