Re: OOM errors

2016-11-29 Thread Guozhang Wang
Where does the "20 minutes" come from? I thought the "aggregate" operator
in your

stream->aggregate->filter->foreach

topology is not a windowed aggregation, so the aggregate results will keep
accumulating.
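For reference, a minimal sketch in Java of the difference being described; the String/Long types, topic name, and store names below are placeholders, not taken from Jon's actual application:

```
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;
import org.apache.kafka.streams.kstream.TimeWindows;

public class AggregationSketch {
    public static void main(String[] args) {
        KStreamBuilder builder = new KStreamBuilder();
        KStream<String, Long> stream =
                builder.stream(Serdes.String(), Serdes.Long(), "input-topic");

        // Non-windowed: one aggregate per key, kept forever, so local state
        // grows with the key space.
        stream.groupByKey(Serdes.String(), Serdes.Long())
              .aggregate(() -> 0L, (key, value, agg) -> agg + value,
                         Serdes.Long(), "unbounded-store");

        // Windowed: aggregates are scoped to 20-minute windows (1-minute hops)
        // and expired windows are dropped after the retention period, so the
        // local state stays bounded.
        stream.groupByKey(Serdes.String(), Serdes.Long())
              .aggregate(() -> 0L, (key, value, agg) -> agg + value,
                         TimeWindows.of(20 * 60 * 1000L).advanceBy(60 * 1000L)
                                    .until(60 * 60 * 1000L),
                         Serdes.Long(), "windowed-store");
        // (Building and starting the KafkaStreams instance is omitted here.)
    }
}
```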


Guozhang


On Tue, Nov 29, 2016 at 8:40 PM, Jon Yeargers 
wrote:

> "keep increasing" - why? It seems (to me) that the aggregates should be 20
> minutes long. After that the memory should be released.
>
> Not true?
>
> On Tue, Nov 29, 2016 at 3:53 PM, Guozhang Wang  wrote:
>
> > Jon,
> >
> > Note that in your "aggregate" function, if it is not a windowed aggregate
> > then the aggregation results will keep increasing in your local state
> > stores unless you're pretty sure that the aggregate key space is bounded.
> > This is not only related to disk space but also memory, since the current
> > default persistent state store, RocksDB, takes its own block cache both
> > on heap and off heap.
> >
> > In addition, RocksDB has write / space amplification. That is, if the
> > estimated size of the aggregated state store is 1GB, it can actually take
> > up to 1GB * max-amplification-factor in the worst case, and similarly for
> > the block cache and write buffer.
> >
> >
> > Guozhang
> >
> > On Tue, Nov 29, 2016 at 12:25 PM, Jon Yeargers  >
> > wrote:
> >
> > > App eventually got OOM-killed. Consumed 53G of swap space.
> > >
> > > Does it require a different GC? Some extra settings for the java cmd
> > line?
> > >
> > > On Tue, Nov 29, 2016 at 12:05 PM, Jon Yeargers <
> jon.yearg...@cedexis.com
> > >
> > > wrote:
> > >
> > > >  I cloned/built 10.2.0-SNAPSHOT
> > > >
> > > > App hasn't been OOM-killed yet but it's up to 66% mem.
> > > >
> > > > App takes > 10 min to start now. Needless to say this is problematic.
> > > >
> > > > The 'kafka-streams' scratch space has consumed 37G and still
> climbing.
> > > >
> > > >
> > > > On Tue, Nov 29, 2016 at 10:48 AM, Jon Yeargers <
> > jon.yearg...@cedexis.com
> > > >
> > > > wrote:
> > > >
> > > >> Does every broker need to be updated or just my client app(s)?
> > > >>
> > > >> On Tue, Nov 29, 2016 at 10:46 AM, Matthias J. Sax <
> > > matth...@confluent.io>
> > > >> wrote:
> > > >>
> > > >>> What version do you use?
> > > >>>
> > > >>> There is a memory leak in the latest version 0.10.1.0. The bug got
> > > >>> already fixed in trunk and 0.10.1 branch.
> > > >>>
> > > >>> There is already a discussion about a 0.10.1.1 bug fix release. For
> > > now,
> > > >>> you could build the Kafka Streams from the sources by yourself.
> > > >>>
> > > >>> -Matthias
> > > >>>
> > > >>>
> > > >>> On 11/29/16 10:30 AM, Jon Yeargers wrote:
> > > >>> > My KStreams app seems to be having some memory issues.
> > > >>> >
> > > >>> > 1. I start it `java -Xmx8G -jar .jar`
> > > >>> >
> > > >>> > 2. Wait 5-10 minutes - see lots of 'org.apache.zookeeper.
> > ClientCnxn
> > > -
> > > >>> Got
> > > >>> > ping response for sessionid: 0xc58abee3e13 after 0ms'
> messages
> > > >>> >
> > > >>> > 3. When it _finally_ starts reading values it typically goes for
> a
> > > >>> minute
> > > >>> > or so, reads a few thousand values and then the OS kills it with
> > 'Out
> > > >>> of
> > > >>> > memory' error.
> > > >>> >
> > > >>> > The topology is (essentially):
> > > >>> >
> > > >>> > stream->aggregate->filter->foreach
> > > >>> >
> > > >>> > It's reading values and creating a rolling average.
> > > >>> >
> > > >>> > During phase 2 (above) I see lots of IO wait and the 'scratch'
> > buffer
> > > >>> > (usually "/tmp/kafka-streams/appname") fills with 10s of Gb of
> .. ?
> > > (I
> > > >>> had
> > > >>> > to create a special scratch partition with 100Gb of space as
> kafka
> > > >>> would
> > > >>> > fill the / partition and make the system v v unhappy)
> > > >>> >
> > > >>> > Have I misconfigured something?
> > > >>> >
> > > >>>
> > > >>>
> > > >>
> > > >
> > >
> >
> >
> >
> > --
> > -- Guozhang
> >
>



-- 
-- Guozhang


Re: OOM errors

2016-11-29 Thread Jon Yeargers
"keep increasing" - why? It seems (to me) that the aggregates should be 20
minutes long. After that the memory should be released.

Not true?

On Tue, Nov 29, 2016 at 3:53 PM, Guozhang Wang  wrote:

> Jon,
>
> Note that in your "aggregate" function, if it is not a windowed aggregate
> then the aggregation results will keep increasing in your local state
> stores unless you're pretty sure that the aggregate key space is bounded.
> This is not only related to disk space but also memory, since the current
> default persistent state store, RocksDB, takes its own block cache both on
> heap and off heap.
>
> In addition, RocksDB has write / space amplification. That is, if the
> estimated size of the aggregated state store is 1GB, it can actually take
> up to 1GB * max-amplification-factor in the worst case, and similarly for
> the block cache and write buffer.
>
>
> Guozhang
>
> On Tue, Nov 29, 2016 at 12:25 PM, Jon Yeargers 
> wrote:
>
> > App eventually got OOM-killed. Consumed 53G of swap space.
> >
> > Does it require a different GC? Some extra settings for the java cmd
> line?
> >
> > On Tue, Nov 29, 2016 at 12:05 PM, Jon Yeargers  >
> > wrote:
> >
> > >  I cloned/built 10.2.0-SNAPSHOT
> > >
> > > App hasn't been OOM-killed yet but it's up to 66% mem.
> > >
> > > App takes > 10 min to start now. Needless to say this is problematic.
> > >
> > > The 'kafka-streams' scratch space has consumed 37G and still climbing.
> > >
> > >
> > > On Tue, Nov 29, 2016 at 10:48 AM, Jon Yeargers <
> jon.yearg...@cedexis.com
> > >
> > > wrote:
> > >
> > >> Does every broker need to be updated or just my client app(s)?
> > >>
> > >> On Tue, Nov 29, 2016 at 10:46 AM, Matthias J. Sax <
> > matth...@confluent.io>
> > >> wrote:
> > >>
> > >>> What version do you use?
> > >>>
> > >>> There is a memory leak in the latest version 0.10.1.0. The bug got
> > >>> already fixed in trunk and 0.10.1 branch.
> > >>>
> > >>> There is already a discussion about a 0.10.1.1 bug fix release. For
> > now,
> > >>> you could build the Kafka Streams from the sources by yourself.
> > >>>
> > >>> -Matthias
> > >>>
> > >>>
> > >>> On 11/29/16 10:30 AM, Jon Yeargers wrote:
> > >>> > My KStreams app seems to be having some memory issues.
> > >>> >
> > >>> > 1. I start it `java -Xmx8G -jar .jar`
> > >>> >
> > >>> > 2. Wait 5-10 minutes - see lots of 'org.apache.zookeeper.
> ClientCnxn
> > -
> > >>> Got
> > >>> > ping response for sessionid: 0xc58abee3e13 after 0ms'  messages
> > >>> >
> > >>> > 3. When it _finally_ starts reading values it typically goes for a
> > >>> minute
> > >>> > or so, reads a few thousand values and then the OS kills it with
> 'Out
> > >>> of
> > >>> > memory' error.
> > >>> >
> > >>> > The topology is (essentially):
> > >>> >
> > >>> > stream->aggregate->filter->foreach
> > >>> >
> > >>> > It's reading values and creating a rolling average.
> > >>> >
> > >>> > During phase 2 (above) I see lots of IO wait and the 'scratch'
> buffer
> > >>> > (usually "/tmp/kafka-streams/appname") fills with 10s of Gb of .. ?
> > (I
> > >>> had
> > >>> > to create a special scratch partition with 100Gb of space as kafka
> > >>> would
> > >>> > fill the / partition and make the system v v unhappy)
> > >>> >
> > >>> > Have I misconfigured something?
> > >>> >
> > >>>
> > >>>
> > >>
> > >
> >
>
>
>
> --
> -- Guozhang
>


Behavior of Mismatched Configs in Single Cluster

2016-11-29 Thread Alan Braithwaite
Hey all,

I'm curious what the behavior of mismatched configs across brokers is
supposed to be throughout the kafka cluster.  For example, if we don't have
the setting for auto re-election of leaders enabled but if we were to
enable it, what would be the behavior of the setting between the time we
restart the first kafka broker and the last?

I searched the docs but couldn't find anything concrete (sorry if it's
there somewhere, please point me in the direction).

I'm currently assuming that the behavior of the config depends on what it
actually does, but it's not clear either way.

Thanks!
- Alan


Additional tutorial for Kafka

2016-11-29 Thread Ervin Varga
Hi Guys,

I would like to share some information about my book "Creating
Maintainable APIs" (visit http://www.apress.com/9781484221952). I devoted
Part III of the book to messaging APIs, where I talk about Apache Kafka
(chapters 12 and 13) and how it nicely interplays with Apache Avro. The book
is accompanied by full source code available on GitHub (see the previous
link).

I hope you will find this information as well as the book’s content useful!

Thanks,
Ervin

Re: KStream window - end value out of bounds

2016-11-29 Thread Guozhang Wang
I agree that we should fix the "end timestamp" in windows after calling the
WindowedDeserializer; I have created
https://issues.apache.org/jira/browse/KAFKA-4468 for it.

As for Jon's observed issue that some records seem to be aggregated into
incorrect windows, we are interested in the details: which observed behavior
was unexpected?


Guozhang

On Tue, Nov 29, 2016 at 11:11 AM, Jon Yeargers 
wrote:

> Seems straightforward enough: I have a 'foreach' after my windowed
> aggregation and I see values like these come out:
>
> (window) start: 148044420  end: 148044540
>
>  (record) epoch='1480433282000'
>
>
> If I have a 20 minute window with a 1 minute 'step' I will see my record
> come out of the aggregation 20x - with different window start/end.
>
> On Tue, Nov 29, 2016 at 11:06 AM, Eno Thereska 
> wrote:
>
> > Let us know if we can help with that, what problems are you seeing with
> > records in wrong windows?
> >
> > Eno
> >
> > > On 29 Nov 2016, at 19:02, Jon Yeargers 
> wrote:
> > >
> > > I've been having problems with records appearing in windows that they
> > > clearly don't belong to. Was curious whether this was related but it
> > seems
> > > not. Bummer.
> > >
> > > On Tue, Nov 29, 2016 at 8:52 AM, Eno Thereska 
> > > wrote:
> > >
> > >> Hi Jon,
> > >>
> > >> There is an optimization in org.apache.kafka.streams.
> kstream.internals.
> > >> WindowedSerializer/Deserializer where we don't encode and decode the
> > end
> > >> of the window since the user can always calculate it. So instead we
> > return
> > >> a default of Long.MAX_VALUE, which is the big number you see.
> > >>
> > >> In other words, use window().start() but not window().end() in this
> > case.
> > >> If you want to print both, just add the window size to
> window().start().
> > >>
> > >> Thanks
> > >> Eno
> > >>> On 29 Nov 2016, at 16:17, Jon Yeargers 
> > wrote:
> > >>>
> > >>> Using the following topology:
> > >>>
> > >>> KStream kStream =
> > >>> kStreamBuilder.stream(stringSerde,transSerde,TOPIC);
> > >>>   KTable ktAgg =
> > >>> kStream.groupByKey().aggregate(
> > >>>   SumRecordCollector::new,
> > >>>   new Aggregate(),
> > >>>   TimeWindows.of(20 * 60 * 1000).advanceBy(60 * 1000),
> > >>>   cSerde, "table_stream");
> > >>>
> > >>>
> > >>> When looking at windows as follows:
> > >>>
> > >>>  ktAgg.toStream().foreach((postKey, postValue) -> {
> > >>>LOGGER.debug("start: {}   end: {}",
> > >>> postKey.window().start(), postKey.window().end());
> > >>>  });
> > >>>
> > >>> The 'start' values are coming through properly incremented but the
> > 'end'
> > >>> values are all 9223372036854775807.
> > >>>
> > >>> Is there something wrong with my topology? Some other bug that would
> > >> cause
> > >>> this?
> > >>
> > >>
> >
> >
>



-- 
-- Guozhang


Re: kafka 0.10.1 ProcessorTopologyTestDriver issue with .map and .groupByKey.count

2016-11-29 Thread Guozhang Wang
Hamid,

Could you paste your code using KStreamDriver that does not have this issue
into the JIRA as well? I suspect KStreamDriver should have the same issue,
and I am wondering why it did not.

Guozhang

On Tue, Nov 29, 2016 at 10:38 AM, Matthias J. Sax 
wrote:

> Thanks!
>
> On 11/29/16 7:18 AM, Hamidreza Afzali wrote:
> > I have created a JIRA issue:
> >
> > https://issues.apache.org/jira/browse/KAFKA-4461
> >
> >
> > Hamid
> >
>
>


-- 
-- Guozhang


Re: OOM errors

2016-11-29 Thread Guozhang Wang
Jon,

Note that in your "aggregate" function, if it is not a windowed aggregate
then the aggregation results will keep increasing in your local state
stores unless you're pretty sure that the aggregate key space is bounded.
This is not only related to disk space but also memory, since the current
default persistent state store, RocksDB, takes its own block cache both on
heap and off heap.

In addition, RocksDB has write / space amplification. That is, if the
estimated size of the aggregated state store is 1GB, it can actually take
up to 1GB * max-amplification-factor in the worst case, and similarly for
the block cache and write buffer.
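For what it's worth, the per-store RocksDB memory can be reduced with a custom config setter; a rough sketch, assuming a Streams build that exposes StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG ("rocksdb.config.setter"), with arbitrary example sizes rather than recommendations:

```
import java.util.Map;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.Options;

// Example sizes only; every store (and every segment of a windowed store)
// gets its own block cache and write buffers, so the totals multiply.
public class BoundedMemoryRocksDBConfig implements RocksDBConfigSetter {
    @Override
    public void setConfig(final String storeName, final Options options,
                          final Map<String, Object> configs) {
        final BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
        tableConfig.setBlockCacheSize(16 * 1024 * 1024L);  // off-heap block cache per store
        options.setTableFormatConfig(tableConfig);
        options.setWriteBufferSize(8 * 1024 * 1024L);      // smaller memtables
        options.setMaxWriteBufferNumber(2);                // fewer memtables kept in memory
    }
}
// Registered via: props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG,
//                           BoundedMemoryRocksDBConfig.class);
```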


Guozhang

On Tue, Nov 29, 2016 at 12:25 PM, Jon Yeargers 
wrote:

> App eventually got OOM-killed. Consumed 53G of swap space.
>
> Does it require a different GC? Some extra settings for the java cmd line?
>
> On Tue, Nov 29, 2016 at 12:05 PM, Jon Yeargers 
> wrote:
>
> >  I cloned/built 10.2.0-SNAPSHOT
> >
> > App hasn't been OOM-killed yet but it's up to 66% mem.
> >
> > App takes > 10 min to start now. Needless to say this is problematic.
> >
> > The 'kafka-streams' scratch space has consumed 37G and still climbing.
> >
> >
> > On Tue, Nov 29, 2016 at 10:48 AM, Jon Yeargers  >
> > wrote:
> >
> >> Does every broker need to be updated or just my client app(s)?
> >>
> >> On Tue, Nov 29, 2016 at 10:46 AM, Matthias J. Sax <
> matth...@confluent.io>
> >> wrote:
> >>
> >>> What version do you use?
> >>>
> >>> There is a memory leak in the latest version 0.10.1.0. The bug got
> >>> already fixed in trunk and 0.10.1 branch.
> >>>
> >>> There is already a discussion about a 0.10.1.1 bug fix release. For
> now,
> >>> you could build the Kafka Streams from the sources by yourself.
> >>>
> >>> -Matthias
> >>>
> >>>
> >>> On 11/29/16 10:30 AM, Jon Yeargers wrote:
> >>> > My KStreams app seems to be having some memory issues.
> >>> >
> >>> > 1. I start it `java -Xmx8G -jar .jar`
> >>> >
> >>> > 2. Wait 5-10 minutes - see lots of 'org.apache.zookeeper.ClientCnxn
> -
> >>> Got
> >>> > ping response for sessionid: 0xc58abee3e13 after 0ms'  messages
> >>> >
> >>> > 3. When it _finally_ starts reading values it typically goes for a
> >>> minute
> >>> > or so, reads a few thousand values and then the OS kills it with 'Out
> >>> of
> >>> > memory' error.
> >>> >
> >>> > The topology is (essentially):
> >>> >
> >>> > stream->aggregate->filter->foreach
> >>> >
> >>> > It's reading values and creating a rolling average.
> >>> >
> >>> > During phase 2 (above) I see lots of IO wait and the 'scratch' buffer
> >>> > (usually "/tmp/kafka-streams/appname") fills with 10s of Gb of .. ?
> (I
> >>> had
> >>> > to create a special scratch partition with 100Gb of space as kafka
> >>> would
> >>> > fill the / partition and make the system v v unhappy)
> >>> >
> >>> > Have I misconfigured something?
> >>> >
> >>>
> >>>
> >>
> >
>



-- 
-- Guozhang


Re: Disadvantages of Upgrading Kafka server without upgrading client libraries?

2016-11-29 Thread Ismael Juma
That's right, there should be no performance penalty if the broker is
configured to use the older message format. The downside is that the
timestamps introduced in the 0.10.0 message format won't be supported in
that case.

Ismael

On Tue, Nov 29, 2016 at 11:31 PM, Hans Jespersen  wrote:

> The performance impact of upgrading and some settings you can use to
> mitigate this impact when the majority of your clients are still 0.8.x are
> documented on the Apache Kafka website
> https://kafka.apache.org/documentation#upgrade_10_performance_impact
>
> -hans
>
> /**
>  * Hans Jespersen, Principal Systems Engineer, Confluent Inc.
>  * h...@confluent.io (650)924-2670
>  */
>
> On Tue, Nov 29, 2016 at 3:27 PM, Apurva Mehta  wrote:
>
> > I may be wrong, but since there have been message format changes between
> > 0.8.2 and 0.10.1, there will be a performance penalty if the clients are
> > not also upgraded. This is because you lose the zero-copy semantics on
> the
> > server side as the messages have to be converted to the old format before
> > being sent out on the wire to the old clients.
> >
> > On Tue, Nov 29, 2016 at 10:06 AM, Thomas Becker 
> wrote:
> >
> > > The only obvious downside I'm aware of is not being able to benefit
> > > from the bugfixes in the client. We are essentially doing the same
> > > thing; we upgraded the broker side to 0.10.0.0 but have yet to upgrade
> > > our clients from 0.8.1.x.
> > >
> > > On Tue, 2016-11-29 at 09:30 -0500, Tim Visher wrote:
> > > > Hi Everyone,
> > > >
> > > > I have an install of Kafka 0.8.2.1 which I'm upgrading to 0.10.1.0. I
> > > > see
> > > > that Kafka 0.10.1.0 should be backwards compatible with client
> > > > libraries
> > > > written for older versions but that newer client libraries are only
> > > > compatible with their version and up.
> > > >
> > > > My question is what disadvantages would there be to never upgrading
> > > > the
> > > > clients? I'm mainly asking because it would be advantageous to save
> > > > some
> > > > time here with a little technical debt if the costs weren't too high.
> > > > If
> > > > there are major issues then I can take on the client upgrade as well.
> > > >
> > > > Thanks in advance!
> > > >
> > > > --
> > > >
> > > > In Christ,
> > > >
> > > > Timmy V.
> > > >
> > > > http://blog.twonegatives.com/
> > > > http://five.sentenc.es/ -- Spend less time on mail
> > > --
> > >
> > >
> > > Tommy Becker
> > >
> > > Senior Software Engineer
> > >
> > > O +1 919.460.4747
> > >
> > > tivo.com
> > >
> > >
> > > 
> > >
> > > This email and any attachments may contain confidential and privileged
> > > material for the sole use of the intended recipient. Any review,
> copying,
> > > or distribution of this email (or any attachments) by others is
> > prohibited.
> > > If you are not the intended recipient, please contact the sender
> > > immediately and permanently delete this email and any attachments. No
> > > employee or agent of TiVo Inc. is authorized to conclude any binding
> > > agreement on behalf of TiVo Inc. by email. Binding agreements with TiVo
> > > Inc. may only be made by a signed written agreement.
> > >
> >
>


Re: Kafka Streaming

2016-11-29 Thread Guozhang Wang
Hello Mohit,

I'm copy-pasting Mathieu's previous email on making Streams work on
Windows; note it was for 0.10.0.1, but I think the process should be very
similar.



To any who follow in my footsteps, here is my trail:

   1. Upgrade to at least Kafka Streams 0.10.0.1 (currently only in RC).
      - This is necessary because .1 bumps the rocksdb dependency to 4.8.0,
        where the previous 4.4 dependency did not yet support loading a
        Windows JNI library.
   2. Follow the instructions here
      (https://github.com/facebook/rocksdb/blob/v4.8/CMakeLists.txt) to build
      rocksdb.
      - That link is for the v4.8 tag; be sure to match this with the version
        of rocksdb that Kafka Streams depends upon in the future. 0.10.0.1 ->
        v4.8, but future releases of Kafka Streams will likely depend on newer
        versions.
      - Be sure to include the "-DJNI=1" compile option to build the JNI
        wrapper.
      - None of the third-party dependencies (e.g. snappy, jemalloc, etc.)
        seem to be required to get something functional, but it probably
        isn't the most efficient build.
      - Ensure that "javac" and "javah" are on your path when running cmake;
        if there are any errors in the cmake output your JNI wrapper probably
        won't build.
      - You may or may not need to make minor patches to make the project
        build on Windows. It appears that the Windows build is often broken;
        for example, I had to apply this patch:
        https://github.com/facebook/rocksdb/pull/1223/files
      - Phew, that should give you a build\java\Debug\rocksdbjni.dll. So
        close to the summit... just a horrible hack left...
   3. Copy rocksdbjni.dll to librocksdbjni-win64.dll.
   4. Insert librocksdbjni-win64.dll into your rocksdbjni-4.8.0.jar.
      - Ugh, this is the horrible hack. rocksdbjni seems to only look inside
        its own jar for its native libraries, so this is where it needs to be.
      - If you're a gradle user on Windows, you'd find this jar file in
        C:\Users\...your-windows-user...\.gradle\caches\modules-2\files-2.1\org.rocksdb\rocksdbjni\4.8.0\b543fc4ea5b52ad790730dee376ba0df06d9f5f7.

And there you go, almost a working Kafka Streams app in Windows.  One other
detail is that the default state storage directory doesn't seem to be
created on demand, so I had to mkdir C:\tmp\kafka-streams myself before my
app would work.
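On that last detail, the state directory can also be overridden in the application configuration instead of pre-creating the default path by hand; a minimal sketch, where the application id, bootstrap servers, and path are placeholders:

```
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-windows-app");     // placeholder id
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder broker
// Override the default state directory (/tmp/kafka-streams, i.e.
// C:\tmp\kafka-streams on Windows) with a directory that exists or that the
// application is allowed to create.
props.put(StreamsConfig.STATE_DIR_CONFIG, "C:\\kafka-streams-state");
```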



Currently we have not spent much time running and maintaining Kafka
Streams on Windows yet, but we'd love to hear about your experience, and if
any issues come up we would like to help debug and fix them.


Guozhang



On Mon, Nov 28, 2016 at 5:28 PM, Mohit Anchlia 
wrote:

> I just cloned 3.1x and tried to run a test. I am still seeing rocksdb
> error:
>
> Exception in thread "StreamThread-1" java.lang.UnsatisfiedLinkError:
> C:\Users\manchlia\AppData\Local\Temp\librocksdbjni108789031344273.dll:
> Can't find dependent libraries
>
>
> On Mon, Oct 24, 2016 at 11:26 AM, Matthias J. Sax 
> wrote:
>
> > -BEGIN PGP SIGNED MESSAGE-
> > Hash: SHA512
> >
> > It's a client issues... But CP 3.1 should be our in about 2 weeks...
> > Of course, you can use Kafka 0.10.1.0 for now. It was released last
> > week and does contain the fix.
> >
> > - -Matthias
> >
> > On 10/24/16 9:19 AM, Mohit Anchlia wrote:
> > > Would this be an issue if I connect to a remote Kafka instance
> > > running on the Linux box? Or is this a client issue. What's rockdb
> > > used for to keep state?
> > >
> > > On Mon, Oct 24, 2016 at 12:08 AM, Matthias J. Sax
> > >  wrote:
> > >
> > > Kafka 0.10.1.0 which was release last week does contain the fix
> > > already. The fix will be in CP 3.1 coming up soon!
> > >
> > > (sorry that I did mix up versions in a previous email)
> > >
> > > -Matthias
> > >
> > > On 10/23/16 12:10 PM, Mohit Anchlia wrote:
> >  So if I get it right I will not have this fix until 4
> >  months? Should I just create my own example with the next
> >  version of Kafka?
> > 
> >  On Sat, Oct 22, 2016 at 9:04 PM, Matthias J. Sax
> >   wrote:
> > 
> >  Current version is 3.0.1 CP 3.1 should be release the next
> >  weeks
> > 
> >  So CP 3.2 should be there is about 4 month (Kafka follows a
> >  time base release cycle of 4 month and CP usually aligns with
> >  Kafka releases)
> > 
> >  -Matthias
> > 
> > 
> >  On 10/20/16 5:10 PM, Mohit Anchlia wrote:
> > >>> Any idea of when 3.2 is coming?
> > >>>
> > >>> On Thu, Oct 20, 2016 at 4:53 PM, Matthias J. Sax
> > >>>  wrote:
> > >>>
> > >>> No problem. Asking questions is the purpose of mailing
> > >>> lists. :)
> > >>>
> > >>> The issue will be fixed in next version of examples
> > >>> branch.
> > >>>
> > >>> Examples branch is build 

Re: Disadvantages of Upgrading Kafka server without upgrading client libraries?

2016-11-29 Thread Hans Jespersen
The performance impact of upgrading and some settings you can use to
mitigate this impact when the majority of your clients are still 0.8.x are
documented on the Apache Kafka website
https://kafka.apache.org/documentation#upgrade_10_performance_impact

-hans

/**
 * Hans Jespersen, Principal Systems Engineer, Confluent Inc.
 * h...@confluent.io (650)924-2670
 */

On Tue, Nov 29, 2016 at 3:27 PM, Apurva Mehta  wrote:

> I may be wrong, but since there have been message format changes between
> 0.8.2 and 0.10.1, there will be a performance penalty if the clients are
> not also upgraded. This is because you lose the zero-copy semantics on the
> server side as the messages have to be converted to the old format before
> being sent out on the wire to the old clients.
>
> On Tue, Nov 29, 2016 at 10:06 AM, Thomas Becker  wrote:
>
> > The only obvious downside I'm aware of is not being able to benefit
> > from the bugfixes in the client. We are essentially doing the same
> > thing; we upgraded the broker side to 0.10.0.0 but have yet to upgrade
> > our clients from 0.8.1.x.
> >
> > On Tue, 2016-11-29 at 09:30 -0500, Tim Visher wrote:
> > > Hi Everyone,
> > >
> > > I have an install of Kafka 0.8.2.1 which I'm upgrading to 0.10.1.0. I
> > > see
> > > that Kafka 0.10.1.0 should be backwards compatible with client
> > > libraries
> > > written for older versions but that newer client libraries are only
> > > compatible with their version and up.
> > >
> > > My question is what disadvantages would there be to never upgrading
> > > the
> > > clients? I'm mainly asking because it would be advantageous to save
> > > some
> > > time here with a little technical debt if the costs weren't too high.
> > > If
> > > there are major issues then I can take on the client upgrade as well.
> > >
> > > Thanks in advance!
> > >
> > > --
> > >
> > > In Christ,
> > >
> > > Timmy V.
> > >
> > > http://blog.twonegatives.com/
> > > http://five.sentenc.es/ -- Spend less time on mail
> > --
> >
> >
> > Tommy Becker
> >
> > Senior Software Engineer
> >
> > O +1 919.460.4747
> >
> > tivo.com
> >
> >
> > 
> >
> > This email and any attachments may contain confidential and privileged
> > material for the sole use of the intended recipient. Any review, copying,
> > or distribution of this email (or any attachments) by others is
> prohibited.
> > If you are not the intended recipient, please contact the sender
> > immediately and permanently delete this email and any attachments. No
> > employee or agent of TiVo Inc. is authorized to conclude any binding
> > agreement on behalf of TiVo Inc. by email. Binding agreements with TiVo
> > Inc. may only be made by a signed written agreement.
> >
>


Re: Disadvantages of Upgrading Kafka server without upgrading client libraries?

2016-11-29 Thread Apurva Mehta
I may be wrong, but since there have been message format changes between
0.8.2 and 0.10.1, there will be a performance penalty if the clients are
not also upgraded. This is because you lose the zero-copy semantics on the
server side as the messages have to be converted to the old format before
being sent out on the wire to the old clients.

On Tue, Nov 29, 2016 at 10:06 AM, Thomas Becker  wrote:

> The only obvious downside I'm aware of is not being able to benefit
> from the bugfixes in the client. We are essentially doing the same
> thing; we upgraded the broker side to 0.10.0.0 but have yet to upgrade
> our clients from 0.8.1.x.
>
> On Tue, 2016-11-29 at 09:30 -0500, Tim Visher wrote:
> > Hi Everyone,
> >
> > I have an install of Kafka 0.8.2.1 which I'm upgrading to 0.10.1.0. I
> > see
> > that Kafka 0.10.1.0 should be backwards compatible with client
> > libraries
> > written for older versions but that newer client libraries are only
> > compatible with their version and up.
> >
> > My question is what disadvantages would there be to never upgrading
> > the
> > clients? I'm mainly asking because it would be advantageous to save
> > some
> > time here with a little technical debt if the costs weren't too high.
> > If
> > there are major issues then I can take on the client upgrade as well.
> >
> > Thanks in advance!
> >
> > --
> >
> > In Christ,
> >
> > Timmy V.
> >
> > http://blog.twonegatives.com/
> > http://five.sentenc.es/ -- Spend less time on mail
> --
>
>
> Tommy Becker
>
> Senior Software Engineer
>
> O +1 919.460.4747
>
> tivo.com
>
>
> 
>
> This email and any attachments may contain confidential and privileged
> material for the sole use of the intended recipient. Any review, copying,
> or distribution of this email (or any attachments) by others is prohibited.
> If you are not the intended recipient, please contact the sender
> immediately and permanently delete this email and any attachments. No
> employee or agent of TiVo Inc. is authorized to conclude any binding
> agreement on behalf of TiVo Inc. by email. Binding agreements with TiVo
> Inc. may only be made by a signed written agreement.
>


Re: log.dirs balance?

2016-11-29 Thread Karolis Pocius
It's difficult enough to balance kafka brokers with a single log 
directory, not to mention attempting to juggle multiple ones. While JBOD 
is great in terms of capacity, it's a pain in terms of management. After 
6 months of constant manual reassignments I ended up going with RAID1+0,
which is what LinkedIn uses and what Confluent recommends.


Hats off to you if you manage to find a solution to this, just wanted to 
share my painful experience.



On 2016.11.29 21:35, Tim Visher wrote:

Hello,

My kafka deploy has 5 servers with 3 log disks each. Over the weekend I
noticed that on 2 of the 5 servers the partitions appear to be imbalanced
amongst the log.dirs.

```
kafka3
/var/lib/kafka/disk1
3
/var/lib/kafka/disk2
3
/var/lib/kafka/disk3
3
kafka5
/var/lib/kafka/disk1
3
/var/lib/kafka/disk2
4
/var/lib/kafka/disk3
2
kafka1
/var/lib/kafka/disk1
3
/var/lib/kafka/disk2
3
/var/lib/kafka/disk3
3
kafka4
/var/lib/kafka/disk1
4
/var/lib/kafka/disk2
2
/var/lib/kafka/disk3
3
kafka2
/var/lib/kafka/disk1
3
/var/lib/kafka/disk2
3
/var/lib/kafka/disk3
3
```

You can see that 5 and 4 are both unbalanced.

Is there a reason for that? The partitions themselves are pretty much
perfectly balanced, but the directory chosen for them is not.

Is this an anti-pattern to be using multiple log.dirs per server?

Thanks in advance!

--

In Christ,

Timmy V.

http://blog.twonegatives.com/
http://five.sentenc.es/ -- Spend less time on mail




Best Regards

Karolis Pocius
IT System Engineer

Email: karolis.poc...@adform.com
Mobile: +370 620 22108
Sporto g. 18, LT-09238 Vilnius, Lithuania

Disclaimer: 
The information contained in this message and attachments is intended 
solely for the attention and use of the named addressee and may be 
confidential. If you are not the intended recipient, you are reminded that 
the information remains the property of the sender. You must not use, 
disclose, distribute, copy, print or rely on this e-mail. If you have 
received this message in error, please contact the sender immediately and 
irrevocably delete this message and any copies.

Re: OOM errors

2016-11-29 Thread Jon Yeargers
App eventually got OOM-killed. Consumed 53G of swap space.

Does it require a different GC? Some extra settings for the java cmd line?

On Tue, Nov 29, 2016 at 12:05 PM, Jon Yeargers 
wrote:

>  I cloned/built 10.2.0-SNAPSHOT
>
> App hasn't been OOM-killed yet but it's up to 66% mem.
>
> App takes > 10 min to start now. Needless to say this is problematic.
>
> The 'kafka-streams' scratch space has consumed 37G and still climbing.
>
>
> On Tue, Nov 29, 2016 at 10:48 AM, Jon Yeargers 
> wrote:
>
>> Does every broker need to be updated or just my client app(s)?
>>
>> On Tue, Nov 29, 2016 at 10:46 AM, Matthias J. Sax 
>> wrote:
>>
>>> What version do you use?
>>>
>>> There is a memory leak in the latest version 0.10.1.0. The bug got
>>> already fixed in trunk and 0.10.1 branch.
>>>
>>> There is already a discussion about a 0.10.1.1 bug fix release. For now,
>>> you could build the Kafka Streams from the sources by yourself.
>>>
>>> -Matthias
>>>
>>>
>>> On 11/29/16 10:30 AM, Jon Yeargers wrote:
>>> > My KStreams app seems to be having some memory issues.
>>> >
>>> > 1. I start it `java -Xmx8G -jar .jar`
>>> >
>>> > 2. Wait 5-10 minutes - see lots of 'org.apache.zookeeper.ClientCnxn -
>>> Got
>>> > ping response for sessionid: 0xc58abee3e13 after 0ms'  messages
>>> >
>>> > 3. When it _finally_ starts reading values it typically goes for a
>>> minute
>>> > or so, reads a few thousand values and then the OS kills it with 'Out
>>> of
>>> > memory' error.
>>> >
>>> > The topology is (essentially):
>>> >
>>> > stream->aggregate->filter->foreach
>>> >
>>> > It's reading values and creating a rolling average.
>>> >
>>> > During phase 2 (above) I see lots of IO wait and the 'scratch' buffer
>>> > (usually "/tmp/kafka-streams/appname") fills with 10s of Gb of .. ? (I
>>> had
>>> > to create a special scratch partition with 100Gb of space as kafka
>>> would
>>> > fill the / partition and make the system v v unhappy)
>>> >
>>> > Have I misconfigured something?
>>> >
>>>
>>>
>>
>


Re: OOM errors

2016-11-29 Thread Jon Yeargers
 I cloned/built 10.2.0-SNAPSHOT

App hasn't been OOM-killed yet but it's up to 66% mem.

App takes > 10 min to start now. Needless to say this is problematic.

The 'kafka-streams' scratch space has consumed 37G and still climbing.


On Tue, Nov 29, 2016 at 10:48 AM, Jon Yeargers 
wrote:

> Does every broker need to be updated or just my client app(s)?
>
> On Tue, Nov 29, 2016 at 10:46 AM, Matthias J. Sax 
> wrote:
>
>> What version do you use?
>>
>> There is a memory leak in the latest version 0.10.1.0. The bug got
>> already fixed in trunk and 0.10.1 branch.
>>
>> There is already a discussion about a 0.10.1.1 bug fix release. For now,
>> you could build the Kafka Streams from the sources by yourself.
>>
>> -Matthias
>>
>>
>> On 11/29/16 10:30 AM, Jon Yeargers wrote:
>> > My KStreams app seems to be having some memory issues.
>> >
>> > 1. I start it `java -Xmx8G -jar .jar`
>> >
>> > 2. Wait 5-10 minutes - see lots of 'org.apache.zookeeper.ClientCnxn -
>> Got
>> > ping response for sessionid: 0xc58abee3e13 after 0ms'  messages
>> >
>> > 3. When it _finally_ starts reading values it typically goes for a
>> minute
>> > or so, reads a few thousand values and then the OS kills it with 'Out of
>> > memory' error.
>> >
>> > The topology is (essentially):
>> >
>> > stream->aggregate->filter->foreach
>> >
>> > It's reading values and creating a rolling average.
>> >
>> > During phase 2 (above) I see lots of IO wait and the 'scratch' buffer
>> > (usually "/tmp/kafka-streams/appname") fills with 10s of Gb of .. ? (I
>> had
>> > to create a special scratch partition with 100Gb of space as kafka would
>> > fill the / partition and make the system v v unhappy)
>> >
>> > Have I misconfigured something?
>> >
>>
>>
>


Re: Two possible issues with 2.11 0.10.1.0

2016-11-29 Thread Matthias J. Sax
Your issue seems to be similar to this one:
https://groups.google.com/forum/#!msg/confluent-platform/i5cwYhpUtx4/i79mHiD6CgAJ

Can you confirm? Maybe you can try out the patch...


-Matthias

On 11/28/16 6:42 PM, Jon Yeargers wrote:
> Ran into these two internal exceptions today.  Likely they are known or my
> fault but I'm reporting them anyway:
> 
> Exception in thread "StreamThread-1" java.lang.NullPointerException
> 
> at
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:304)
> 
> at
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:218)
> 
> 
> 
> and this one:
> 
> 
> Exception in thread "StreamThread-1"
> org.apache.kafka.streams.errors.StreamsException: stream-thread
> [StreamThread-1] Failed to rebalance
> 
> at
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:410)
> 
> at
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:242)
> 
> Caused by: org.apache.kafka.streams.errors.ProcessorStateException: Error
> opening store table_stream-20161129 at location
> /tmp/kafka-streams/BreakoutProcessor/0_0/table_stream/table_stream-20161129
> 
> at
> org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:196)
> 
> at
> org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:158)
> 
> at
> org.apache.kafka.streams.state.internals.RocksDBWindowStore$Segment.openDB(RocksDBWindowStore.java:72)
> 
> at
> org.apache.kafka.streams.state.internals.RocksDBWindowStore.getOrCreateSegment(RocksDBWindowStore.java:402)
> 
> at
> org.apache.kafka.streams.state.internals.RocksDBWindowStore.putInternal(RocksDBWindowStore.java:333)
> 
> at
> org.apache.kafka.streams.state.internals.RocksDBWindowStore.access$100(RocksDBWindowStore.java:51)
> 
> at
> org.apache.kafka.streams.state.internals.RocksDBWindowStore$2.restore(RocksDBWindowStore.java:212)
> 
> at
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.restoreActiveState(ProcessorStateManager.java:235)
> 
> at
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.register(ProcessorStateManager.java:198)
> 
> at
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.register(ProcessorContextImpl.java:123)
> 
> at
> org.apache.kafka.streams.state.internals.RocksDBWindowStore.init(RocksDBWindowStore.java:206)
> 
> at
> org.apache.kafka.streams.state.internals.MeteredWindowStore.init(MeteredWindowStore.java:66)
> 
> at
> org.apache.kafka.streams.state.internals.CachingWindowStore.init(CachingWindowStore.java:64)
> 
> at
> org.apache.kafka.streams.processor.internals.AbstractTask.initializeStateStores(AbstractTask.java:81)
> 
> at
> org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:120)
> 
> at
> org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:633)
> 
> at
> org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:660)
> 
> at
> org.apache.kafka.streams.processor.internals.StreamThread.access$100(StreamThread.java:69)
> 
> at
> org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:124)
> 
> at
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:228)
> 
> at
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:313)
> 
> at
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:277)
> 
> at
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:259)
> 
> at
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1013)
> 
> at
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:979)
> 
> at
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:407)
> 
> ... 1 more
> 
> Caused by: org.rocksdb.RocksDBException: IO error: lock
> /tmp/kafka-streams/BreakoutProcessor/0_0/table_stream/table_stream-20161129/LOCK:
> No locks available
> 
> at org.rocksdb.RocksDB.open(Native Method)
> 
> at org.rocksdb.RocksDB.open(RocksDB.java:184)
> 
> at
> org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:189)
> 
> ... 26 more
> 





log.dirs balance?

2016-11-29 Thread Tim Visher
Hello,

My kafka deploy has 5 servers with 3 log disks each. Over the weekend I
noticed that on 2 of the 5 servers the partitions appear to be imbalanced
amongst the log.dirs.

```
kafka3
/var/lib/kafka/disk1
3
/var/lib/kafka/disk2
3
/var/lib/kafka/disk3
3
kafka5
/var/lib/kafka/disk1
3
/var/lib/kafka/disk2
4
/var/lib/kafka/disk3
2
kafka1
/var/lib/kafka/disk1
3
/var/lib/kafka/disk2
3
/var/lib/kafka/disk3
3
kafka4
/var/lib/kafka/disk1
4
/var/lib/kafka/disk2
2
/var/lib/kafka/disk3
3
kafka2
/var/lib/kafka/disk1
3
/var/lib/kafka/disk2
3
/var/lib/kafka/disk3
3
```

You can see that 5 and 4 are both unbalanced.

Is there a reason for that? The partitions themselves are pretty much
perfectly balanced, but the directory chosen for them is not.

Is this an anti-pattern to be using multiple log.dirs per server?

Thanks in advance!

--

In Christ,

Timmy V.

http://blog.twonegatives.com/
http://five.sentenc.es/ -- Spend less time on mail


Re: KStream window - end value out of bounds

2016-11-29 Thread Jon Yeargers
Seems straightforward enough: I have a 'foreach' after my windowed
aggregation and I see values like these come out:

(window) start: 148044420  end: 148044540

 (record) epoch='1480433282000'


If I have a 20 minute window with a 1 minute 'step' I will see my record
come out of the aggregation 20x - with different window start/end.
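For reference, that multiplicity is what hopping windows produce by design: each record falls into windowSize / advance overlapping windows, as a quick check with the values above shows:

```
final long windowSizeMs = 20 * 60 * 1000L;  // 20-minute windows
final long advanceMs = 60 * 1000L;          // 1-minute hop
final long windowsPerRecord = windowSizeMs / advanceMs;  // = 20 windows per record
```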

On Tue, Nov 29, 2016 at 11:06 AM, Eno Thereska 
wrote:

> Let us know if we can help with that, what problems are you seeing with
> records in wrong windows?
>
> Eno
>
> > On 29 Nov 2016, at 19:02, Jon Yeargers  wrote:
> >
> > I've been having problems with records appearing in windows that they
> > clearly don't belong to. Was curious whether this was related but it
> seems
> > not. Bummer.
> >
> > On Tue, Nov 29, 2016 at 8:52 AM, Eno Thereska 
> > wrote:
> >
> >> Hi Jon,
> >>
> >> There is an optimization in org.apache.kafka.streams.kstream.internals.
> >> WindowedSerializer/Deserializer where we don't encode and decode the
> end
> >> of the window since the user can always calculate it. So instead we
> return
> >> a default of Long.MAX_VALUE, which is the big number you see.
> >>
> >> In other words, use window().start() but not window().end() in this
> case.
> >> If you want to print both, just add the window size to window().start().
> >>
> >> Thanks
> >> Eno
> >>> On 29 Nov 2016, at 16:17, Jon Yeargers 
> wrote:
> >>>
> >>> Using the following topology:
> >>>
> >>> KStream kStream =
> >>> kStreamBuilder.stream(stringSerde,transSerde,TOPIC);
> >>>   KTable ktAgg =
> >>> kStream.groupByKey().aggregate(
> >>>   SumRecordCollector::new,
> >>>   new Aggregate(),
> >>>   TimeWindows.of(20 * 60 * 1000).advanceBy(60 * 1000),
> >>>   cSerde, "table_stream");
> >>>
> >>>
> >>> When looking at windows as follows:
> >>>
> >>>  ktAgg.toStream().foreach((postKey, postValue) -> {
> >>>LOGGER.debug("start: {}   end: {}",
> >>> postKey.window().start(), postKey.window().end());
> >>>  });
> >>>
> >>> The 'start' values are coming through properly incremented but the
> 'end'
> >>> values are all 9223372036854775807.
> >>>
> >>> Is there something wrong with my topology? Some other bug that would
> >> cause
> >>> this?
> >>
> >>
>
>


Re: KStream window - end value out of bounds

2016-11-29 Thread Eno Thereska
Let us know if we can help with that, what problems are you seeing with records 
in wrong windows?

Eno

> On 29 Nov 2016, at 19:02, Jon Yeargers  wrote:
> 
> I've been having problems with records appearing in windows that they
> clearly don't belong to. Was curious whether this was related but it seems
> not. Bummer.
> 
> On Tue, Nov 29, 2016 at 8:52 AM, Eno Thereska 
> wrote:
> 
>> Hi Jon,
>> 
>> There is an optimization in org.apache.kafka.streams.kstream.internals.
>> WindowedSerializer/Deserializer where we don't encode and decode the end
>> of the window since the user can always calculate it. So instead we return
>> a default of Long.MAX_VALUE, which is the big number you see.
>> 
>> In other words, use window().start() but not window().end() in this case.
>> If you want to print both, just add the window size to window().start().
>> 
>> Thanks
>> Eno
>>> On 29 Nov 2016, at 16:17, Jon Yeargers  wrote:
>>> 
>>> Using the following topology:
>>> 
>>> KStream kStream =
>>> kStreamBuilder.stream(stringSerde,transSerde,TOPIC);
>>>   KTable ktAgg =
>>> kStream.groupByKey().aggregate(
>>>   SumRecordCollector::new,
>>>   new Aggregate(),
>>>   TimeWindows.of(20 * 60 * 1000).advanceBy(60 * 1000),
>>>   cSerde, "table_stream");
>>> 
>>> 
>>> When looking at windows as follows:
>>> 
>>>  ktAgg.toStream().foreach((postKey, postValue) -> {
>>>LOGGER.debug("start: {}   end: {}",
>>> postKey.window().start(), postKey.window().end());
>>>  });
>>> 
>>> The 'start' values are coming through properly incremented but the 'end'
>>> values are all 9223372036854775807.
>>> 
>>> Is there something wrong with my topology? Some other bug that would
>> cause
>>> this?
>> 
>> 



Re: KStream window - end value out of bounds

2016-11-29 Thread Jon Yeargers
I've been having problems with records appearing in windows that they
clearly don't belong to. Was curious whether this was related but it seems
not. Bummer.

On Tue, Nov 29, 2016 at 8:52 AM, Eno Thereska 
wrote:

> Hi Jon,
>
> There is an optimization in org.apache.kafka.streams.kstream.internals.
> WindowedSerializer/Deserializer where we don't encode and decode the end
> of the window since the user can always calculate it. So instead we return
> a default of Long.MAX_VALUE, which is the big number you see.
>
> In other words, use window().start() but not window().end() in this case.
> If you want to print both, just add the window size to window().start().
>
> Thanks
> Eno
> > On 29 Nov 2016, at 16:17, Jon Yeargers  wrote:
> >
> > Using the following topology:
> >
> > KStream kStream =
> > kStreamBuilder.stream(stringSerde,transSerde,TOPIC);
> >KTable ktAgg =
> > kStream.groupByKey().aggregate(
> >SumRecordCollector::new,
> >new Aggregate(),
> >TimeWindows.of(20 * 60 * 1000).advanceBy(60 * 1000),
> >cSerde, "table_stream");
> >
> >
> > When looking at windows as follows:
> >
> >   ktAgg.toStream().foreach((postKey, postValue) -> {
> > LOGGER.debug("start: {}   end: {}",
> > postKey.window().start(), postKey.window().end());
> >   });
> >
> > The 'start' values are coming through properly incremented but the 'end'
> > values are all 9223372036854775807.
> >
> > Is there something wrong with my topology? Some other bug that would
> cause
> > this?
>
>


Re: OOM errors

2016-11-29 Thread Jon Yeargers
Does every broker need to be updated or just my client app(s)?

On Tue, Nov 29, 2016 at 10:46 AM, Matthias J. Sax 
wrote:

> What version do you use?
>
> There is a memory leak in the latest version 0.10.1.0. The bug got
> already fixed in trunk and 0.10.1 branch.
>
> There is already a discussion about a 0.10.1.1 bug fix release. For now,
> you could build the Kafka Streams from the sources by yourself.
>
> -Matthias
>
>
> On 11/29/16 10:30 AM, Jon Yeargers wrote:
> > My KStreams app seems to be having some memory issues.
> >
> > 1. I start it `java -Xmx8G -jar .jar`
> >
> > 2. Wait 5-10 minutes - see lots of 'org.apache.zookeeper.ClientCnxn -
> Got
> > ping response for sessionid: 0xc58abee3e13 after 0ms'  messages
> >
> > 3. When it _finally_ starts reading values it typically goes for a minute
> > or so, reads a few thousand values and then the OS kills it with 'Out of
> > memory' error.
> >
> > The topology is (essentially):
> >
> > stream->aggregate->filter->foreach
> >
> > It's reading values and creating a rolling average.
> >
> > During phase 2 (above) I see lots of IO wait and the 'scratch' buffer
> > (usually "/tmp/kafka-streams/appname") fills with 10s of Gb of .. ? (I
> had
> > to create a special scratch partition with 100Gb of space as kafka would
> > fill the / partition and make the system v v unhappy)
> >
> > Have I misconfigured something?
> >
>
>


Re: OOM errors

2016-11-29 Thread Matthias J. Sax
What version do you use?

There is a memory leak in the latest version 0.10.1.0. The bug got
already fixed in trunk and 0.10.1 branch.

There is already a discussion about a 0.10.1.1 bug fix release. For now,
you could build the Kafka Streams from the sources by yourself.

-Matthias


On 11/29/16 10:30 AM, Jon Yeargers wrote:
> My KStreams app seems to be having some memory issues.
> 
> 1. I start it `java -Xmx8G -jar .jar`
> 
> 2. Wait 5-10 minutes - see lots of 'org.apache.zookeeper.ClientCnxn - Got
> ping response for sessionid: 0xc58abee3e13 after 0ms'  messages
> 
> 3. When it _finally_ starts reading values it typically goes for a minute
> or so, reads a few thousand values and then the OS kills it with 'Out of
> memory' error.
> 
> The topology is (essentially):
> 
> stream->aggregate->filter->foreach
> 
> It's reading values and creating a rolling average.
> 
> During phase 2 (above) I see lots of IO wait and the 'scratch' buffer
> (usually "/tmp/kafka-streams/appname") fills with 10s of Gb of .. ? (I had
> to create a special scratch partition with 100Gb of space as kafka would
> fill the / partition and make the system v v unhappy)
> 
> Have I misconfigured something?
> 





Re: kafka 0.10.1 ProcessorTopologyTestDriver issue with .map and .groupByKey.count

2016-11-29 Thread Matthias J. Sax
Thanks!

On 11/29/16 7:18 AM, Hamidreza Afzali wrote:
> I have created a JIRA issue:
> 
> https://issues.apache.org/jira/browse/KAFKA-4461
> 
> 
> Hamid
> 





Re: Question about state store

2016-11-29 Thread Matthias J. Sax
A KStream-KTable join might also work.

> stream.join(table, ...);
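A minimal sketch of that approach for the use case quoted below, assuming both topics are keyed by the item id and reusing the SerdeItem serde from Simon's snippet; the Item value type and the saveAndNotify call are placeholders, and it uses leftJoin plus a filter so it also works on Streams versions that only provide the stream-table leftJoin:

```
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;
import org.apache.kafka.streams.kstream.KTable;

KStreamBuilder builder = new KStreamBuilder();

// Admin-approved items, keyed by item id (same table/store as in Simon's snippet).
KTable<String, Item> allowed =
        builder.table(Serdes.String(), new SerdeItem(), "test.admin.item", "storeAdminItem");

// All item updates from the DB, also keyed by item id.
KStream<String, Item> updates =
        builder.stream(Serdes.String(), new SerdeItem(), "test.item");

// Updates whose key is not present in the admin table get a null join result
// and are filtered out, so only allowed items reach the save/notify step.
updates.leftJoin(allowed, (update, adminEntry) -> adminEntry == null ? null : update)
       .filter((id, item) -> item != null)
       .foreach((id, item) -> saveAndNotify(item));   // placeholder persistence hook
```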


-Matthias

On 11/29/16 1:21 AM, Eno Thereska wrote:
> Hi Simon,
> 
> See if this helps:
> 
> - you can create the KTable with the state store "storeAdminItem" as you 
> mentioned
> - for each element you want to check, you want to query that state store 
> directly to check if that element is in the state store. You can do the 
> querying via the Interactive Query APIs (they basically get a handle on the 
> state store and issue read-only requests to it). 
> - There is a blog with a detailed example and code in the end here: 
> https://www.confluent.io/blog/unifying-stream-processing-and-interactive-queries-in-apache-kafka/
>  
> 
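A small sketch of that lookup, assuming a 0.10.1+ application where builder and props are already defined and where Item stands in for the real value type:

```
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

KafkaStreams streams = new KafkaStreams(builder, props);  // builder/props defined elsewhere
streams.start();

// Read-only view of the "storeAdminItem" store that backs the admin KTable.
// (store() throws InvalidStateStoreException until the instance is up and running.)
ReadOnlyKeyValueStore<String, Item> adminItems =
        streams.store("storeAdminItem", QueryableStoreTypes.<String, Item>keyValueStore());

boolean allowed = adminItems.get("1234") != null;  // is item id "1234" currently allowed?
```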
> 
> Thanks
> Eno
> 
>> On 29 Nov 2016, at 05:51, Simon Teles  wrote:
>>
>> Hello all,
>>
>> I'm not sure how to do the following use-case, if anyone can help :)
>>
>> - I have an admin UI where the admin can choose which items are allowed in
>> the application.
>>
>> - When the admin chooses an item, it pushes an object to a Kafka topic
>> (test.admin.item) with the following shape: {"id":"1234", "name":"toto"}
>>
>> - I have a Kafka Streams app that is connected to a topic test.item, which
>> receives all the item updates from our DB.
>>
>> - When the stream receives an item, it needs to check whether it's allowed
>> by the admin. If yes, it saves it to another DB and, if the save succeeds,
>> pushes a notification to another topic. If not, it does nothing.
>>
>> My idea is: when I start the stream, I "push" all the contents of the topic
>> test.admin.item into a state store, and when I receive a new item on
>> test.item, I check its id against the state store.
>>
>> Is this the proper way?
>>
>> My problem is:
>>
>> -> If I use the TopologyBuilder, I don't know how I can load the topic into
>> a state store at startup so that I can later use it in a Processor.
>>
>> -> With the KStreamBuilder I can use:
>>
>> KStreamBuilder builder = new KStreamBuilder();
>>
>> // We create a KTable from topic test.admin.item and load it into the store
>> // "storeAdminItem"
>> builder.table(Serdes.String(), new SerdeItem(), "test.admin.item",
>> "storeAdminItem");
>>
>> -> But with the KStreamBuilder, I don't see how I can access the state
>> store when I map/filter/etc.?
>>
>> If you can help me figure it out, it would be much appreciated.
>>
>> Thanks,
>>
>> Regards,
>>
> 
> 





OOM errors

2016-11-29 Thread Jon Yeargers
My KStreams app seems to be having some memory issues.

1. I start it `java -Xmx8G -jar .jar`

2. Wait 5-10 minutes - see lots of 'org.apache.zookeeper.ClientCnxn - Got
ping response for sessionid: 0xc58abee3e13 after 0ms'  messages

3. When it _finally_ starts reading values it typically goes for a minute
or so, reads a few thousand values and then the OS kills it with 'Out of
memory' error.

The topology is (essentially):

stream->aggregate->filter->foreach

It's reading values and creating a rolling average.

During phase 2 (above) I see lots of IO wait and the 'scratch' buffer
(usually "/tmp/kafka-streams/appname") fills with 10s of Gb of .. ? (I had
to create a special scratch partition with 100Gb of space as kafka would
fill the / partition and make the system v v unhappy)

Have I misconfigured something?


One Kafka Broker Went Rogue

2016-11-29 Thread Thomas DeVoe
Hi,

I encountered a strange issue in our kafka cluster, where a single broker
randomly entered a state in which it seemed to think it was the only broker
in the cluster (it shrank all of its ISRs down to just itself). Some details
about the kafka cluster:

- running in an EC2 VPC on AWS
- 3 nodes (d2.xlarge)
- Kafka version : 0.10.1.0

More information about the incident:

Around 19:57 yesterday, one of the nodes somehow lost its connection to the
cluster and started reporting messages like this for what seemed to be all
of its hosted topic partitions:

[2016-11-28 19:57:05,426] INFO Partition [arches_stage,0] on broker 1002:
> Shrinking ISR for partition [arches_stage,0] from 1003,1002,1001 to 1002
> (kafka.cluster.Partition)
> [2016-11-28 19:57:05,466] INFO Partition [connect-offsets,13] on broker
> 1002: Shrinking ISR for partition [connect-offsets,13] from 1003,1002,1001
> to 1002 (kafka.cluster.Partition)
> [2016-11-28 19:57:05,489] INFO Partition [lasagna_prod_memstore,2] on
> broker 1002: Shrinking ISR for partition [lasagna_prod_memstore,2] from
> 1003,1002,1001 to 1002 (kafka.cluster.Partition)
> ...
>

It then added the ISRs from the other machines back in:

[2016-11-28 19:57:18,013] INFO Partition [arches_stage,0] on broker 1002:
> Expanding ISR for partition [arches_stage,0] from 1002 to 1002,1003
> (kafka.cluster.Partition)
> [2016-11-28 19:57:18,015] INFO Partition [connect-offsets,13] on broker
> 1002: Expanding ISR for partition [connect-offsets,13] from 1002 to
> 1002,1003 (kafka.cluster.Partition)
> [2016-11-28 19:57:18,018] INFO Partition [lasagna_prod_memstore,2] on
> broker 1002: Expanding ISR for partition [lasagna_prod_memstore,2] from
> 1002 to 1002,1003 (kafka.cluster.Partition)
> ...
> [2016-11-28 19:57:18,222] INFO Partition [arches_stage,0] on broker 1002:
> Expanding ISR for partition [arches_stage,0] from 1002,1003 to
> 1002,1003,1001 (kafka.cluster.Partition)
> [2016-11-28 19:57:18,224] INFO Partition [connect-offsets,13] on broker
> 1002: Expanding ISR for partition [connect-offsets,13] from 1002,1003 to
> 1002,1003,1001 (kafka.cluster.Partition)
> [2016-11-28 19:57:18,227] INFO Partition [lasagna_prod_memstore,2] on
> broker 1002: Expanding ISR for partition [lasagna_prod_memstore,2] from
> 1002,1003 to 1002,1003,1001 (kafka.cluster.Partition)


and eventually removed them again before going on its merry way:

[2016-11-28 19:58:05,408] INFO Partition [arches_stage,0] on broker 1002:
> Shrinking ISR for partition [arches_stage,0] from 1002,1003,1001 to 1002
> (kafka.cluster.Partition)
> [2016-11-28 19:58:05,415] INFO Partition [connect-offsets,13] on broker
> 1002: Shrinking ISR for partition [connect-offsets,13] from 1002,1003,1001
> to 1002 (kafka.cluster.Partition)
> [2016-11-28 19:58:05,416] INFO Partition [lasagna_prod_memstore,2] on
> broker 1002: Shrinking ISR for partition [lasagna_prod_memstore,2] from
> 1002,1003,1001 to 1002 (kafka.cluster.Partition)


Node 1002 continued running normally from that point on (apart from the fact
that all of its partitions were under-replicated). Also, there were no
WARN/ERROR messages before/after this.


The other two nodes were not so happy however, with both failing to connect
via the ReplicaFetcherThread to the node in question. They reported this
around the same time as that error:

[2016-11-28 19:57:16,087] WARN [ReplicaFetcherThread-0-1002], Error in
> fetch kafka.server.ReplicaFetcherThread$FetchRequest@6eb44718
> (kafka.server.ReplicaFetcherThread)
> java.io.IOException: Connection to 1002 was disconnected before the
> response was read
> at
> kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$extension$1$$anonfun$apply$1.apply(NetworkClientBlockingOps.scala:115)
> at
> kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$extension$1$$anonfun$apply$1.apply(NetworkClientBlockingOps.scala:112)
> at scala.Option.foreach(Option.scala:257)
> at
> kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$extension$1.apply(NetworkClientBlockingOps.scala:112)
> at
> kafka.utils.NetworkClientBlockingOps$$anonfun$blockingSendAndReceive$extension$1.apply(NetworkClientBlockingOps.scala:108)
> at
> kafka.utils.NetworkClientBlockingOps$.recursivePoll$1(NetworkClientBlockingOps.scala:137)
> at
> kafka.utils.NetworkClientBlockingOps$.kafka$utils$NetworkClientBlockingOps$$pollContinuously$extension(NetworkClientBlockingOps.scala:143)
> at
> kafka.utils.NetworkClientBlockingOps$.blockingSendAndReceive$extension(NetworkClientBlockingOps.scala:108)
> at
> kafka.server.ReplicaFetcherThread.sendRequest(ReplicaFetcherThread.scala:253)
> at
> kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:238)
> at
> kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:42)
> at
> kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:118)
> at
> 

Re: Disadvantages of Upgrading Kafka server without upgrading client libraries?

2016-11-29 Thread Thomas Becker
The only obvious downside I'm aware of is not being able to benefit
from the bugfixes in the client. We are essentially doing the same
thing; we upgraded the broker side to 0.10.0.0 but have yet to upgrade
our clients from 0.8.1.x.

On Tue, 2016-11-29 at 09:30 -0500, Tim Visher wrote:
> Hi Everyone,
>
> I have an install of Kafka 0.8.2.1 which I'm upgrading to 0.10.1.0. I
> see
> that Kafka 0.10.1.0 should be backwards compatible with client
> libraries
> written for older versions but that newer client libraries are only
> compatible with their version and up.
>
> My question is what disadvantages would there be to never upgrading
> the
> clients? I'm mainly asking because it would be advantageous to save
> some
> time here with a little technical debt if the costs weren't too high.
> If
> there are major issues then I can take on the client upgrade as well.
>
> Thanks in advance!
>
> --
>
> In Christ,
>
> Timmy V.
>
> http://blog.twonegatives.com/
> http://five.sentenc.es/ -- Spend less time on mail
--


Tommy Becker

Senior Software Engineer

O +1 919.460.4747

tivo.com






Re: Disadvantages of Upgrading Kafka server without upgrading client libraries?

2016-11-29 Thread Gwen Shapira
Most people upgrade clients to enjoy new client features, fix bugs or
improve performance. If none of these apply, no need to upgrade.

Since you are upgrading to 0.10.1.0, read the upgrade docs closely -
there are specific server settings regarding the message format that
you need to configure a certain way if the clients are not upgraded.
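
For reference, a minimal sketch of the kind of broker setting those upgrade
notes describe, assuming the clients stay on 0.8.2.x (treat the value as
illustrative and take the exact one from the official 0.10.1.0 upgrade docs):

log.message.format.version=0.8.2

Pinning the message format to the old version spares the broker from
down-converting every fetch for the older consumers; once the clients are
eventually upgraded, that setting can be bumped as well.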

Gwen

On Tue, Nov 29, 2016 at 6:30 AM, Tim Visher  wrote:
> Hi Everyone,
>
> I have an install of Kafka 0.8.2.1 which I'm upgrading to 0.10.1.0. I see
> that Kafka 0.10.1.0 should be backwards compatible with client libraries
> written for older versions but that newer client libraries are only
> compatible with their version and up.
>
> My question is what disadvantages would there be to never upgrading the
> clients? I'm mainly asking because it would be advantageous to save some
> time here with a little technical debt if the costs weren't too high. If
> there are major issues then I can take on the client upgrade as well.
>
> Thanks in advance!
>
> --
>
> In Christ,
>
> Timmy V.
>
> http://blog.twonegatives.com/
> http://five.sentenc.es/ -- Spend less time on mail



-- 
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog


Re: KStream window - end value out of bounds

2016-11-29 Thread Eno Thereska
Hi Jon,

There is an optimization in 
org.apache.kafka.streams.kstream.internals.WindowedSerializer/Deserializer 
where we don't encode and decode the end of the window since the user can 
always calculate it. So instead we return a default of Long.MAX_VALUE, which is 
the big number you see.

In other words, use window().start() but not window().end() in this case. If 
you want to print both, just add the window size to window().start().
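
For example, a small sketch of that (the window-size constant must match
whatever was passed to TimeWindows.of(...); names are illustrative):

final long windowSizeMs = 20 * 60 * 1000L;  // must match TimeWindows.of(20 * 60 * 1000)

ktAgg.toStream().foreach((windowedKey, value) -> {
    long start = windowedKey.window().start();
    long end = start + windowSizeMs;        // computed, rather than windowedKey.window().end()
    LOGGER.debug("start: {}   end: {}", start, end);
});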

Thanks
Eno
> On 29 Nov 2016, at 16:17, Jon Yeargers  wrote:
> 
> Using the following topology:
> 
> KStream kStream =
> kStreamBuilder.stream(stringSerde,transSerde,TOPIC);
>KTable ktAgg =
> kStream.groupByKey().aggregate(
>SumRecordCollector::new,
>new Aggregate(),
>TimeWindows.of(20 * 60 * 1000).advanceBy(60 * 1000),
>cSerde, "table_stream");
> 
> 
> When looking at windows as follows:
> 
>    ktAgg.toStream().foreach((postKey, postValue) -> {
>        LOGGER.debug("start: {}   end: {}",
>            postKey.window().start(), postKey.window().end());
>    });
> 
> The 'start' values are coming through properly incremented but the 'end'
> values are all 9223372036854775807.
> 
> Is there something wrong with my topology? Some other bug that would cause
> this?



Relation between segment name and message offset, if any?

2016-11-29 Thread Harald Kirsch
I thought the name of a segment contains the offset of the first message 
in that segment, but I just saw offsets being processed that would map 
to a segment file that was cleaned and is listed as empty by dir.
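
For illustration (the offsets below are made up), a partition directory
normally looks like this, the file name being the offset of the first message
in that segment, zero-padded to 20 digits:

00000000000000000000.index
00000000000000000000.log
00000000000000368769.index
00000000000000368769.log   <- first message in this segment has offset 368769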


This is 0.10.* on Windows.

Is there something strange going on, with data still mapped but the file 
empty, or does the relation between offset and name just not hold?


Thanks
Harald


Re: Messages intermittently get lost

2016-11-29 Thread Martin Gainty
Hi Zach


We don't know what's causing this intermittent problem, so let's divide and 
conquer: take each part of this problem individually, starting at the source of 
the data feeds.


Let us eliminate any potential problems with the feeds from external sources.


Once you verify the ZooKeeper feeds are 100% reliable, let's move on to Kafka.


Ping back when you have verifiable results from the ZooKeeper feeds.
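
For instance, to check each ZK node (hostnames are whatever is in your
zookeeper.connect; 'ruok' is ZooKeeper's standard four-letter-word health
check and should answer 'imok'):

sh> $ZOOKEEPER_HOME/bin/zkServer.sh status
sh> echo ruok | nc zkA 2181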


Thanks

Martin
__




From: Zac Harvey 
Sent: Tuesday, November 29, 2016 10:46 AM
To: users@kafka.apache.org
Subject: Re: Messages intermittently get lost

Does anybody have any idea why ZK might be to blame if messages sent by a Kafka 
producer fail to be received by a Kafka consumer?


From: Zac Harvey 
Sent: Monday, November 28, 2016 9:07:41 AM
To: users@kafka.apache.org
Subject: Re: Messages intermittently get lost

Thanks Martin, I will look at those links.


But you seem to be 100% confident that the problem is with ZooKeeper...can I 
ask why? What is it about my problem description that makes you think this is 
an issue with ZooKeeper?


From: Martin Gainty 
Sent: Friday, November 25, 2016 1:46:28 PM
To: users@kafka.apache.org
Subject: Re: Messages intermittently get lost




From: Zac Harvey 
Sent: Friday, November 25, 2016 6:17 AM
To: users@kafka.apache.org
Subject: Re: Messages intermittently get lost

Hi Martin,


My server.properties looks like this:


listeners=PLAINTEXT://0.0.0.0:9092

advertised.host.name=

broker.id=2

port=9092

num.partitions=4

zookeeper.connect=zkA:2181,zkB:2181,zkC:2181

MG>can you check status for each ZK Node in the quorum?

sh>$ZOOKEEPER_HOME/bin/zkServer.sh status

http://www.ibm.com/support/knowledgecenter/SSCRJU_4.0.0/com.ibm.streams.pd.doc/doc/containerstreamszookeeper.html
ZooKeeper problems and solutions - 
IBM
www.ibm.com
Use these solutions to resolve the problems that you might encounter with 
Apache ZooKeeper.


MG>*greetings from Plimoth Mass*

MG>M


num.network.threads=3

num.io.threads=8

socket.send.buffer.bytes=102400

socket.receive.buffer.bytes=102400

log.dirs=/tmp/kafka-logs

num.recovery.threads.per.data.dir=1

log.retention.hours=168

log.segment.bytes=1073741824

log.retention.check.interval.ms=30

zookeeper.connection.timeout.ms=6000

delete.topic.enable=true

auto.leader.rebalance.enable=true


Above, 'zkA', 'zkB' and 'zkC' are defined in /etc/hosts and are valid ZK 
servers, and  is the public DNS of the EC2 (AWS) node that 
this Kafka is running on.


Anything look incorrect to you?


And yes, yesterday was a holiday, but there is only work! I'll celebrate one 
big, long holiday when I'm dead!


Thanks for any-and-all input here!


Best,

Zac



From: Martin Gainty 
Sent: Thursday, November 24, 2016 9:03:33 AM
To: users@kafka.apache.org
Subject: Re: Messages intermittently get lost

Hi Zach


there is a rumour that today, Thursday, is a holiday?

in server.properties how are you configuring your server?


specifically what are these attributes?


num.network.threads=


num.io.threads=


socket.send.buffer.bytes=


socket.receive.buffer.bytes=


socket.request.max.bytes=


num.partitions=


num.recovery.threads.per.data.dir=


?

Martin
__




From: Zac Harvey 
Sent: Thursday, November 24, 2016 7:05 AM
To: users@kafka.apache.org
Subject: Re: Messages intermittently get lost

Anybody?!? This is very disconcerting!


From: Zac Harvey 
Sent: Wednesday, November 23, 2016 5:07:45 AM
To: users@kafka.apache.org
Subject: Messages intermittently get lost

I am playing around with Kafka and have a simple setup:


* 1-node Kafka (Ubuntu) server

* 3-node 

KStream window - end value out of bounds

2016-11-29 Thread Jon Yeargers
Using the following topology:

KStream kStream =
 kStreamBuilder.stream(stringSerde,transSerde,TOPIC);
KTable ktAgg =
kStream.groupByKey().aggregate(
SumRecordCollector::new,
new Aggregate(),
TimeWindows.of(20 * 60 * 1000).advanceBy(60 * 1000),
cSerde, "table_stream");


When looking at windows as follows:

   ktAgg.toStream().foreach((postKey, postValue) -> {
       LOGGER.debug("start: {}   end: {}",
           postKey.window().start(), postKey.window().end());
   });

The 'start' values are coming through properly incremented but the 'end'
values are all 9223372036854775807.

Is there something wrong with my topology? Some other bug that would cause
this?


Re: Messages intermittently get lost

2016-11-29 Thread Zac Harvey
Does anybody have any idea why ZK might be to blame if messages sent by a Kafka 
producer fail to be received by a Kafka consumer?


From: Zac Harvey 
Sent: Monday, November 28, 2016 9:07:41 AM
To: users@kafka.apache.org
Subject: Re: Messages intermittently get lost

Thanks Martin, I will look at those links.


But you seem to be 100% confident that the problem is with ZooKeeper...can I 
ask why? What is it about my problem description that makes you think this is 
an issue with ZooKeeper?


From: Martin Gainty 
Sent: Friday, November 25, 2016 1:46:28 PM
To: users@kafka.apache.org
Subject: Re: Messages intermittently get lost




From: Zac Harvey 
Sent: Friday, November 25, 2016 6:17 AM
To: users@kafka.apache.org
Subject: Re: Messages intermittently get lost

Hi Martin,


My server.properties looks like this:


listeners=PLAINTEXT://0.0.0.0:9092

advertised.host.name=

broker.id=2

port=9092

num.partitions=4

zookeeper.connect=zkA:2181,zkB:2181,zkC:2181

MG>can you check status for each ZK Node in the quorum?

sh>$ZOOKEEPER_HOME/bin/zkServer.sh status

http://www.ibm.com/support/knowledgecenter/SSCRJU_4.0.0/com.ibm.streams.pd.doc/doc/containerstreamszookeeper.html

ZooKeeper problems and solutions - 
IBM
www.ibm.com
Use these solutions to resolve the problems that you might encounter with 
Apache ZooKeeper.



MG>*greetings from Plimoth Mass*

MG>M


num.network.threads=3

num.io.threads=8

socket.send.buffer.bytes=102400

socket.receive.buffer.bytes=102400

log.dirs=/tmp/kafka-logs

num.recovery.threads.per.data.dir=1

log.retention.hours=168

log.segment.bytes=1073741824

log.retention.check.interval.ms=30

zookeeper.connection.timeout.ms=6000

delete.topic.enable=true

auto.leader.rebalance.enable=true


Above, 'zkA', 'zkB' and 'zkC' are defined in /etc/hosts and are valid ZK 
servers, and  is the public DNS of the EC2 (AWS) node that 
this Kafka is running on.


Anything look incorrect to you?


And yes, yesterday was a holiday, but there is only work! I'll celebrate one 
big, long holiday when I'm dead!


Thanks for any-and-all input here!


Best,

Zac



From: Martin Gainty 
Sent: Thursday, November 24, 2016 9:03:33 AM
To: users@kafka.apache.org
Subject: Re: Messages intermittently get lost

Hi Zach


there is a rumour that today, Thursday, is a holiday?

in server.properties how are you configuring your server?


specifically what are these attributes?


num.network.threads=


num.io.threads=


socket.send.buffer.bytes=


socket.receive.buffer.bytes=


socket.request.max.bytes=


num.partitions=


num.recovery.threads.per.data.dir=


?

Martin
__




From: Zac Harvey 
Sent: Thursday, November 24, 2016 7:05 AM
To: users@kafka.apache.org
Subject: Re: Messages intermittently get lost

Anybody?!? This is very disconcerting!


From: Zac Harvey 
Sent: Wednesday, November 23, 2016 5:07:45 AM
To: users@kafka.apache.org
Subject: Messages intermittently get lost

I am playing around with Kafka and have a simple setup:


* 1-node Kafka (Ubuntu) server

* 3-node ZK cluster (each on their own Ubuntu server)


I have a consumer written in Scala and am using the kafka-console-producer 
(v0.10) that ships with the distribution.


I'd say about 20% of the messages I send via the producer never get consumed by 
the Scala process (which is running continuously). No errors on either side 
(producer or consumer): the producer sends, and, nothing...


Any ideas as to what might be going on here, or how I could start 
troubleshooting?


Thanks!


Re: kafka 0.10.1 ProcessorTopologyTestDriver issue with .map and .groupByKey.count

2016-11-29 Thread Hamidreza Afzali
I have created a JIRA issue:

https://issues.apache.org/jira/browse/KAFKA-4461


Hamid



question about property advertised.listeners

2016-11-29 Thread Aki Yoshida
I have a question regarding how to use advertised.listeners to expose
a different port to the outside of the container where the broker is
running.

When using advertised.listeners to register a different port than the
actual broker port, do we need port forwarding from this configured
port to the actual broker port within the container, so that the broker
itself can also reach itself?
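
As a sketch of the setup in question (hostnames, ports, and the
host-to-container port mapping below are hypothetical): the broker itself binds
to the listener inside the container, while advertised.listeners is what gets
registered for clients and other brokers to connect to:

# inside the container
listeners=PLAINTEXT://0.0.0.0:9092
# reachable from outside, e.g. via a 19092 -> 9092 mapping done by the container runtime
advertised.listeners=PLAINTEXT://broker.example.com:19092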

thanks.
regards, aki


Disadvantages of Upgrading Kafka server without upgrading client libraries?

2016-11-29 Thread Tim Visher
Hi Everyone,

I have an install of Kafka 0.8.2.1 which I'm upgrading to 0.10.1.0. I see
that Kafka 0.10.1.0 should be backwards compatible with client libraries
written for older versions but that newer client libraries are only
compatible with their version and up.

My question is what disadvantages would there be to never upgrading the
clients? I'm mainly asking because it would be advantageous to save some
time here with a little technical debt if the costs weren't too high. If
there are major issues then I can take on the client upgrade as well.

Thanks in advance!

--

In Christ,

Timmy V.

http://blog.twonegatives.com/
http://five.sentenc.es/ -- Spend less time on mail


Zookeeper stack trace

2016-11-29 Thread Jon Yeargers
2016-11-29 11:56:24,069 [main-SendThread(:2181)] DEBUG
org.apache.zookeeper.ClientCnxn - Got ping response for sessionid:
0xb58abf2daaf0011 after 0ms

Exception in thread "StreamThread-1"
org.apache.kafka.streams.errors.TaskAssignmentException: unable to decode
subscription data: version=2

at
org.apache.kafka.streams.processor.internals.assignment.SubscriptionInfo.decode(SubscriptionInfo.java:113)

at
org.apache.kafka.streams.processor.internals.StreamPartitionAssignor.assign(StreamPartitionAssignor.java:237)

at
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.performAssignment(ConsumerCoordinator.java:272)

at
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.onJoinLeader(AbstractCoordinator.java:404)

at
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.access$900(AbstractCoordinator.java:81)

at
org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:358)

at
org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:340)

at
org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:679)

at
org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:658)

at
org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)

at
org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)

at
org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)

at
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:426)

at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:278)

at
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:360)

at
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:224)

at
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:192)

at
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)

at
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:243)

at
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.ensurePartitionAssignment(ConsumerCoordinator.java:366)

at
org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:978)

at
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:938)

at
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:295)

at
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:218)


Re: Question about state store

2016-11-29 Thread Eno Thereska
Hi Simon,

See if this helps:

- You can create the KTable with the state store "storeAdminItem" as you 
mentioned.
- For each element you want to check, query that state store directly to see 
whether the element is present. You can do the querying via the Interactive 
Query APIs (they basically get a handle on the state store and issue read-only 
requests to it); see the sketch right after this list.
- There is a blog with a detailed example and code at the end here: 
https://www.confluent.io/blog/unifying-stream-processing-and-interactive-queries-in-apache-kafka/
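
A minimal sketch of that second step, assuming a running KafkaStreams instance 
called "streams", the "storeAdminItem" store from your builder.table(...) call, 
and a hypothetical Item value type (the store only becomes queryable once the 
instance is running):

import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

// Read-only handle on the state store backing the KTable.
ReadOnlyKeyValueStore<String, Item> adminItems =
    streams.store("storeAdminItem", QueryableStoreTypes.keyValueStore());

// For an incoming item, check whether the admin has allowed it.
Item allowed = adminItems.get(incomingItem.getId());
if (allowed != null) {
    // allowed: save to the other DB and push the notification
}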
 


Thanks
Eno

> On 29 Nov 2016, at 05:51, Simon Teles  wrote:
> 
> Hello all,
> 
> I'm not sure how to do the following use-case, if anyone can help :)
> 
> - I have an admin UI where the admin can choose which items are allowed in the 
> application
> 
> - When the admin chooses an item, it pushes an object to a Kafka topic: 
> test.admin.item with the following object {"id":"1234", "name":"toto"}
> 
> - I have a Kafka Stream which is connected to a topic test.item which receives 
> all the item updates from our DB.
> 
> - When the stream receives an item, it needs to check if it's allowed by the 
> admin. If yes, it saves it in another DB and pushes a notification on 
> another topic if the save to the DB is OK. If not, it does nothing.
> 
> My idea is: when I start the stream, I "push" all the contents from the topic 
> test.admin.item to a state store and, when I receive a new item on test.item, 
> I check its id against the state store.
> 
> Is this the proper way ?
> 
> My problem is :
> 
> -> If I use the TopologyBuilder, I don't know how I can load the topic into a 
> state store at startup so that I can use it later in a Processor?
> 
> -> With the KStreamBuilder I can use:
> 
> KStreamBuilder builder = new KStreamBuilder();
> 
> // We create a KTable from topic test.admin.item and load it into the store 
> "storeAdminItem"
> builder.table(Serdes.String(), new SerdeItem(),"test.admin.item", 
> "storeAdminItem");
> 
> -> But with the KStreamBuilder, I don't know how I can access the state store 
> when I map/filter/etc.?
> 
> If you can help me figure it out, it would be much appreciated.
> 
> Thanks,
> 
> Regards,
> 



Re: Kafka 0.10.1.0 consumer group rebalance broken?

2016-11-29 Thread Radek Gruchalski
You’re most likely correct that it’s not that particular change.
That commit was introduced only 6 days ago, well after the 0.10.1 release.
A minimal reproducible example would be helpful, unless someone else on this
list recognizes the issue immediately.

–
Best regards,
Radek Gruchalski
ra...@gruchalski.com


On November 29, 2016 at 9:29:22 AM, Bart Vercammen (b...@cloutrix.com)
wrote:

Well, well, mr. Gruchalski, always nice to talk to you ;-)

Not sure whether it is indeed related to:
https://github.com/apache/kafka/commit/4b003d8bcfffded55a00b8ecc9eed8
eb373fb6c7#diff-d2461f78377b588c4f7e2fe8223d5679R633

But I'll have a look and will try to create a test scenario for this that's
able to reproduce the issue at hand.
I'll also include some logs in my following posts.

Thanks for the reply ... food for thought indeed ...


On Mon, Nov 28, 2016 at 10:17 PM, Radek Gruchalski 
wrote:

> There has been plenty of changes in the GroupCoordinator and co between
> these two releases:
> https://github.com/apache/kafka/commit/40b1dd3f495a59abef8a0cba545052
> 6994c92c04#diff-96e4cf31cd54def6b2fb3f7a118c1db3
>
> It might be related to this:
> https://github.com/apache/kafka/commit/4b003d8bcfffded55a00b8ecc9eed8
> eb373fb6c7#diff-d2461f78377b588c4f7e2fe8223d5679R633
>
> If your group is empty, your group is marked dead, when the group is dead,
> no matter what you do, it’ll reply with:
> https://github.com/apache/kafka/blob/trunk/core/src/
> main/scala/kafka/coordinator/GroupCoordinator.scala#L353
>
> Food for thought.
>
> –
> Best regards,
> Radek Gruchalski
> ra...@gruchalski.com
>
>
> On November 28, 2016 at 9:04:16 PM, Bart Vercammen (b...@cloutrix.com)
> wrote:
>
> Hi,
>
> It seems that consumer group rebalance is broken in Kafka 0.10.1.0 ?
> When running a small test-project :
> - consumers running in own JVM (with different 'client.id')
> - producer running in own JVM
> - kafka broker : the embedded kafka : KafkaServerStartable
>
> It looks like the consumers lose their heartbeat after a rebalance is
> triggered.
> In the logs on the consumer I can actually see that the heartbeat failed
> due to "invalid member_id"
>
> When running the exact same code on a 0.10.0.1 setup, all works perfectly.
> Anyone else seen this problem?
>
> Greets,
> Bart
>
>


--
Mvg,
Bart Vercammen

clouTrix BVBA
+32 486 69 17 68
i...@cloutrix.com


Re: Kafka 0.10.1.0 consumer group rebalance broken?

2016-11-29 Thread Bart Vercammen
Well, well, mr. Gruchalski, always nice to talk to you ;-)

Not sure whether it is indeed related to:
https://github.com/apache/kafka/commit/4b003d8bcfffded55a00b8ecc9eed8
eb373fb6c7#diff-d2461f78377b588c4f7e2fe8223d5679R633

But I'll have a look and will try to create a test scenario for this that's
able to reproduce the issue at hand.
I'll also include some logs in my following posts.

Thanks for the reply ... food for thought indeed ...


On Mon, Nov 28, 2016 at 10:17 PM, Radek Gruchalski 
wrote:

> There has been plenty of changes in the GroupCoordinator and co between
> these two releases:
> https://github.com/apache/kafka/commit/40b1dd3f495a59abef8a0cba545052
> 6994c92c04#diff-96e4cf31cd54def6b2fb3f7a118c1db3
>
> It might be related to this:
> https://github.com/apache/kafka/commit/4b003d8bcfffded55a00b8ecc9eed8
> eb373fb6c7#diff-d2461f78377b588c4f7e2fe8223d5679R633
>
> If your group is empty, your group is marked dead, when the group is dead,
> no matter what you do, it’ll reply with:
> https://github.com/apache/kafka/blob/trunk/core/src/
> main/scala/kafka/coordinator/GroupCoordinator.scala#L353
>
> Food for thought.
>
> –
> Best regards,
> Radek Gruchalski
> ra...@gruchalski.com
>
>
> On November 28, 2016 at 9:04:16 PM, Bart Vercammen (b...@cloutrix.com)
> wrote:
>
> Hi,
>
> It seems that consumer group rebalance is broken in Kafka 0.10.1.0 ?
> When running a small test-project :
> - consumers running in own JVM (with different 'client.id')
> - producer running in own JVM
> - kafka broker : the embedded kafka : KafkaServerStartable
>
> It looks like the consumers lose their heartbeat after a rebalance is
> triggered.
> In the logs on the consumer I can actually see that the heartbeat failed
> due to "invalid member_id"
>
> When running the exact same code on a 0.10.0.1 setup, all works perfectly.
> Anyone else seen this problem?
>
> Greets,
> Bart
>
>


-- 
Mvg,
Bart Vercammen

clouTrix BVBA
+32 486 69 17 68
i...@cloutrix.com