Re: [kafka-clients] [VOTE] 2.0.0 RC1

2018-06-29 Thread Manikumar
Looks like the Maven artifacts are not updated in the staging repo. They are
still at the old timestamp.
https://repository.apache.org/content/groups/staging/org/apache/kafka/kafka_2.11/2.0.0/

On Sat, Jun 30, 2018 at 12:06 AM Rajini Sivaram 
wrote:

> Hello Kafka users, developers and client-developers,
>
>
> This is the second candidate for release of Apache Kafka 2.0.0.
>
>
> This is a major version release of Apache Kafka. It includes 40 new KIPs
> and
>
> several critical bug fixes. Please see the 2.0.0 release plan for more
> details:
>
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=80448820
>
>
> A few notable highlights:
>
>- Prefixed wildcard ACLs (KIP-290), Fine grained ACLs for CreateTopics
>(KIP-277)
>- SASL/OAUTHBEARER implementation (KIP-255)
>- Improved quota communication and customization of quotas (KIP-219,
>KIP-257)
>- Efficient memory usage for down conversion (KIP-283)
>- Fix log divergence between leader and follower during fast leader
>failover (KIP-279)
>- Drop support for Java 7 and remove deprecated code including old
>scala clients
>- Connect REST extension plugin, support for externalizing secrets and
>improved error handling (KIP-285, KIP-297, KIP-298 etc.)
>- Scala API for Kafka Streams and other Streams API improvements
>(KIP-270, KIP-150, KIP-245, KIP-251 etc.)
>
> Release notes for the 2.0.0 release:
>
> http://home.apache.org/~rsivaram/kafka-2.0.0-rc1/RELEASE_NOTES.html
>
>
>
> *** Please download, test and vote by Tuesday, July 3rd, 4pm PT
>
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
>
> http://kafka.apache.org/KEYS
>
>
> * Release artifacts to be voted upon (source and binary):
>
> http://home.apache.org/~rsivaram/kafka-2.0.0-rc1/
>
>
> * Maven artifacts to be voted upon:
>
> https://repository.apache.org/content/groups/staging/
>
>
> * Javadoc:
>
> http://home.apache.org/~rsivaram/kafka-2.0.0-rc1/javadoc/
>
>
> * Tag to be voted upon (off 2.0 branch) is the 2.0.0 tag:
>
> https://github.com/apache/kafka/tree/2.0.0-rc1
>
>
> * Documentation:
>
> http://kafka.apache.org/20/documentation.html
>
>
> * Protocol:
>
> http://kafka.apache.org/20/protocol.html
>
>
> * Successful Jenkins builds for the 2.0 branch:
>
> Unit/integration tests: https://builds.apache.org/job/kafka-2.0-jdk8/66/
>
> System tests:
> https://jenkins.confluent.io/job/system-test-kafka/job/2.0/15/
>
>
>
> Please test and verify the release artifacts and submit a vote for this RC
> or report any issues so that we can fix them and roll out a new RC ASAP!
>
> Although this release vote requires PMC votes to pass, testing, votes,
> and bug
> reports are valuable and appreciated from everyone.
>
>
> Thanks,
>
>
> Rajini
>
> --
> You received this message because you are subscribed to the Google Groups
> "kafka-clients" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kafka-clients+unsubscr...@googlegroups.com.
> To post to this group, send email to kafka-clie...@googlegroups.com.
> Visit this group at https://groups.google.com/group/kafka-clients.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/kafka-clients/CAOJcB39GdTWOaK4qysvyPyGU8Ldm82t_TA364x1MP8a8OAod6A%40mail.gmail.com
> 
> .
> For more options, visit https://groups.google.com/d/optout.
>


Re: Partitions reassignment is failing in Kafka 1.1.0

2018-06-29 Thread Debraj Manna
It was a problem on my side. The code was changing the replicas count but not
the log_dirs. Since I am migrating from 0.10, this part of the code had not
been changed.

I have a follow-up question: what is the default value of log_dirs if I
don't specify it in reassignment.json?
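To my understanding (per KIP-113), omitting "log_dirs" is equivalent to specifying "any" for every replica, letting each broker pick the log directory itself. Below is a minimal, hypothetical sketch (the helper name and sample plan are made up for illustration, not part of any Kafka tool) of patching a generated reassignment plan so every partition carries a log_dirs list whose length matches its replicas list — the mismatch that triggers the AdminCommandFailedException discussed in this thread:

```python
import json

def add_log_dirs(plan):
    """For each partition in a reassignment plan, add a log_dirs list of
    "any" entries whose length matches the replicas list (required since
    KIP-113 when log_dirs is present at all)."""
    for p in plan.get("partitions", []):
        # One "any" per replica; the lengths must be equal or the
        # reassignment tool rejects the plan.
        p.setdefault("log_dirs", ["any"] * len(p["replicas"]))
    return plan

plan = {"version": 1,
        "partitions": [{"topic": "Topic3", "partition": 7,
                        "replicas": [3, 0, 2]}]}
print(json.dumps(add_log_dirs(plan)))
```

A plan patched this way keeps the broker's existing directory choice, which matches what kafka-reassign-partitions.sh --generate emits on 1.1.0.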

On Sat, Jun 30, 2018 at 11:15 AM, Debraj Manna 
wrote:

> I am generating the reassignment.json like below
>
> /home/ubuntu/deploy/kafka/bin/kafka-reassign-partitions.sh --zookeeper 
> 127.0.0.1:2181 --generate --topics-to-move-json-file 
> /home/ubuntu/deploy/kafka/topics_to_move.json --broker-list '%s' |tail -1 > 
> /home/ubuntu/deploy/kafka/reassignment.json
>
> Then I am doing the reassignment using the generated file
>
> /home/ubuntu/deploy/kafka/bin/kafka-reassign-partitions.sh --zookeeper 
> 127.0.0.1:2181 --execute --reassignment-json-file 
> /home/ubuntu/deploy/kafka/reassignment.json
>
> The kafka-reassign-partitions.sh help states:
>
> The JSON file with the partition reassignment configuration. The format to
>> use is -
>> {"partitions":[{"topic": "foo", "partition": 1, "replicas": [1,2,3],
>> "log_dirs": ["dir1","dir2","dir3"]}], "version":1} Note that "log_dirs" is
>> optional. When it is specified, its length must equal the length of the
>> replicas list. The value in this list can be either "any" or the absolute
>> path of the log directory on the broker. If absolute log directory path is
>> specified, it is currently required that the replica has not already been
>> created on that broker. The replica will then be created in the specified
>> log directory on the broker later.
>
>
> So it appears the reassignment JSON generated by
> kafka-reassign-partitions.sh is creating an issue with log_dirs. Is this
> some issue in kafka-reassign-partitions.sh or some misconfiguration on my
> side?
>
> On Sat, Jun 30, 2018 at 10:26 AM, Debraj Manna 
> wrote:
>
>> Please find the server.properties from one of the brokers.
>>
>> broker.id=0
>> port=9092
>> num.network.threads=3
>> num.io.threads=8
>> socket.send.buffer.bytes=102400
>> socket.receive.buffer.bytes=102400
>> socket.request.max.bytes=104857600
>> log.dirs=/var/lib/kafka/kafka-logs
>> num.recovery.threads.per.data.dir=1
>> log.retention.hours=36
>> log.retention.bytes=1073741824
>> log.segment.bytes=536870912
>> log.retention.check.interval.ms=30
>> log.cleaner.enable=false
>> zookeeper.connect=platform1:2181,platform2:2181,platform3:2181
>> message.max.bytes=1500
>> replica.fetch.max.bytes=1500
>> auto.create.topics.enable=true
>> zookeeper.connection.timeout.ms=6000
>> unclean.leader.election.enable=false
>> delete.topic.enable=false
>> offsets.topic.replication.factor=1
>> transaction.state.log.replication.factor=1
>> transaction.state.log.min.isr=1
>>
>> I have placed server.log from a broker at https://gist.github.com/deb
>> raj-manna/4b4bdae8a1c15c36b313a04f37e8776d
>>
>> On Sat, Jun 30, 2018 at 8:16 AM, Ted Yu  wrote:
>>
>>> Seems to be related to KIP-113.
>>>
>>> server.properties didn't go thru. Do you mind pastebin'ing its content ?
>>>
>>> If you can pastebin logs from broker, that should help.
>>>
>>> Thanks
>>>
>>> On Fri, Jun 29, 2018 at 10:37 AM, Debraj Manna >> >
>>> wrote:
>>>
>>> > Hi
>>> >
>>> > I altered a topic like below in kafka 1.1.0
>>> >
>>> > /home/ubuntu/deploy/kafka/bin/kafka-topics.sh --zookeeper
>>> 127.0.0.1:2181
>>> > --alter --topic Topic3 --config min.insync.replicas=2
>>> >
>>> > But whenever I am trying to verify the reassignment it is showing the
>>> > below exception
>>> >
>>> > /home/ubuntu/deploy/kafka/bin/kafka-reassign-partitions.sh
>>> --zookeeper 127.0.0.1:2181 --reassignment-json-file
>>> /home/ubuntu/deploy/kafka/reassignment.json --verify
>>> >
>>> > Partitions reassignment failed due to Size of replicas list Vector(3,
>>> 0, 2) is different from size of log dirs list Vector(any) for partition
>>> Topic3-7
>>> > kafka.common.AdminCommandFailedException: Size of replicas list
>>> Vector(3, 0, 2) is different from size of log dirs list Vector(any) for
>>> partition Topic3-7
>>> >   at kafka.admin.ReassignPartitionsCommand$$anonfun$parsePartitio
>>> nReassignmentData$1$$anonfun$apply$4$$anonfun$apply$5.apply(
>>> ReassignPartitionsCommand.scala:262)
>>> >   at kafka.admin.ReassignPartitionsCommand$$anonfun$parsePartitio
>>> nReassignmentData$1$$anonfun$apply$4$$anonfun$apply$5.apply(
>>> ReassignPartitionsCommand.scala:251)
>>> >   at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>>> >   at scala.collection.AbstractIterator.foreach(Iterator.scala:133
>>> 4)
>>> >   at kafka.admin.ReassignPartitionsCommand$$anonfun$parsePartitio
>>> nReassignmentData$1$$anonfun$apply$4.apply(ReassignPartition
>>> sCommand.scala:251)
>>> >   at kafka.admin.ReassignPartitionsCommand$$anonfun$parsePartitio
>>> nReassignmentData$1$$anonfun$apply$4.apply(ReassignPartition
>>> sCommand.scala:250)
>>> >   at scala.collection.immutable.List.foreach(List.scala:392)
>>> >   at kafka.admin.ReassignParti

Re: Partitions reassignment is failing in Kafka 1.1.0

2018-06-29 Thread Debraj Manna
I am generating the reassignment.json like below

/home/ubuntu/deploy/kafka/bin/kafka-reassign-partitions.sh --zookeeper
127.0.0.1:2181 --generate --topics-to-move-json-file
/home/ubuntu/deploy/kafka/topics_to_move.json --broker-list '%s' |tail
-1 > /home/ubuntu/deploy/kafka/reassignment.json

Then I am doing the reassignment using the generated file

/home/ubuntu/deploy/kafka/bin/kafka-reassign-partitions.sh --zookeeper
127.0.0.1:2181 --execute --reassignment-json-file
/home/ubuntu/deploy/kafka/reassignment.json

The kafka-reassign-partitions.sh help states:

The JSON file with the partition reassignment configuration. The format to
> use is -
> {"partitions":[{"topic": "foo", "partition": 1, "replicas": [1,2,3],
> "log_dirs": ["dir1","dir2","dir3"]}], "version":1} Note that "log_dirs" is
> optional. When it is specified, its length must equal the length of the
> replicas list. The value in this list can be either "any" or the absolute
> path of the log directory on the broker. If absolute log directory path is
> specified, it is currently required that the replica has not already been
> created on that broker. The replica will then be created in the specified
> log directory on the broker later.


So it appears the reassignment JSON generated by
kafka-reassign-partitions.sh is creating an issue with log_dirs. Is this some
issue in kafka-reassign-partitions.sh or some misconfiguration on my
side?

On Sat, Jun 30, 2018 at 10:26 AM, Debraj Manna 
wrote:

> Please find the server.properties from one of the brokers.
>
> broker.id=0
> port=9092
> num.network.threads=3
> num.io.threads=8
> socket.send.buffer.bytes=102400
> socket.receive.buffer.bytes=102400
> socket.request.max.bytes=104857600
> log.dirs=/var/lib/kafka/kafka-logs
> num.recovery.threads.per.data.dir=1
> log.retention.hours=36
> log.retention.bytes=1073741824
> log.segment.bytes=536870912
> log.retention.check.interval.ms=30
> log.cleaner.enable=false
> zookeeper.connect=platform1:2181,platform2:2181,platform3:2181
> message.max.bytes=1500
> replica.fetch.max.bytes=1500
> auto.create.topics.enable=true
> zookeeper.connection.timeout.ms=6000
> unclean.leader.election.enable=false
> delete.topic.enable=false
> offsets.topic.replication.factor=1
> transaction.state.log.replication.factor=1
> transaction.state.log.min.isr=1
>
> I have placed server.log from a broker at https://gist.github.com/
> debraj-manna/4b4bdae8a1c15c36b313a04f37e8776d
>
> On Sat, Jun 30, 2018 at 8:16 AM, Ted Yu  wrote:
>
>> Seems to be related to KIP-113.
>>
>> server.properties didn't go thru. Do you mind pastebin'ing its content ?
>>
>> If you can pastebin logs from broker, that should help.
>>
>> Thanks
>>
>> On Fri, Jun 29, 2018 at 10:37 AM, Debraj Manna 
>> wrote:
>>
>> > Hi
>> >
>> > I altered a topic like below in kafka 1.1.0
>> >
>> > /home/ubuntu/deploy/kafka/bin/kafka-topics.sh --zookeeper
>> 127.0.0.1:2181
>> > --alter --topic Topic3 --config min.insync.replicas=2
>> >
>> > But whenever I am trying to verify the reassignment it is showing the
>> > below exception
>> >
>> > /home/ubuntu/deploy/kafka/bin/kafka-reassign-partitions.sh --zookeeper
>> 127.0.0.1:2181 --reassignment-json-file 
>> /home/ubuntu/deploy/kafka/reassignment.json
>> --verify
>> >
>> > Partitions reassignment failed due to Size of replicas list Vector(3,
>> 0, 2) is different from size of log dirs list Vector(any) for partition
>> Topic3-7
>> > kafka.common.AdminCommandFailedException: Size of replicas list
>> Vector(3, 0, 2) is different from size of log dirs list Vector(any) for
>> partition Topic3-7
>> >   at kafka.admin.ReassignPartitionsCommand$$anonfun$parsePartitio
>> nReassignmentData$1$$anonfun$apply$4$$anonfun$apply$5.
>> apply(ReassignPartitionsCommand.scala:262)
>> >   at kafka.admin.ReassignPartitionsCommand$$anonfun$parsePartitio
>> nReassignmentData$1$$anonfun$apply$4$$anonfun$apply$5.
>> apply(ReassignPartitionsCommand.scala:251)
>> >   at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>> >   at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>> >   at kafka.admin.ReassignPartitionsCommand$$anonfun$parsePartitio
>> nReassignmentData$1$$anonfun$apply$4.apply(ReassignPartitio
>> nsCommand.scala:251)
>> >   at kafka.admin.ReassignPartitionsCommand$$anonfun$parsePartitio
>> nReassignmentData$1$$anonfun$apply$4.apply(ReassignPartitio
>> nsCommand.scala:250)
>> >   at scala.collection.immutable.List.foreach(List.scala:392)
>> >   at kafka.admin.ReassignPartitionsCommand$$anonfun$parsePartitio
>> nReassignmentData$1.apply(ReassignPartitionsCommand.scala:250)
>> >   at kafka.admin.ReassignPartitionsCommand$$anonfun$parsePartitio
>> nReassignmentData$1.apply(ReassignPartitionsCommand.scala:249)
>> >   at scala.collection.immutable.List.foreach(List.scala:392)
>> >   at kafka.admin.ReassignPartitionsCommand$.parsePartitionReassig
>> nmentData(ReassignPartitionsCommand.scala:249)
>> >   at kafka.admin.Reassi

Re: [kafka-clients] [VOTE] 1.1.1 RC2

2018-06-29 Thread Matthias J. Sax
Hi Dong,

it seems that the kafka-streams-quickstart artifacts are missing. Is it
just me or is the RC incomplete?


-Matthias


On 6/29/18 4:07 PM, Rajini Sivaram wrote:
> Hi Dong,
> 
> +1 (binding)
> 
> Verified binary using quick start, ran tests from source, checked
> release notes.
> 
> Thanks for running the release!
> 
> Regards,
> 
> Rajini
> 
> On Fri, Jun 29, 2018 at 11:11 PM, Jun Rao  > wrote:
> 
> Hi, Dong,
> 
> Thanks for running the release. Verified quickstart on scala 2.12
> binary. +1
> 
> Jun
> 
> On Thu, Jun 28, 2018 at 6:12 PM, Dong Lin  > wrote:
> 
> > Hello Kafka users, developers and client-developers,
> >
> > This is the second candidate for release of Apache Kafka 1.1.1.
> >
> > Apache Kafka 1.1.1 is a bug-fix release for the 1.1 branch that
> was first
> > released with 1.1.0 about 3 months ago. We have fixed about 25
> issues since
> > that release. A few of the more significant fixes include:
> >
> > KAFKA-6925  > - Fix
> > memory leak in StreamsMetricsThreadImpl
> > KAFKA-6937  > - In-sync
> > replica delayed during fetch if replica throttle is exceeded
> > KAFKA-6917  > - Process
> > txn completion asynchronously to avoid deadlock
> > KAFKA-6893  > - Create
> > processors before starting acceptor to avoid ArithmeticException
> > KAFKA-6870  > -
> > Fix ConcurrentModificationException in SampledStat
> > KAFKA-6878  > - Fix
> > NullPointerException when querying global state store
> > KAFKA-6879  > - Invoke
> > session init callbacks outside lock to avoid Controller deadlock
> > KAFKA-6857  > - Prevent
> > follower from truncating to the wrong offset if undefined leader
> epoch is
> > requested
> > KAFKA-6854  > - Log
> > cleaner fails with transaction markers that are deleted during clean
> > KAFKA-6747  > - Check
> > whether there is in-flight transaction before aborting transaction
> > KAFKA-6748  > - Double
> > check before scheduling a new task after the punctuate call
> > KAFKA-6739  > -
> > Fix IllegalArgumentException when down-converting from V2 to V0/V1
> > KAFKA-6728  > -
> > Fix NullPointerException when instantiating the HeaderConverter
> >
> > Kafka 1.1.1 release plan:
> >
> https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+1.1.1
> 
> >
> > Release notes for the 1.1.1 release:
> > http://home.apache.org/~lindong/kafka-1.1.1-rc2/RELEASE_NOTES.html
> 
> >
> > *** Please download, test and vote by Thursday, July 3, 12pm PT ***
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > http://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> > http://home.apache.org/~lindong/kafka-1.1.1-rc2/
> 
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/
> 
> >
> > * Javadoc:
> > http://home.apache.org/~lindong/kafka-1.1.1-rc2/javadoc/
> 
> >
> > * Tag to be voted upon (off 1.1 branch) is the 1.1.1-rc2 tag:
> > https://github.com/apache/kafka/tree/1.1.1-rc2
> 

[VOTE] 1.0.2 RC1

2018-06-29 Thread Matthias J. Sax
Hello Kafka users, developers and client-developers,

This is the second candidate for release of Apache Kafka 1.0.2.

This is a bug fix release addressing 27 tickets:
https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+1.0.2

Release notes for the 1.0.2 release:
http://home.apache.org/~mjsax/kafka-1.0.2-rc1/RELEASE_NOTES.html

*** Please download, test and vote by end of next week (7/6/18).

Kafka's KEYS file containing PGP keys we use to sign the release:
http://kafka.apache.org/KEYS

* Release artifacts to be voted upon (source and binary):
http://home.apache.org/~mjsax/kafka-1.0.2-rc1/

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/

* Javadoc:
http://home.apache.org/~mjsax/kafka-1.0.2-rc1/javadoc/

* Tag to be voted upon (off 1.0 branch) is the 1.0.2 tag:
https://github.com/apache/kafka/releases/tag/1.0.2-rc1

* Documentation:
http://kafka.apache.org/10/documentation.html

* Protocol:
http://kafka.apache.org/10/protocol.html

* Successful Jenkins builds for the 1.0 branch:
Unit/integration tests: https://builds.apache.org/job/kafka-1.0-jdk7/214/
System tests:
https://jenkins.confluent.io/job/system-test-kafka/job/1.0/225/

/**

Thanks,
  -Matthias






Re: Partitions reassignment is failing in Kafka 1.1.0

2018-06-29 Thread Debraj Manna
Please find the server.properties from one of the brokers.

broker.id=0
port=9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/var/lib/kafka/kafka-logs
num.recovery.threads.per.data.dir=1
log.retention.hours=36
log.retention.bytes=1073741824
log.segment.bytes=536870912
log.retention.check.interval.ms=30
log.cleaner.enable=false
zookeeper.connect=platform1:2181,platform2:2181,platform3:2181
message.max.bytes=1500
replica.fetch.max.bytes=1500
auto.create.topics.enable=true
zookeeper.connection.timeout.ms=6000
unclean.leader.election.enable=false
delete.topic.enable=false
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

I have placed server.log from a broker at
https://gist.github.com/debraj-manna/4b4bdae8a1c15c36b313a04f37e8776d

On Sat, Jun 30, 2018 at 8:16 AM, Ted Yu  wrote:

> Seems to be related to KIP-113.
>
> server.properties didn't go thru. Do you mind pastebin'ing its content ?
>
> If you can pastebin logs from broker, that should help.
>
> Thanks
>
> On Fri, Jun 29, 2018 at 10:37 AM, Debraj Manna 
> wrote:
>
> > Hi
> >
> > I altered a topic like below in kafka 1.1.0
> >
> > /home/ubuntu/deploy/kafka/bin/kafka-topics.sh --zookeeper 127.0.0.1:2181
> > --alter --topic Topic3 --config min.insync.replicas=2
> >
> > But whenever I am trying to verify the reassignment it is showing the
> > below exception
> >
> > /home/ubuntu/deploy/kafka/bin/kafka-reassign-partitions.sh --zookeeper
> 127.0.0.1:2181 --reassignment-json-file 
> /home/ubuntu/deploy/kafka/reassignment.json
> --verify
> >
> > Partitions reassignment failed due to Size of replicas list Vector(3, 0,
> 2) is different from size of log dirs list Vector(any) for partition
> Topic3-7
> > kafka.common.AdminCommandFailedException: Size of replicas list
> Vector(3, 0, 2) is different from size of log dirs list Vector(any) for
> partition Topic3-7
> >   at kafka.admin.ReassignPartitionsCommand$$anonfun$
> parsePartitionReassignmentData$1$$anonfun$apply$4$$anonfun$apply$5.apply(
> ReassignPartitionsCommand.scala:262)
> >   at kafka.admin.ReassignPartitionsCommand$$anonfun$
> parsePartitionReassignmentData$1$$anonfun$apply$4$$anonfun$apply$5.apply(
> ReassignPartitionsCommand.scala:251)
> >   at scala.collection.Iterator$class.foreach(Iterator.scala:891)
> >   at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
> >   at kafka.admin.ReassignPartitionsCommand$$anonfun$
> parsePartitionReassignmentData$1$$anonfun$apply$4.apply(
> ReassignPartitionsCommand.scala:251)
> >   at kafka.admin.ReassignPartitionsCommand$$anonfun$
> parsePartitionReassignmentData$1$$anonfun$apply$4.apply(
> ReassignPartitionsCommand.scala:250)
> >   at scala.collection.immutable.List.foreach(List.scala:392)
> >   at kafka.admin.ReassignPartitionsCommand$$anonfun$
> parsePartitionReassignmentData$1.apply(ReassignPartitionsCommand.
> scala:250)
> >   at kafka.admin.ReassignPartitionsCommand$$anonfun$
> parsePartitionReassignmentData$1.apply(ReassignPartitionsCommand.
> scala:249)
> >   at scala.collection.immutable.List.foreach(List.scala:392)
> >   at kafka.admin.ReassignPartitionsCommand$.
> parsePartitionReassignmentData(ReassignPartitionsCommand.scala:249)
> >   at kafka.admin.ReassignPartitionsCommand$.verifyAssignment(
> ReassignPartitionsCommand.scala:90)
> >   at kafka.admin.ReassignPartitionsCommand$.verifyAssignment(
> ReassignPartitionsCommand.scala:84)
> >   at kafka.admin.ReassignPartitionsCommand$.main(
> ReassignPartitionsCommand.scala:58)
> >   at kafka.admin.ReassignPartitionsCommand.main(
> ReassignPartitionsCommand.scala)
> >
> >
> > My reassignment.json & server.properties is attached. Same thing used to
> > work fine in kafka 0.10. Can someone let me know what is going wrong? Is
> > anything changed related to this in kafka 1.1.0 ?
> >
>


Re: Partitions reassignment is failing in Kafka 1.1.0

2018-06-29 Thread Ted Yu
Seems to be related to KIP-113.

server.properties didn't go thru. Do you mind pastebin'ing its content ?

If you can pastebin logs from broker, that should help.

Thanks

On Fri, Jun 29, 2018 at 10:37 AM, Debraj Manna 
wrote:

> Hi
>
> I altered a topic like below in kafka 1.1.0
>
> /home/ubuntu/deploy/kafka/bin/kafka-topics.sh --zookeeper 127.0.0.1:2181
> --alter --topic Topic3 --config min.insync.replicas=2
>
> But whenever I am trying to verify the reassignment it is showing the
> below exception
>
> /home/ubuntu/deploy/kafka/bin/kafka-reassign-partitions.sh --zookeeper 
> 127.0.0.1:2181 --reassignment-json-file 
> /home/ubuntu/deploy/kafka/reassignment.json --verify
>
> Partitions reassignment failed due to Size of replicas list Vector(3, 0, 2) 
> is different from size of log dirs list Vector(any) for partition Topic3-7
> kafka.common.AdminCommandFailedException: Size of replicas list Vector(3, 0, 
> 2) is different from size of log dirs list Vector(any) for partition Topic3-7
>   at 
> kafka.admin.ReassignPartitionsCommand$$anonfun$parsePartitionReassignmentData$1$$anonfun$apply$4$$anonfun$apply$5.apply(ReassignPartitionsCommand.scala:262)
>   at 
> kafka.admin.ReassignPartitionsCommand$$anonfun$parsePartitionReassignmentData$1$$anonfun$apply$4$$anonfun$apply$5.apply(ReassignPartitionsCommand.scala:251)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>   at 
> kafka.admin.ReassignPartitionsCommand$$anonfun$parsePartitionReassignmentData$1$$anonfun$apply$4.apply(ReassignPartitionsCommand.scala:251)
>   at 
> kafka.admin.ReassignPartitionsCommand$$anonfun$parsePartitionReassignmentData$1$$anonfun$apply$4.apply(ReassignPartitionsCommand.scala:250)
>   at scala.collection.immutable.List.foreach(List.scala:392)
>   at 
> kafka.admin.ReassignPartitionsCommand$$anonfun$parsePartitionReassignmentData$1.apply(ReassignPartitionsCommand.scala:250)
>   at 
> kafka.admin.ReassignPartitionsCommand$$anonfun$parsePartitionReassignmentData$1.apply(ReassignPartitionsCommand.scala:249)
>   at scala.collection.immutable.List.foreach(List.scala:392)
>   at 
> kafka.admin.ReassignPartitionsCommand$.parsePartitionReassignmentData(ReassignPartitionsCommand.scala:249)
>   at 
> kafka.admin.ReassignPartitionsCommand$.verifyAssignment(ReassignPartitionsCommand.scala:90)
>   at 
> kafka.admin.ReassignPartitionsCommand$.verifyAssignment(ReassignPartitionsCommand.scala:84)
>   at 
> kafka.admin.ReassignPartitionsCommand$.main(ReassignPartitionsCommand.scala:58)
>   at 
> kafka.admin.ReassignPartitionsCommand.main(ReassignPartitionsCommand.scala)
>
>
> My reassignment.json & server.properties is attached. Same thing used to
> work fine in kafka 0.10. Can someone let me know what is going wrong? Is
> anything changed related to this in kafka 1.1.0 ?
>


Re: Partitions reassignment is failing in Kafka 1.1.0

2018-06-29 Thread Debraj Manna
Hi

Anyone any thoughts?

On Fri 29 Jun, 2018, 11:07 PM Debraj Manna, 
wrote:

> Hi
>
> I altered a topic like below in kafka 1.1.0
>
> /home/ubuntu/deploy/kafka/bin/kafka-topics.sh --zookeeper 127.0.0.1:2181
> --alter --topic Topic3 --config min.insync.replicas=2
>
> But whenever I am trying to verify the reassignment it is showing the
> below exception
>
> /home/ubuntu/deploy/kafka/bin/kafka-reassign-partitions.sh --zookeeper 
> 127.0.0.1:2181 --reassignment-json-file 
> /home/ubuntu/deploy/kafka/reassignment.json --verify
>
> Partitions reassignment failed due to Size of replicas list Vector(3, 0, 2) 
> is different from size of log dirs list Vector(any) for partition Topic3-7
> kafka.common.AdminCommandFailedException: Size of replicas list Vector(3, 0, 
> 2) is different from size of log dirs list Vector(any) for partition Topic3-7
>   at 
> kafka.admin.ReassignPartitionsCommand$$anonfun$parsePartitionReassignmentData$1$$anonfun$apply$4$$anonfun$apply$5.apply(ReassignPartitionsCommand.scala:262)
>   at 
> kafka.admin.ReassignPartitionsCommand$$anonfun$parsePartitionReassignmentData$1$$anonfun$apply$4$$anonfun$apply$5.apply(ReassignPartitionsCommand.scala:251)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>   at 
> kafka.admin.ReassignPartitionsCommand$$anonfun$parsePartitionReassignmentData$1$$anonfun$apply$4.apply(ReassignPartitionsCommand.scala:251)
>   at 
> kafka.admin.ReassignPartitionsCommand$$anonfun$parsePartitionReassignmentData$1$$anonfun$apply$4.apply(ReassignPartitionsCommand.scala:250)
>   at scala.collection.immutable.List.foreach(List.scala:392)
>   at 
> kafka.admin.ReassignPartitionsCommand$$anonfun$parsePartitionReassignmentData$1.apply(ReassignPartitionsCommand.scala:250)
>   at 
> kafka.admin.ReassignPartitionsCommand$$anonfun$parsePartitionReassignmentData$1.apply(ReassignPartitionsCommand.scala:249)
>   at scala.collection.immutable.List.foreach(List.scala:392)
>   at 
> kafka.admin.ReassignPartitionsCommand$.parsePartitionReassignmentData(ReassignPartitionsCommand.scala:249)
>   at 
> kafka.admin.ReassignPartitionsCommand$.verifyAssignment(ReassignPartitionsCommand.scala:90)
>   at 
> kafka.admin.ReassignPartitionsCommand$.verifyAssignment(ReassignPartitionsCommand.scala:84)
>   at 
> kafka.admin.ReassignPartitionsCommand$.main(ReassignPartitionsCommand.scala:58)
>   at 
> kafka.admin.ReassignPartitionsCommand.main(ReassignPartitionsCommand.scala)
>
>
> My reassignment.json & server.properties is attached. Same thing used to
> work fine in kafka 0.10. Can someone let me know what is going wrong? Is
> anything changed related to this in kafka 1.1.0 ?
>


Re: Visible intermediate state for KGroupedTable aggregation

2018-06-29 Thread Matthias J. Sax
I see what you are saying -- this is what I meant by eventual
consistency guarantee.

What is unclear to me is, why you need to re-group the KTable? The issue
you describe only occurs for this operation.

-Matthias

On 6/29/18 3:44 PM, Thilo-Alexander Ginkel wrote:
> On Sat, Jun 30, 2018 at 12:16 AM Matthias J. Sax  
> wrote:
>> You cannot suppress those records, because both are required for
>> correctness. Note, that each event might go to a different instance in
>> the downstream aggregation -- that's why both records are required.
>>
>> Not sure what the problem for your business logic is. Note, that Kafka
>> Streams provides eventual consistency guarantees. What guarantee do you
>> need?
> 
> Let's say, I have a stream of stock orders each of which is associated
> with a state. I'd like to aggregate the orders in records containing a
> map (state -> quantity) grouped by (account#, security id)
> representing changes in exposure. Currently, as an order advances its
> state (e.g., from "new" to "filled") the order shortly disappears from
> the "new" bucket before it appears in the "filled" bucket. As long as
> subsequent processing steps are performed by Kafka Streams that's not
> a big deal, but things get tricky once legacy systems are involved
> where you can't simply undo a transaction.
> 
> Having the ability to collapse both records would simplify this
> tremendously. The key will remain the same for both of them.
> 
> I hope this clarifies the scenario a little bit.
> 
> Thanks,
> Thilo
> 
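To make the "both records are required" point concrete, here is a toy model in plain Python (not the Streams API — the helper, field names, and numbers are invented for illustration): when a row in a re-grouped KTable changes buckets, the aggregation receives a subtract event for the old value and then an add event for the new one, so there is an intermediate state where the quantity is in neither bucket — the effect Thilo describes.

```python
from collections import defaultdict

def apply_update(agg, old, new):
    """Apply one KTable row change to a grouped aggregate the way a
    groupBy/aggregate pipeline would: a subtract event for the old value,
    then an add event for the new value. Returns the aggregate snapshot
    after each event."""
    snapshots = []
    if old is not None:
        agg[old["state"]] -= old["qty"]
        snapshots.append(dict(agg))   # intermediate: old bucket already decremented
    if new is not None:
        agg[new["state"]] += new["qty"]
        snapshots.append(dict(agg))   # final, eventually-consistent state
    return snapshots

exposure = defaultdict(int, {"new": 100})      # 100 shares in state "new"
steps = apply_update(exposure,
                     {"state": "new", "qty": 100},
                     {"state": "filled", "qty": 100})
print(steps)   # [{'new': 0}, {'new': 0, 'filled': 100}]
```

The first snapshot is the transient state a downstream consumer can observe between the two events; Streams only guarantees that the second, consistent state is eventually reached.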





Re: [kafka-clients] [VOTE] 0.11.0.3 RC0

2018-06-29 Thread Rajini Sivaram
Hi Matthias,

+1 (binding)

Verified binary using quick start, verified source by building and running
tests, checked release notes.

Thanks for running the release!

Regards,

Rajini


On Fri, Jun 29, 2018 at 11:07 PM, Jun Rao  wrote:

> Hi, Matthias,
>
> Thanks for running the release. Verified quickstart on scala 2.12 binary.
> +1
>
> Jun
>
> On Fri, Jun 22, 2018 at 3:14 PM, Matthias J. Sax 
> wrote:
>
> > Hello Kafka users, developers and client-developers,
> >
> > This is the first candidate for release of Apache Kafka 0.11.0.3.
> >
> > This is a bug fix release closing 27 tickets:
> > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+0.11.0.3
> >
> > Release notes for the 0.11.0.3 release:
> > http://home.apache.org/~mjsax/kafka-0.11.0.3-rc0/RELEASE_NOTES.html
> >
> > *** Please download, test and vote by Tuesday, 6/26/18 end-of-day, so we
> > can close the vote on Wednesday.
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > http://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> > http://home.apache.org/~mjsax/kafka-0.11.0.3-rc0/
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/
> >
> > * Javadoc:
> > http://home.apache.org/~mjsax/kafka-0.11.0.3-rc0/javadoc/
> >
> > * Tag to be voted upon (off 0.11.0 branch) is the 0.11.0.3 tag:
> > https://github.com/apache/kafka/releases/tag/0.11.0.3-rc0
> >
> > * Documentation:
> > http://kafka.apache.org/0110/documentation.html
> >
> > * Protocol:
> > http://kafka.apache.org/0110/protocol.html
> >
> > * Successful Jenkins builds for the 0.11.0 branch:
> > Unit/integration tests: https://builds.apache.org/job/
> > kafka-0.11.0-jdk7/385/
> > System tests:
> > https://jenkins.confluent.io/job/system-test-kafka/job/0.11.0/217/
> >
> > /**
> >
> > Thanks,
> >   -Matthias
> >
> >
>


Re: Possible bug? Duplicates when searching kafka stream state store with caching

2018-06-29 Thread Guozhang Wang
Hello Christian,

Since you are calling fetch(key, start, end), I'm assuming that duplicateStore
is a WindowedStore. With a windowed store, it is possible that a single key
falls into multiple windows, and hence is returned more than once from the
WindowStoreIterator; note its entry type is <Long, V>.

So I'd first want to know:

1) which Kafka version you are using, and
2) why you need a window store -- and if you do, whether you could use the
single-point fetch (added in KAFKA-6560) instead of the range query, which
is more expensive as well.
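[Editor's note] The point that one key can fall into several windows can be seen with a store-free sketch. This hypothetical `WindowDemo` class (plain JDK, not the Streams API) computes the hopping windows a timestamped record belongs to, assuming a given window size and advance:

```java
import java.util.ArrayList;
import java.util.List;

public class WindowDemo {

    // A record at timestamp `ts` belongs to every hopping window whose
    // start lies in (ts - windowSize, ts] and is aligned to `advance`.
    static List<Long> windowStartsFor(long ts, long windowSize, long advance) {
        List<Long> starts = new ArrayList<>();
        long first = Math.max(0, ts - windowSize + advance) / advance * advance;
        for (long start = first; start <= ts; start += advance) {
            starts.add(start);
        }
        return starts;
    }

    public static void main(String[] args) {
        // Size 10 ms, advance 5 ms: a record at t=12 lands in the windows
        // starting at 5 and 10, so a range fetch can return its key twice.
        System.out.println(windowStartsFor(12, 10, 5)); // [5, 10]
    }
}
```

With tumbling windows (advance == size) each record lands in exactly one window, which is one reason the single-point fetch avoids this class of surprise.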



Guozhang


On Fri, Jun 29, 2018 at 11:38 AM, Christian Henry <
christian.henr...@gmail.com> wrote:

> Hi all,
>
> I'll first describe a simplified view of relevant parts of our setup (which
> should be enough to repro), describe the behavior we're seeing, and then
> note some information I've come across after digging in a bit.
>
> We have a kafka stream application, and one of our transform steps keeps a
> state store to filter out messages with a previously seen GUID. That is,
> our transform looks like:
>
> public KeyValue<byte[], String> transform(byte[] key, String guid) {
>     // Drop the message when its GUID was already seen within the window.
>     try (WindowStoreIterator<String> iterator =
>             duplicateStore.fetch(guid, start, now)) {
>         if (iterator.hasNext()) {
>             return null;
>         } else {
>             duplicateStore.put(guid, someMetadata); // metadata elided in the original
>             return new KeyValue<>(key, guid);
>         }
>     }
> }
>
> where the duplicateStore is a persistent windowed store with caching
> enabled.
>
> I was debugging some tests and found that sometimes when calling *all()*
> or *fetchAll()* on the duplicate store and stepping through the iterator,
> it would return the same guid more than once, even if it was only inserted
> into the store once. More specifically, if I had the following guids sent
> to the stream: [1, 2, ... 9] (for 9 values total), sometimes it would
> return 10 values, with one (or more) of the values being returned twice by
> the iterator. However, this would not show up with a *fetch(guid)* on that
> specific guid. For instance, if 1 was being returned twice by *fetchAll()*,
> calling *duplicateStore.fetch("1", start, end)* will still return an
> iterator with size of 1.
>
> I dug into this a bit more by setting a breakpoint in
> *SegmentedCacheFunction#compareSegmentedKeys(cacheKey,
> storeKey)* and watching the two input values as I looped through the
> iterator using "*while(iterator.hasNext()) { print(iterator.next()) }*". In
> one test, the duplicate value was 6, and saw the following behavior
> (trimming off the segment values from the byte input):
> -- compareSegmentedKeys(cacheKey = 6, storeKey = 2)
> -- next() returns 6
> and
> -- compareSegmentedKeys(cacheKey = 7, storeKey = 6)
> -- next() returns 6
> Besides those, the input values are the same and the output is as expected.
> Additionally, a coworker noted that the number of duplicates always matches
> the number of times *Long.compare(cacheSegmentId, storeSegmentId)* returns
> a non-zero value, indicating that the duplicates likely arise from the
> segment comparison.
>
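[Editor's note] Until the root cause is resolved, duplicates like those described above could be masked at read time. This is a hedged sketch (hypothetical `DedupIterator`, plain JDK, assuming the duplicate entries come back consecutively, as in the traces above), not a fix from the thread:

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;

// Wraps an iterator and drops consecutive elements equal to the last one
// returned, masking back-to-back duplicates from a range scan.
public class DedupIterator<T> implements Iterator<T> {
    private final Iterator<T> inner;
    private T next; // next distinct element, or null when exhausted
    private T last; // last element returned

    public DedupIterator(Iterator<T> inner) {
        this.inner = inner;
        advance();
    }

    private void advance() {
        next = null;
        while (inner.hasNext()) {
            T candidate = inner.next();
            if (!candidate.equals(last)) {
                next = candidate;
                return;
            }
        }
    }

    @Override public boolean hasNext() { return next != null; }

    @Override public T next() {
        if (next == null) throw new NoSuchElementException();
        last = next;
        advance();
        return last;
    }

    public static void main(String[] args) {
        List<String> withDup = Arrays.asList("guid-1", "guid-2", "guid-2", "guid-3");
        Iterator<String> it = new DedupIterator<>(withDup.iterator());
        while (it.hasNext()) System.out.println(it.next());
    }
}
```

If the duplicates turned out not to be adjacent, a seen-set would be needed instead of comparing against only the previous element.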



-- 
-- Guozhang


Re: [kafka-clients] [VOTE] 1.1.1 RC2

2018-06-29 Thread Rajini Sivaram
Hi Dong,

+1 (binding)

Verified binary using quick start, ran tests from source, checked release
notes.

Thanks for running the release!

Regards,

Rajini

On Fri, Jun 29, 2018 at 11:11 PM, Jun Rao  wrote:

> Hi, Dong,
>
> Thanks for running the release. Verified quickstart on scala 2.12 binary.
> +1
>
> Jun
>
> On Thu, Jun 28, 2018 at 6:12 PM, Dong Lin  wrote:
>
> > Hello Kafka users, developers and client-developers,
> >
> > This is the second candidate for release of Apache Kafka 1.1.1.
> >
> > Apache Kafka 1.1.1 is a bug-fix release for the 1.1 branch that was first
> > released with 1.1.0 about 3 months ago. We have fixed about 25 issues
> since
> > that release. A few of the more significant fixes include:
> >
> > KAFKA-6925  - Fix
> > memory leak in StreamsMetricsThreadImpl
> > KAFKA-6937  - In-sync
> > replica delayed during fetch if replica throttle is exceeded
> > KAFKA-6917  - Process
> > txn completion asynchronously to avoid deadlock
> > KAFKA-6893  - Create
> > processors before starting acceptor to avoid ArithmeticException
> > KAFKA-6870  -
> > Fix ConcurrentModificationException in SampledStat
> > KAFKA-6878  - Fix
> > NullPointerException when querying global state store
> > KAFKA-6879  - Invoke
> > session init callbacks outside lock to avoid Controller deadlock
> > KAFKA-6857  - Prevent
> > follower from truncating to the wrong offset if undefined leader epoch is
> > requested
> > KAFKA-6854  - Log
> > cleaner fails with transaction markers that are deleted during clean
> > KAFKA-6747  - Check
> > whether there is in-flight transaction before aborting transaction
> > KAFKA-6748  - Double
> > check before scheduling a new task after the punctuate call
> > KAFKA-6739  -
> > Fix IllegalArgumentException when down-converting from V2 to V0/V1
> > KAFKA-6728  -
> > Fix NullPointerException when instantiating the HeaderConverter
> >
> > Kafka 1.1.1 release plan:
> > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+1.1.1
> >
> > Release notes for the 1.1.1 release:
> > http://home.apache.org/~lindong/kafka-1.1.1-rc2/RELEASE_NOTES.html
> >
> > *** Please download, test and vote by Thursday, July 3, 12pm PT ***
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > http://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> > http://home.apache.org/~lindong/kafka-1.1.1-rc2/
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/
> >
> > * Javadoc:
> > http://home.apache.org/~lindong/kafka-1.1.1-rc2/javadoc/
> >
> > * Tag to be voted upon (off 1.1 branch) is the 1.1.1-rc2 tag:
> > https://github.com/apache/kafka/tree/1.1.1-rc2
> >
> > * Documentation:
> > http://kafka.apache.org/11/documentation.html
> >
> > * Protocol:
> > http://kafka.apache.org/11/protocol.html
> >
> > * Successful Jenkins builds for the 1.1 branch:
> > Unit/integration tests: https://builds.apache.org/job/kafka-1.1-jdk7/157/
> > System tests: https://jenkins.confluent.io/job/system-test-kafka-branch-builder/1817
> >
> >
> > Please test and verify the release artifacts and submit a vote for this
> RC,
> > or report any issues so we can fix them and get a new RC out ASAP.
> Although
> > this release vote requires PMC votes to pass, testing, votes, and bug
> > reports are valuable and appreciated from everyone.
> >
> >
> > Regards,
> > Dong
> >
> >
> > --
> > You received this message because you are subscribed to the Google Groups
> > "kafka-clients" group.
> > To unsubscribe from this group and stop receiving emails from it, send an
> > email to kafka-clients+unsubscr...@googlegroups.com.
> > To post to this group, send email to kafka-clie...@googlegroups.com.
> > Visit this group at https://groups.google.com/group/kafka-clients.
> > To view this discussion on the web visit https://groups.google.com/d/
> > msgid/kafka-clients/CAAaarBb1KsyD_KLuz6V4pfKQiUNQFLb9Lb_eNU%
> > 2BsWjd7Vr%2B_%2Bw%40mail.gmail.com
> > .
> > For more options, visit https://groups.google.com/d/optout.
> >
>


Re: Visible intermediate state for KGroupedTable aggregation

2018-06-29 Thread Thilo-Alexander Ginkel
On Sat, Jun 30, 2018 at 12:16 AM Matthias J. Sax  wrote:
> You cannot suppress those records, because both are required for
> correctness. Note, that each event might go to a different instance in
> the downstream aggregation -- that's why both records are required.
>
> Not sure what the problem for your business logic is. Note, that Kafka
> Streams provides eventual consistency guarantees. What guarantee do you
> need?

Let's say, I have a stream of stock orders each of which is associated
with a state. I'd like to aggregate the orders in records containing a
map (state -> quantity) grouped by (account#, security id)
representing changes in exposure. Currently, as an order advances its
state (e.g., from "new" to "filled"), the order briefly disappears from
the "new" bucket before it appears in the "filled" bucket. As long as
subsequent processing steps are performed by Kafka Streams that's not
a big deal, but things get tricky once legacy systems are involved
where you can't simply undo a transaction.

Having the ability to collapse both records would simplify this
tremendously. The key will remain the same for both of them.
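[Editor's note] The sequence Thilo describes can be illustrated with a plain-Java sketch (hypothetical `ExposureAggregate` class, not the Streams API) of an adder/subtractor pair over a (state -> quantity) map, showing the intermediate result that becomes visible downstream when an order moves from "new" to "filled":

```java
import java.util.HashMap;
import java.util.Map;

public class ExposureAggregate {

    // Adder: add an order's quantity to its state bucket.
    static Map<String, Long> add(Map<String, Long> agg, String state, long qty) {
        Map<String, Long> out = new HashMap<>(agg);
        out.merge(state, qty, Long::sum);
        return out;
    }

    // Subtractor: remove the old record's contribution.
    static Map<String, Long> subtract(Map<String, Long> agg, String state, long qty) {
        Map<String, Long> out = new HashMap<>(agg);
        out.merge(state, -qty, Long::sum);
        if (out.get(state) == 0L) out.remove(state); // drop empty buckets
        return out;
    }

    public static void main(String[] args) {
        Map<String, Long> agg = new HashMap<>();
        agg = add(agg, "new", 100);                                 // order arrives: {new=100}
        // Order transitions new -> filled: the KTable update fires the
        // subtractor first, and that empty intermediate state is what a
        // downstream consumer may observe...
        Map<String, Long> intermediate = subtract(agg, "new", 100); // {}
        // ...before the adder produces the final state.
        Map<String, Long> fin = add(intermediate, "filled", 100);   // {filled=100}
        System.out.println(intermediate + " then " + fin);
    }
}
```

Collapsing the two updates would require treating the subtract/add pair as one atomic transition, which the DSL does not expose here.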

I hope this clarifies the scenario a little bit.

Thanks,
Thilo


Re: Visible intermediate state for KGroupedTable aggregation

2018-06-29 Thread Matthias J. Sax
Hi,

You cannot suppress those records, because both are required for
correctness. Note, that each event might go to a different instance in
the downstream aggregation -- that's why both records are required.

Not sure what the problem for your business logic is. Note, that Kafka
Streams provides eventual consistency guarantees. What guarantee do you
need?


-Matthias



On 6/29/18 12:22 PM, Thilo-Alexander Ginkel wrote:
> Hello everyone,
> 
> I have implemented a Kafka Streams service using the Streams DSL.
> Within the topology I am using a KGroupedTable, on which I perform an
> aggregate using an adder and subtractor. AFAICS (at least when using
> TopologyTestDriver) the intermediate state created by the subtractor
> is pushed downstream as an update followed by another update after the
> adder has been called.
> 
> Is there a way to reliably suppress publishing of this intermediate
> state (which is inconsistent from a business point of view in my
> case)?
> 
> The docs indicate this, but this does not sound like a guarantee ;-):
> 
> -- 8< --
> Not all updates might get sent downstream, as an internal cache is
> used to deduplicate consecutive updates to the same key. The rate of
> propagated updates depends on your input data rate, the number of
> distinct keys, the number of parallel running Kafka Streams instances,
> and the configuration parameters for cache size and commit interval.
> -- 8< --
> 
> Thanks & kind regards
> Thilo
> 



signature.asc
Description: OpenPGP digital signature


Re: [kafka-clients] [VOTE] 1.1.1 RC2

2018-06-29 Thread Jun Rao
Hi, Dong,

Thanks for running the release. Verified quickstart on scala 2.12 binary. +1

Jun

On Thu, Jun 28, 2018 at 6:12 PM, Dong Lin  wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the second candidate for release of Apache Kafka 1.1.1.
>
> Apache Kafka 1.1.1 is a bug-fix release for the 1.1 branch that was first
> released with 1.1.0 about 3 months ago. We have fixed about 25 issues since
> that release. A few of the more significant fixes include:
>
> KAFKA-6925  - Fix
> memory leak in StreamsMetricsThreadImpl
> KAFKA-6937  - In-sync
> replica delayed during fetch if replica throttle is exceeded
> KAFKA-6917  - Process
> txn completion asynchronously to avoid deadlock
> KAFKA-6893  - Create
> processors before starting acceptor to avoid ArithmeticException
> KAFKA-6870  -
> Fix ConcurrentModificationException in SampledStat
> KAFKA-6878  - Fix
> NullPointerException when querying global state store
> KAFKA-6879  - Invoke
> session init callbacks outside lock to avoid Controller deadlock
> KAFKA-6857  - Prevent
> follower from truncating to the wrong offset if undefined leader epoch is
> requested
> KAFKA-6854  - Log
> cleaner fails with transaction markers that are deleted during clean
> KAFKA-6747  - Check
> whether there is in-flight transaction before aborting transaction
> KAFKA-6748  - Double
> check before scheduling a new task after the punctuate call
> KAFKA-6739  -
> Fix IllegalArgumentException when down-converting from V2 to V0/V1
> KAFKA-6728  -
> Fix NullPointerException when instantiating the HeaderConverter
>
> Kafka 1.1.1 release plan:
> https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+1.1.1
>
> Release notes for the 1.1.1 release:
> http://home.apache.org/~lindong/kafka-1.1.1-rc2/RELEASE_NOTES.html
>
> *** Please download, test and vote by Thursday, July 3, 12pm PT ***
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> http://home.apache.org/~lindong/kafka-1.1.1-rc2/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/
>
> * Javadoc:
> http://home.apache.org/~lindong/kafka-1.1.1-rc2/javadoc/
>
> * Tag to be voted upon (off 1.1 branch) is the 1.1.1-rc2 tag:
> https://github.com/apache/kafka/tree/1.1.1-rc2
>
> * Documentation:
> http://kafka.apache.org/11/documentation.html
>
> * Protocol:
> http://kafka.apache.org/11/protocol.html
>
> * Successful Jenkins builds for the 1.1 branch:
> Unit/integration tests: https://builds.apache.org/job/kafka-1.1-jdk7/157/
> System tests: https://jenkins.confluent.io/job/system-test-kafka-branch-builder/1817
>
>
> Please test and verify the release artifacts and submit a vote for this RC,
> or report any issues so we can fix them and get a new RC out ASAP. Although
> this release vote requires PMC votes to pass, testing, votes, and bug
> reports are valuable and appreciated from everyone.
>
>
> Regards,
> Dong
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "kafka-clients" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kafka-clients+unsubscr...@googlegroups.com.
> To post to this group, send email to kafka-clie...@googlegroups.com.
> Visit this group at https://groups.google.com/group/kafka-clients.
> To view this discussion on the web visit https://groups.google.com/d/
> msgid/kafka-clients/CAAaarBb1KsyD_KLuz6V4pfKQiUNQFLb9Lb_eNU%
> 2BsWjd7Vr%2B_%2Bw%40mail.gmail.com
> 
> .
> For more options, visit https://groups.google.com/d/optout.
>


Re: [kafka-clients] [VOTE] 0.11.0.3 RC0

2018-06-29 Thread Jun Rao
Hi, Matthias,

Thanks for running the release. Verified quickstart on scala 2.12 binary. +1

Jun

On Fri, Jun 22, 2018 at 3:14 PM, Matthias J. Sax 
wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the first candidate for release of Apache Kafka 0.11.0.3.
>
> This is a bug fix release closing 27 tickets:
> https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+0.11.0.3
>
> Release notes for the 0.11.0.3 release:
> http://home.apache.org/~mjsax/kafka-0.11.0.3-rc0/RELEASE_NOTES.html
>
> *** Please download, test and vote by Tuesday, 6/26/18 end-of-day, so we
> can close the vote on Wednesday.
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> http://home.apache.org/~mjsax/kafka-0.11.0.3-rc0/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/
>
> * Javadoc:
> http://home.apache.org/~mjsax/kafka-0.11.0.3-rc0/javadoc/
>
> * Tag to be voted upon (off 0.11.0 branch) is the 0.11.0.3 tag:
> https://github.com/apache/kafka/releases/tag/0.11.0.3-rc0
>
> * Documentation:
> http://kafka.apache.org/0110/documentation.html
>
> * Protocol:
> http://kafka.apache.org/0110/protocol.html
>
> * Successful Jenkins builds for the 0.11.0 branch:
> Unit/integration tests: https://builds.apache.org/job/kafka-0.11.0-jdk7/385/
> System tests:
> https://jenkins.confluent.io/job/system-test-kafka/job/0.11.0/217/
>
>
> Thanks,
>   -Matthias
>
> --
> You received this message because you are subscribed to the Google Groups
> "kafka-clients" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kafka-clients+unsubscr...@googlegroups.com.
> To post to this group, send email to kafka-clie...@googlegroups.com.
> Visit this group at https://groups.google.com/group/kafka-clients.
> To view this discussion on the web visit https://groups.google.com/d/
> msgid/kafka-clients/2f54734a-8e8d-3cd7-060b-5f2b3010a20e%40confluent.io.
> For more options, visit https://groups.google.com/d/optout.
>


Re: [kafka-clients] [VOTE] 0.10.2.2 RC1

2018-06-29 Thread Jun Rao
Hi, Matthias,

Thanks for running the release. Verified quickstart on scala 2.12 binary. +1

Jun

On Fri, Jun 22, 2018 at 6:43 PM, Matthias J. Sax 
wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the second candidate for release of Apache Kafka 0.10.2.2.
>
> Note, that RC0 was created before the upgrade to Gradle 4.8.1 and thus,
> we discarded it in favor of RC1 (without sending out an email for RC0).
>
> This is a bug fix release closing 29 tickets:
> https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+0.10.2.2
>
> Release notes for the 0.10.2.2 release:
> http://home.apache.org/~mjsax/kafka-0.10.2.2-rc1/RELEASE_NOTES.html
>
> *** Please download, test and vote by Tuesday, 6/26/18 end-of-day, so we
> can close the vote on Wednesday.
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> http://home.apache.org/~mjsax/kafka-0.10.2.2-rc1/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/
>
> * Javadoc:
> http://home.apache.org/~mjsax/kafka-0.10.2.2-rc1/javadoc/
>
> * Tag to be voted upon (off 0.10.2 branch) is the 0.10.2.2 tag:
> https://github.com/apache/kafka/releases/tag/0.10.2.2-rc1
>
> * Documentation:
> http://kafka.apache.org/0102/documentation.html
>
> * Protocol:
> http://kafka.apache.org/0102/protocol.html
>
> * Successful Jenkins builds for the 0.10.2 branch:
> Unit/integration tests: https://builds.apache.org/job/kafka-0.10.2-jdk7/220/
>
>
> Thanks,
>   -Matthias
>
> --
> You received this message because you are subscribed to the Google Groups
> "kafka-clients" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kafka-clients+unsubscr...@googlegroups.com.
> To post to this group, send email to kafka-clie...@googlegroups.com.
> Visit this group at https://groups.google.com/group/kafka-clients.
> To view this discussion on the web visit https://groups.google.com/d/
> msgid/kafka-clients/8228115e-6913-5c37-06d9-7410a7bd7f69%40confluent.io.
> For more options, visit https://groups.google.com/d/optout.
>


Re: [VOTE] 1.0.2 RC0

2018-06-29 Thread Matthias J. Sax
Thanks for pointing that out, Jason.

I double-checked and there was a bug in the release script. The artifacts
were not built.

I'll roll a new RC.


-Matthias

On 6/29/18 2:01 PM, Jason Gustafson wrote:
> Hey Matthias,
> 
> I don't see the artifacts for scala 2.12. Did they not get uploaded?
> 
> Thanks,
> Jason
> 
> On Fri, Jun 29, 2018 at 12:41 PM, Guozhang Wang  wrote:
> 
>> +1 (binding), checked release note, java doc, and ran unit tests on source
>> tgz.
>>
>>
>> Guozhang
>>
>> On Mon, Jun 25, 2018 at 8:05 PM Manikumar 
>> wrote:
>>
>>> +1 (non-binding) Verified tests, quick start, producer/consumer perf
>> tests.
>>>
>>> On Sat, Jun 23, 2018 at 2:25 AM Ted Yu  wrote:
>>>
 +1

 Ran test suite.

 Checked signatures.

 On Fri, Jun 22, 2018 at 11:42 AM, Vahid S Hashemian <
 vahidhashem...@us.ibm.com> wrote:

> +1 (non-binding)
>
> Built from source and ran quickstart successfully on Ubuntu (with
>> Java
 8).
>
> Thanks for running the release Matthias!
> --Vahid
>
>
>
>
> From:   "Matthias J. Sax" 
> To: d...@kafka.apache.org, users@kafka.apache.org,
> kafka-clie...@googlegroups.com
> Date:   06/22/2018 10:42 AM
> Subject:[VOTE] 1.0.2 RC0
>
>
>
> Hello Kafka users, developers and client-developers,
>
> This is the first candidate for release of Apache Kafka 1.0.2.
>
> This is a bug fix release closing 26 tickets:
> https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+1.0.2
>
> Release notes for the 1.0.2 release:
> http://home.apache.org/~mjsax/kafka-1.0.2-rc0/RELEASE_NOTES.html
>
> *** Please download, test and vote by Tuesday, 6/26/18 end-of-day, so
>>> we
> can close the vote on Wednesday.
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> http://home.apache.org/~mjsax/kafka-1.0.2-rc0/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/
>
> * Javadoc:
> http://home.apache.org/~mjsax/kafka-1.0.2-rc0/javadoc/
>
> * Tag to be voted upon (off 1.0 branch) is the 1.0.2 tag:
> https://github.com/apache/kafka/releases/tag/1.0.2-rc0
>
> * Documentation:
> http://kafka.apache.org/10/documentation.html
>
> * Protocol:
> http://kafka.apache.org/10/protocol.html
>
> * Successful Jenkins builds for the 1.0 branch:
> Unit/integration tests: https://builds.apache.org/job/kafka-1.0-jdk7/211/
> System tests:
> https://jenkins.confluent.io/job/system-test-kafka/job/1.0/217/
>
>
> Thanks,
>   -Matthias
>
> [attachment "signature.asc" deleted by Vahid S Hashemian/Silicon
> Valley/IBM]
>
>
>
>

>>>
>>
>>
>> --
>> -- Guozhang
>>
> 



signature.asc
Description: OpenPGP digital signature


Re: Subscribe to mailing list

2018-06-29 Thread Ted Yu
Please see https://kafka.apache.org/contact

On Fri, Jun 29, 2018 at 10:54 AM, chinchu chinchu 
wrote:

>
>


Re: [VOTE] 0.10.2.2 RC1

2018-06-29 Thread Ted Yu
+1

Ran test suite.

Checked signatures.

On Fri, Jun 29, 2018 at 10:21 AM, Jason Gustafson 
wrote:

> +1 (binding). I checked release notes, documentation, and went through the
> quickstart.
>
> Thanks Matthias!
>
> On Fri, Jun 22, 2018 at 6:43 PM, Matthias J. Sax 
> wrote:
>
> > Hello Kafka users, developers and client-developers,
> >
> > This is the second candidate for release of Apache Kafka 0.10.2.2.
> >
> > Note, that RC0 was created before the upgrade to Gradle 4.8.1 and thus,
> > we discarded it in favor of RC1 (without sending out an email for RC0).
> >
> > This is a bug fix release closing 29 tickets:
> > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+0.10.2.2
> >
> > Release notes for the 0.10.2.2 release:
> > http://home.apache.org/~mjsax/kafka-0.10.2.2-rc1/RELEASE_NOTES.html
> >
> > *** Please download, test and vote by Tuesday, 6/26/18 end-of-day, so we
> > can close the vote on Wednesday.
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > http://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> > http://home.apache.org/~mjsax/kafka-0.10.2.2-rc1/
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/
> >
> > * Javadoc:
> > http://home.apache.org/~mjsax/kafka-0.10.2.2-rc1/javadoc/
> >
> > * Tag to be voted upon (off 0.10.2 branch) is the 0.10.2.2 tag:
> > https://github.com/apache/kafka/releases/tag/0.10.2.2-rc1
> >
> > * Documentation:
> > http://kafka.apache.org/0102/documentation.html
> >
> > * Protocol:
> > http://kafka.apache.org/0102/protocol.html
> >
> > * Successful Jenkins builds for the 0.10.2 branch:
> > Unit/integration tests: https://builds.apache.org/job/kafka-0.10.2-jdk7/220/
> >
> >
> > Thanks,
> >   -Matthias
> >
> >
>


Visible intermediate state for KGroupedTable aggregation

2018-06-29 Thread Thilo-Alexander Ginkel
Hello everyone,

I have implemented a Kafka Streams service using the Streams DSL.
Within the topology I am using a KGroupedTable, on which I perform an
aggregate using an adder and subtractor. AFAICS (at least when using
TopologyTestDriver) the intermediate state created by the subtractor
is pushed downstream as an update followed by another update after the
adder has been called.

Is there a way to reliably suppress publishing of this intermediate
state (which is inconsistent from a business point of view in my
case)?

The docs indicate this, but this does not sound like a guarantee ;-):

-- 8< --
Not all updates might get sent downstream, as an internal cache is
used to deduplicate consecutive updates to the same key. The rate of
propagated updates depends on your input data rate, the number of
distinct keys, the number of parallel running Kafka Streams instances,
and the configuration parameters for cache size and commit interval.
-- 8< --
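[Editor's note] The deduplication rate quoted above is governed by two Streams configs, `cache.max.bytes.buffering` and `commit.interval.ms`. A sketch of tuning them, wrapped in a hypothetical helper (the values shown are the documented defaults at the time: 10 MB cache, 30 s commit interval):

```java
import java.util.Properties;

public class CacheConfig {

    static Properties streamsCacheProps() {
        Properties props = new Properties();
        // A larger cache and a longer commit interval let the cache absorb
        // more consecutive updates per key before flushing, so fewer
        // intermediate records reach downstream. Setting the cache size to
        // 0 disables deduplication and forwards every single update.
        props.put("cache.max.bytes.buffering", 10L * 1024 * 1024); // 10 MB
        props.put("commit.interval.ms", 30000L);                   // 30 s
        return props;
    }

    public static void main(String[] args) {
        System.out.println(streamsCacheProps());
    }
}
```

Tuning these only reduces how often intermediate updates are emitted; as noted, it is not a guarantee that they are suppressed.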

Thanks & kind regards
Thilo


Rebalance group issue

2018-06-29 Thread Arunkumar
Hi All,
I have a cluster of 3 nodes. Everything was good when it started. We deleted
a topic (including its folders on the Kafka brokers and in ZooKeeper),
restarted the brokers, and recreated the topic. Now I see the error below on
2 of the leaders; it keeps appearing every other second in server.log.
I have 1 partition with 3 replicas. Can someone help me understand what is
going on and how to solve this issue? I appreciate your time.

[2018-06-29 13:34:33,122] INFO [GroupCoordinator 9]: Preparing to rebalance group ** with old generation 6030 (__consumer_offsets-29) (kafka.coordinator.group.GroupCoordinator)
[2018-06-29 13:34:33,122] INFO [GroupCoordinator 9]: Stabilized group * generation 6031 (__consumer_offsets-29) (kafka.coordinator.group.GroupCoordinator)
[2018-06-29 13:34:33,123] INFO [GroupCoordinator 9]: Assignment received from leader for group  for generation 6031 (kafka.coordinator.group.GroupCoordinator)
[2018-06-29 13:34:34,238] INFO [GroupCoordinator 9]: Preparing to rebalance group *** with old generation 6031 (__consumer_offsets-29) (kafka.coordinator.group.GroupCoordinator)
[2018-06-29 13:34:34,238] INFO [GroupCoordinator 9]: Group  with generation 6032 is now empty (__consumer_offsets-29) (kafka.coordinator.group.GroupCoordinator)

Thanks,
Arunkumar Pichaimuthu, PMP

Kafka cannot connect to broker

2018-06-29 Thread Coluccio, Stephen
Hey Everyone,

I have been experiencing some problems and have not been able to find
sufficient documentation online to help me figure out the root cause. I have a
3-node Kafka cluster set up on 3 different Windows 2012 servers. The cluster
runs perfectly fine, ingesting data that is being sent via filebeat and
winlogbeat (Elastic log collectors). I then have my Logstash cluster configured
to consume the data from Kafka. But pretty frequently, the data will stop
coming in to my Logstash cluster. When I go to check on Kafka, it looks like
something keeps preventing it from reaching the broker. When I run
".\kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic
winlogbeat" I get the following message.

[2018-06-29 13:48:13,921] WARN [Consumer clientId=consumer-1, groupId=console-consumer-34649] Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2018-06-29 13:48:14,999] WARN [Consumer clientId=consumer-1, groupId=console-consumer-34649] Connection to node -1 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)

The only way to fix it is to stop Kafka, remove all of the __consumer_offsets
files in C:\kafka\var\data, and then start Kafka again.

I see these errors in the Zookeeper logs:
[2018-06-25 04:54:12,305] WARN Interrupted while waiting for message on queue (org.apache.zookeeper.server.quorum.QuorumCnxManager)
java.lang.InterruptedException
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2088)
        at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:418)
        at org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1097)
        at org.apache.zookeeper.server.quorum.QuorumCnxManager.access$700(QuorumCnxManager.java:74)
        at org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:932)
[2018-06-25 04:54:12,305] WARN Send worker leaving thread (org.apache.zookeeper.server.quorum.QuorumCnxManager)
[2018-06-25 04:55:19,336] INFO Received connection request /10.1.17.93:57654 (org.apache.zookeeper.server.quorum.QuorumCnxManager)
[2018-06-25 04:55:23,711] WARN Exception reading or writing challenge: java.net.SocketException: Connection reset (org.apache.zookeeper.server.quorum.QuorumCnxManager)
[2018-06-25 04:55:23,711] INFO Received connection request /10.1.17.93:57655 (org.apache.zookeeper.server.quorum.QuorumCnxManager)
[2018-06-25 04:55:23,711] WARN Exception reading or writing challenge: java.net.SocketException: Connection reset (org.apache.zookeeper.server.quorum.QuorumCnxManager)
[2018-06-25 05:03:02,134] INFO Received connection request /10.1.17.93:59024 (org.apache.zookeeper.server.quorum.QuorumCnxManager)

And I see these errors on Kafka:
log4j:ERROR Failed to rename [/logs/server.log] to [/logs/server.log.2018-06-25-13].
log4j:ERROR Failed to rename [/logs/log-cleaner.log] to [/logs/log-cleaner.log.2018-06-25-13].
log4j:ERROR Failed to rename [/logs/server.log] to [/logs/server.log.2018-06-25-14].
log4j:ERROR Failed to rename [/logs/log-cleaner.log] to [/logs/log-cleaner.log.2018-06-25-14].
log4j:ERROR Failed to rename [/logs/server.log] to [/logs/server.log.2018-06-25-15].

Can anyone help me out?
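[Editor's note] As a general pointer rather than a confirmed diagnosis: "Connection to node -1 could not be established" from the console consumer often traces back to the broker's advertised listener not matching an address the client can actually reach. A hypothetical server.properties fragment to check against (host names here are placeholders):

```properties
# Clients bootstrap via --bootstrap-server, then reconnect using the
# address the broker advertises, so advertised.listeners must resolve
# and be reachable from every client host (including Logstash).
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://broker1.example.com:9092
```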



The information contained in this email message is intended only for use of the 
individual or entity named above. If the reader of this message is not the 
intended recipient, or the employee or agent responsible to deliver it to the 
intended recipient, you are hereby notified that any dissemination, 
distribution or copying of this communication is strictly prohibited. If you 
have received this communication in error, please immediately notify us by 
email, postmas...@weil.com, and destroy the original message. Thank you.


RE: [VOTE] 0.10.2.2 RC1

2018-06-29 Thread Coluccio, Stephen
Sorry everyone, I meant to change the subject of this email but forgot to 
before I sent it. I did not mean to hijack this thread. I apologize for the 
interruption.

-Steve



Re: [VOTE] 0.10.2.2 RC1

2018-06-29 Thread Guozhang Wang
+1 (binding). Verified the Java docs, and ran the unit tests with the source
tgz.


Guozhang

On Fri, Jun 29, 2018 at 10:22 AM Jason Gustafson  wrote:

> +1 (binding). I checked release notes, documentation, and went through the
> quickstart.
>
> Thanks Matthias!
>
> On Fri, Jun 22, 2018 at 6:43 PM, Matthias J. Sax 
> wrote:
>
> > Hello Kafka users, developers and client-developers,
> >
> > This is the second candidate for release of Apache Kafka 0.10.2.2.
> >
> > Note that RC0 was created before the upgrade to Gradle 4.8.1 and thus,
> > we discarded it in favor of RC1 (without sending out an email for RC0).
> >
> > This is a bug fix release closing 29 tickets:
> > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+0.10.2.2
> >
> > Release notes for the 0.10.2.2 release:
> > http://home.apache.org/~mjsax/kafka-0.10.2.2-rc1/RELEASE_NOTES.html
> >
> > *** Please download, test and vote by Tuesday, 6/26/18 end-of-day, so we
> > can close the vote on Wednesday.
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > http://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> > http://home.apache.org/~mjsax/kafka-0.10.2.2-rc1/
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/
> >
> > * Javadoc:
> > http://home.apache.org/~mjsax/kafka-0.10.2.2-rc1/javadoc/
> >
> > * Tag to be voted upon (off 0.10.2 branch) is the 0.10.2.2 tag:
> > https://github.com/apache/kafka/releases/tag/0.10.2.2-rc1
> >
> > * Documentation:
> > http://kafka.apache.org/0102/documentation.html
> >
> > * Protocol:
> > http://kafka.apache.org/0102/protocol.html
> >
> > * Successful Jenkins builds for the 0.10.2 branch:
> > Unit/integration tests: https://builds.apache.org/job/
> > kafka-0.10.2-jdk7/220/
> >
> > /**
> >
> > Thanks,
> >   -Matthias
> >
> >
>


-- 
-- Guozhang


Possible bug? Duplicates when searching kafka stream state store with caching

2018-06-29 Thread Christian Henry
Hi all,

I'll first describe a simplified view of relevant parts of our setup (which
should be enough to repro), describe the behavior we're seeing, and then
note some information I've come across after digging in a bit.

We have a kafka stream application, and one of our transform steps keeps a
state store to filter out messages with a previously seen GUID. That is,
our transform looks like:

public KeyValue<byte[], String> transform(byte[] key, String guid) {
    try (WindowStoreIterator<String> iterator =
             duplicateStore.fetch(guid, start, now)) {
        if (iterator.hasNext()) {
            return null;                      // GUID seen within the window: drop as duplicate
        } else {
            duplicateStore.put(guid, metadata);  // "some metadata" in the original
            return new KeyValue<>(key, guid);
        }
    }
}

where the duplicateStore is a persistent windowed store with caching
enabled.

I was debugging some tests and found that sometimes when calling *all()* or
*fetchAll()* on the duplicate store and stepping through the iterator, it would return
the same guid more than once, even if it was only inserted into the store
once. More specifically, if I had the following guids sent to the stream:
[1, 2, ... 9] (for 9 values total), sometimes it would return
10 values, with one (or more) of the values being returned twice by the
iterator. However, this would not show up with a *fetch(guid)* on that
specific guid. For instance, if 1 was being returned twice by
*fetchAll()*, calling *duplicateStore.fetch("1", start, end)* would
still return an iterator of size 1.

I dug into this a bit more by setting a breakpoint in
*SegmentedCacheFunction#compareSegmentedKeys(cacheKey,
storeKey)* and watching the two input values as I looped through the
iterator using "*while(iterator.hasNext()) { print(iterator.next()) }*". In
one test, the duplicated value was 6, and I saw the following behavior
(trimming off the segment values from the byte input):
-- compareSegmentedKeys(cacheKey = 6, storeKey = 2)
-- next() returns 6
and
-- compareSegmentedKeys(cacheKey = 7, storeKey = 6)
-- next() returns 6
Besides those, the input values are the same and the output is as expected.
Additionally, a coworker noted that the number of duplicates always matches
the number of times *Long.compare(cacheSegmentId, storeSegmentId)* returns
a non-zero value, indicating that duplicates are likely arising due to the
segment comparison.
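To make the suspicion above concrete, here is a toy model in plain Python (not the actual Kafka Streams classes; the function and tuples are hypothetical stand-ins) of a cache/store merge iterator that deduplicates only when the full (segmentId, key) tuples compare equal. When the same GUID sits in different segments on the cache side and the store side, the comparison is non-zero and both copies are emitted:

```python
def merged_iterator(cache_entries, store_entries):
    """Merge two sorted streams of (segment_id, key) entries.

    An entry is emitted once only when the cache tuple and store tuple
    compare exactly equal; any non-zero comparison emits both copies.
    """
    i = j = 0
    while i < len(cache_entries) or j < len(store_entries):
        if j >= len(store_entries):
            yield cache_entries[i][1]; i += 1
        elif i >= len(cache_entries):
            yield store_entries[j][1]; j += 1
        elif cache_entries[i] < store_entries[j]:
            yield cache_entries[i][1]; i += 1
        elif cache_entries[i] > store_entries[j]:
            yield store_entries[j][1]; j += 1
        else:  # exact match: same entry cached and flushed -> emit once
            yield cache_entries[i][1]; i += 1; j += 1

# GUID "6" lives in cache segment 7 but store segment 6, mirroring the
# compareSegmentedKeys(cacheKey = 7, storeKey = 6) observation above:
print(list(merged_iterator([(7, "6")], [(6, "2"), (6, "6")])))
# -> ['2', '6', '6'] -- "6" is emitted twice
```

With matching segment ids the same merge deduplicates correctly, which is consistent with *fetch(guid)* (scoped to one segment range) never showing the duplicate.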


[VOTE] 2.0.0 RC1

2018-06-29 Thread Rajini Sivaram
Hello Kafka users, developers and client-developers,


This is the second candidate for release of Apache Kafka 2.0.0.


This is a major version release of Apache Kafka. It includes 40 new KIPs
and

several critical bug fixes. Please see the 2.0.0 release plan for more
details:

https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=80448820


A few notable highlights:

   - Prefixed wildcard ACLs (KIP-290), fine-grained ACLs for CreateTopics
   (KIP-277)
   - SASL/OAUTHBEARER implementation (KIP-255)
   - Improved quota communication and customization of quotas (KIP-219,
   KIP-257)
   - Efficient memory usage for down conversion (KIP-283)
   - Fix log divergence between leader and follower during fast leader
   failover (KIP-279)
   - Drop support for Java 7 and remove deprecated code including old Scala
   clients
   - Connect REST extension plugin, support for externalizing secrets and
   improved error handling (KIP-285, KIP-297, KIP-298 etc.)
   - Scala API for Kafka Streams and other Streams API improvements
   (KIP-270, KIP-150, KIP-245, KIP-251 etc.)

Release notes for the 2.0.0 release:

http://home.apache.org/~rsivaram/kafka-2.0.0-rc1/RELEASE_NOTES.html



*** Please download, test and vote by Tuesday, July 3rd, 4pm PT


Kafka's KEYS file containing PGP keys we use to sign the release:

http://kafka.apache.org/KEYS


* Release artifacts to be voted upon (source and binary):

http://home.apache.org/~rsivaram/kafka-2.0.0-rc1/


* Maven artifacts to be voted upon:

https://repository.apache.org/content/groups/staging/


* Javadoc:

http://home.apache.org/~rsivaram/kafka-2.0.0-rc1/javadoc/


* Tag to be voted upon (off 2.0 branch) is the 2.0.0 tag:

https://github.com/apache/kafka/tree/2.0.0-rc1


* Documentation:

http://kafka.apache.org/20/documentation.html


* Protocol:

http://kafka.apache.org/20/protocol.html


* Successful Jenkins builds for the 2.0 branch:

Unit/integration tests: https://builds.apache.org/job/kafka-2.0-jdk8/66/

System tests: https://jenkins.confluent.io/job/system-test-kafka/job/2.0/15/



Please test and verify the release artifacts and submit a vote for this RC
or report any issues so that we can fix them and roll out a new RC ASAP!

Although this release vote requires PMC votes to pass, testing, votes, and
bug
reports are valuable and appreciated from everyone.


Thanks,


Rajini


RE: [VOTE] 0.10.2.2 RC1

2018-06-29 Thread Coluccio, Stephen
Hey Everyone,

I have been experiencing some problems and have not been able to find 
sufficient documentation online to help me figure out the root cause. I have a 
3-node Kafka cluster set up on 3 different Windows 2012 servers. The cluster 
runs perfectly fine, ingesting data that is being sent via Filebeat and 
Winlogbeat (Elastic log collectors). I then have my Logstash cluster configured 
to consume the data from Kafka. But pretty frequently, the data will stop 
coming into my Logstash cluster. When I go to check on Kafka, it looks like 
something keeps preventing it from reaching the broker. When I run 
".\kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic 
winlogbeat" I get the following message.

[2018-06-29 13:48:13,921] WARN [Consumer clientId=consumer-1, 
groupId=console-consumer-34649] Connection to node -1 could not be 
established. Broker may not be available. 
(org.apache.kafka.clients.NetworkClient)
[2018-06-29 13:48:14,999] WARN [Consumer clientId=consumer-1, 
groupId=console-consumer-34649] Connection to node -1 could not be 
established. Broker may not be available. 
(org.apache.kafka.clients.NetworkClient)

The only way to fix it is to stop Kafka, remove all of the __consumer_offsets 
files in C:\kafka\var\data, and then start Kafka again.

I see these errors in the Zookeeper logs:
[2018-06-25 04:54:12,305] WARN Interrupted while waiting for message on queue 
(org.apache.zookeeper.server.quorum.QuorumCnxManager)
java.lang.InterruptedException
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2088)
at java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:418)
at 
org.apache.zookeeper.server.quorum.QuorumCnxManager.pollSendQueue(QuorumCnxManager.java:1097)
at 
org.apache.zookeeper.server.quorum.QuorumCnxManager.access$700(QuorumCnxManager.java:74)
at 
org.apache.zookeeper.server.quorum.QuorumCnxManager$SendWorker.run(QuorumCnxManager.java:932)
[2018-06-25 04:54:12,305] WARN Send worker leaving thread 
(org.apache.zookeeper.server.quorum.QuorumCnxManager)
[2018-06-25 04:55:19,336] INFO Received connection request /10.1.17.93:57654 
(org.apache.zookeeper.server.quorum.QuorumCnxManager)
[2018-06-25 04:55:23,711] WARN Exception reading or writing challenge: 
java.net.SocketException: Connection reset 
(org.apache.zookeeper.server.quorum.QuorumCnxManager)
[2018-06-25 04:55:23,711] INFO Received connection request /10.1.17.93:57655 
(org.apache.zookeeper.server.quorum.QuorumCnxManager)
[2018-06-25 04:55:23,711] WARN Exception reading or writing challenge: 
java.net.SocketException: Connection reset 
(org.apache.zookeeper.server.quorum.QuorumCnxManager)
[2018-06-25 05:03:02,134] INFO Received connection request /10.1.17.93:59024 
(org.apache.zookeeper.server.quorum.QuorumCnxManager)

And I see these errors on Kafka:
log4j:ERROR Failed to rename [/logs/server.log] to 
[/logs/server.log.2018-06-25-13].
log4j:ERROR Failed to rename [/logs/log-cleaner.log] to 
[/logs/log-cleaner.log.2018-06-25-13].
log4j:ERROR Failed to rename [/logs/server.log] to 
[/logs/server.log.2018-06-25-14].
log4j:ERROR Failed to rename [/logs/log-cleaner.log] to 
[/logs/log-cleaner.log.2018-06-25-14].
log4j:ERROR Failed to rename [/logs/server.log] to 
[/logs/server.log.2018-06-25-15].

Can anyone help me out?

-Steve





Subscribe to mailing list

2018-06-29 Thread chinchu chinchu



Partitions reassignment is failing in Kafka 1.1.0

2018-06-29 Thread Debraj Manna
Hi

I altered a topic as below in Kafka 1.1.0:

/home/ubuntu/deploy/kafka/bin/kafka-topics.sh --zookeeper 127.0.0.1:2181
--alter --topic Topic3 --config min.insync.replicas=2

But whenever I try to verify the reassignment, it shows the exception
below:

/home/ubuntu/deploy/kafka/bin/kafka-reassign-partitions.sh --zookeeper
127.0.0.1:2181 --reassignment-json-file
/home/ubuntu/deploy/kafka/reassignment.json --verify

Partitions reassignment failed due to Size of replicas list Vector(3,
0, 2) is different from size of log dirs list Vector(any) for
partition Topic3-7
kafka.common.AdminCommandFailedException: Size of replicas list
Vector(3, 0, 2) is different from size of log dirs list Vector(any)
for partition Topic3-7
at 
kafka.admin.ReassignPartitionsCommand$$anonfun$parsePartitionReassignmentData$1$$anonfun$apply$4$$anonfun$apply$5.apply(ReassignPartitionsCommand.scala:262)
at 
kafka.admin.ReassignPartitionsCommand$$anonfun$parsePartitionReassignmentData$1$$anonfun$apply$4$$anonfun$apply$5.apply(ReassignPartitionsCommand.scala:251)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at 
kafka.admin.ReassignPartitionsCommand$$anonfun$parsePartitionReassignmentData$1$$anonfun$apply$4.apply(ReassignPartitionsCommand.scala:251)
at 
kafka.admin.ReassignPartitionsCommand$$anonfun$parsePartitionReassignmentData$1$$anonfun$apply$4.apply(ReassignPartitionsCommand.scala:250)
at scala.collection.immutable.List.foreach(List.scala:392)
at 
kafka.admin.ReassignPartitionsCommand$$anonfun$parsePartitionReassignmentData$1.apply(ReassignPartitionsCommand.scala:250)
at 
kafka.admin.ReassignPartitionsCommand$$anonfun$parsePartitionReassignmentData$1.apply(ReassignPartitionsCommand.scala:249)
at scala.collection.immutable.List.foreach(List.scala:392)
at 
kafka.admin.ReassignPartitionsCommand$.parsePartitionReassignmentData(ReassignPartitionsCommand.scala:249)
at 
kafka.admin.ReassignPartitionsCommand$.verifyAssignment(ReassignPartitionsCommand.scala:90)
at 
kafka.admin.ReassignPartitionsCommand$.verifyAssignment(ReassignPartitionsCommand.scala:84)
at 
kafka.admin.ReassignPartitionsCommand$.main(ReassignPartitionsCommand.scala:58)
at 
kafka.admin.ReassignPartitionsCommand.main(ReassignPartitionsCommand.scala)


My reassignment.json & server.properties are attached. The same thing used to
work fine in Kafka 0.10. Can someone tell me what is going wrong? Has
anything changed related to this in Kafka 1.1.0?


reassignment.json
Description: application/json
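
The error message suggests the verify step in 1.1 parses a `log_dirs` field (added alongside KIP-113) and requires it to contain exactly one entry per replica, whereas the file here apparently yields a single-element list (Vector(any)) against three replicas. A hedged sketch of a well-formed entry for the failing partition (broker ids taken from the error; "any" values illustrative) might look like:

```json
{
  "version": 1,
  "partitions": [
    {
      "topic": "Topic3",
      "partition": 7,
      "replicas": [3, 0, 2],
      "log_dirs": ["any", "any", "any"]
    }
  ]
}
```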


Re: [VOTE] 0.10.2.2 RC1

2018-06-29 Thread Jason Gustafson
+1 (binding). I checked release notes, documentation, and went through the
quickstart.

Thanks Matthias!

On Fri, Jun 22, 2018 at 6:43 PM, Matthias J. Sax 
wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the second candidate for release of Apache Kafka 0.10.2.2.
>
> Note that RC0 was created before the upgrade to Gradle 4.8.1 and thus,
> we discarded it in favor of RC1 (without sending out an email for RC0).
>
> This is a bug fix release closing 29 tickets:
> https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+0.10.2.2
>
> Release notes for the 0.10.2.2 release:
> http://home.apache.org/~mjsax/kafka-0.10.2.2-rc1/RELEASE_NOTES.html
>
> *** Please download, test and vote by Tuesday, 6/26/18 end-of-day, so we
> can close the vote on Wednesday.
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> http://home.apache.org/~mjsax/kafka-0.10.2.2-rc1/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/
>
> * Javadoc:
> http://home.apache.org/~mjsax/kafka-0.10.2.2-rc1/javadoc/
>
> * Tag to be voted upon (off 0.10.2 branch) is the 0.10.2.2 tag:
> https://github.com/apache/kafka/releases/tag/0.10.2.2-rc1
>
> * Documentation:
> http://kafka.apache.org/0102/documentation.html
>
> * Protocol:
> http://kafka.apache.org/0102/protocol.html
>
> * Successful Jenkins builds for the 0.10.2 branch:
> Unit/integration tests: https://builds.apache.org/job/
> kafka-0.10.2-jdk7/220/
>
> /**
>
> Thanks,
>   -Matthias
>
>