Re: [VOTE] 2.4.0 RC2

2019-12-01 Thread Ismael Juma
We have a pull request for KAFKA-9156, which fixes a critical regression
introduced in 2.3.0. I think we should include this in the next (and
hopefully final) 2.4.0 RC.

Ismael

On Sat, Nov 30, 2019 at 12:31 PM John Roesler  wrote:

> Hello all,
>
> I hate to do this, but I've been running a soak test of Kafka Streams on
> 2.4 and uncovered a number of small bugs that lead to Streams losing
> threads until there are none left. This condition only arises when using
> EOS in a highly flaky networking environment.
>
> I've included a lot of details in KAFKA-9231, in particular my last
> comment:
>
> https://issues.apache.org/jira/browse/KAFKA-9231?focusedCommentId=16985429&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16985429
>
> Please take a look and see if you agree whether it warrants cutting a new
> RC early next week once the fix is merged.
>
> Thanks,
> -John
>
> On Sat, Nov 30, 2019, at 11:48 AM, Manikumar wrote:
> > Hello Kafka users, developers and client-developers,
> >
> > This is the third candidate for release of Apache Kafka 2.4.0.
> >
> > This release includes many new features, including:
> > - Allow consumers to fetch from the closest replica
> > - Support for incremental cooperative rebalancing in the consumer
> > rebalance protocol
> > - MirrorMaker 2.0 (MM2), a new multi-cluster, cross-datacenter
> > replication engine
> > - New Java authorizer interface
> > - Support for non-key joining in KTable
> > - Administrative API for replica reassignment
> > - Sticky partitioner
> > - Return topic metadata and configs in CreateTopics response
> > - Securing internal Connect REST endpoints
> > - An API to delete consumer offsets, exposed via the AdminClient
> >
> > Release notes for the 2.4.0 release:
> > https://home.apache.org/~manikumar/kafka-2.4.0-rc2/RELEASE_NOTES.html
> >
> > *** Please download, test and vote by Thursday, December 5th, 9am PT
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > https://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> > https://home.apache.org/~manikumar/kafka-2.4.0-rc2/
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> >
> > * Javadoc:
> > https://home.apache.org/~manikumar/kafka-2.4.0-rc2/javadoc/
> >
> > * Tag to be voted upon (off the 2.4 branch) is the 2.4.0-rc2 tag:
> > https://github.com/apache/kafka/releases/tag/2.4.0-rc2
> >
> > * Documentation:
> > https://kafka.apache.org/24/documentation.html
> >
> > * Protocol:
> > https://kafka.apache.org/24/protocol.html
> >
> > Thanks,
> > Manikumar
> >
>
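Of the 2.4.0 features listed in the RC announcement above, the sticky partitioner (KIP-480) changes how keyless records are spread across partitions. The following is a rough model of the behavior, not the actual partitioner implementation; the class and method names are made up for illustration:

```java
import java.util.concurrent.ThreadLocalRandom;

public class StickyPartitionerSketch {
    private final int numPartitions;
    private int stickyPartition = -1;

    public StickyPartitionerSketch(int numPartitions) {
        this.numPartitions = numPartitions;
    }

    // Keyless records all go to one "sticky" partition until the current
    // batch is sent, instead of round-robining record by record.
    public int partition() {
        if (stickyPartition < 0) {
            stickyPartition = ThreadLocalRandom.current().nextInt(numPartitions);
        }
        return stickyPartition;
    }

    // Called when the current batch is sent: pick a fresh sticky partition,
    // avoiding the one just used so load still spreads over time.
    public void onNewBatch() {
        int next;
        do {
            next = ThreadLocalRandom.current().nextInt(numPartitions);
        } while (numPartitions > 1 && next == stickyPartition);
        stickyPartition = next;
    }
}
```

Sticking to one partition until a batch is sent keeps batches fuller, which reduces request count and latency compared with spreading each keyless record round-robin.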


[jira] [Resolved] (KAFKA-9213) BufferOverflowException on rolling new segment after upgrading Kafka from 1.1.0 to 2.3.1

2019-12-01 Thread Ismael Juma (Jira)


 [ https://issues.apache.org/jira/browse/KAFKA-9213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ismael Juma resolved KAFKA-9213.

Resolution: Duplicate

Duplicate of KAFKA-9156.

> BufferOverflowException on rolling new segment after upgrading Kafka from 
> 1.1.0 to 2.3.1
> 
>
> Key: KAFKA-9213
> URL: https://issues.apache.org/jira/browse/KAFKA-9213
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Affects Versions: 2.3.1
> Environment: Ubuntu 16.04, AWS instance d2.8xlarge.
> JAVA Options:
> -Xms16G 
> -Xmx16G 
> -XX:G1HeapRegionSize=16M 
> -XX:MetaspaceSize=96m 
> -XX:MinMetaspaceFreeRatio=50 
>Reporter: Daniyar
>Priority: Blocker
>
> We updated our Kafka cluster from version 1.1.0 to 2.3.1, following the
> [upgrade instructions|https://kafka.apache.org/documentation/#upgrade] up to
> step 2.
> Message format and inter-broker protocol versions were left the same:
> inter.broker.protocol.version=1.1
> log.message.format.version=1.1
>  
> After upgrading, we started to get some occasional exceptions:
> {code:java}
> 2019/11/19 05:30:53 INFO [ProducerStateManager partition=matchmaker_retry_clicks_15m-2] Writing producer snapshot at offset 788532 (kafka.log.ProducerStateManager)
> 2019/11/19 05:30:53 INFO [Log partition=matchmaker_retry_clicks_15m-2, dir=/mnt/kafka] Rolled new log segment at offset 788532 in 1 ms. (kafka.log.Log)
> 2019/11/19 05:31:01 ERROR [ReplicaManager broker=0] Error processing append operation on partition matchmaker_retry_clicks_15m-2 (kafka.server.ReplicaManager)
> 2019/11/19 05:31:01 java.nio.BufferOverflowException
> 2019/11/19 05:31:01 at java.nio.Buffer.nextPutIndex(Buffer.java:527)
> 2019/11/19 05:31:01 at java.nio.DirectByteBuffer.putLong(DirectByteBuffer.java:797)
> 2019/11/19 05:31:01 at kafka.log.TimeIndex.$anonfun$maybeAppend$1(TimeIndex.scala:134)
> 2019/11/19 05:31:01 at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
> 2019/11/19 05:31:01 at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:253)
> 2019/11/19 05:31:01 at kafka.log.TimeIndex.maybeAppend(TimeIndex.scala:114)
> 2019/11/19 05:31:01 at kafka.log.LogSegment.onBecomeInactiveSegment(LogSegment.scala:520)
> 2019/11/19 05:31:01 at kafka.log.Log.$anonfun$roll$8(Log.scala:1690)
> 2019/11/19 05:31:01 at kafka.log.Log.$anonfun$roll$8$adapted(Log.scala:1690)
> 2019/11/19 05:31:01 at scala.Option.foreach(Option.scala:407)
> 2019/11/19 05:31:01 at kafka.log.Log.$anonfun$roll$2(Log.scala:1690)
> 2019/11/19 05:31:01 at kafka.log.Log.maybeHandleIOException(Log.scala:2085)
> 2019/11/19 05:31:01 at kafka.log.Log.roll(Log.scala:1654)
> 2019/11/19 05:31:01 at kafka.log.Log.maybeRoll(Log.scala:1639)
> 2019/11/19 05:31:01 at kafka.log.Log.$anonfun$append$2(Log.scala:966)
> 2019/11/19 05:31:01 at kafka.log.Log.maybeHandleIOException(Log.scala:2085)
> 2019/11/19 05:31:01 at kafka.log.Log.append(Log.scala:850)
> 2019/11/19 05:31:01 at kafka.log.Log.appendAsLeader(Log.scala:819)
> 2019/11/19 05:31:01 at kafka.cluster.Partition.$anonfun$appendRecordsToLeader$1(Partition.scala:772)
> 2019/11/19 05:31:01 at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:253)
> 2019/11/19 05:31:01 at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:259)
> 2019/11/19 05:31:01 at kafka.cluster.Partition.appendRecordsToLeader(Partition.scala:759)
> 2019/11/19 05:31:01 at kafka.server.ReplicaManager.$anonfun$appendToLocalLog$2(ReplicaManager.scala:763)
> 2019/11/19 05:31:01 at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
> 2019/11/19 05:31:01 at scala.collection.mutable.HashMap.$anonfun$foreach$1(HashMap.scala:149)
> 2019/11/19 05:31:01 at scala.collection.mutable.HashTable.foreachEntry(HashTable.scala:237)
> 2019/11/19 05:31:01 at scala.collection.mutable.HashTable.foreachEntry$(HashTable.scala:230)
> 2019/11/19 05:31:01 at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:44)
> 2019/11/19 05:31:01 at scala.collection.mutable.HashMap.foreach(HashMap.scala:149)
> 2019/11/19 05:31:01 at scala.collection.TraversableLike.map(TraversableLike.scala:238)
> 2019/11/19 05:31:01 at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
> 2019/11/19 05:31:01 at scala.collection.AbstractTraversable.map(Traversable.scala:108)
> 2019/11/19 05:31:01 at kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:751)
> 2019/11/19 05:31:01 at kafka.server.ReplicaManager.appendRecords(ReplicaManager.scala:492)
> 2019/11/19 05:31:01 at kafka.server.KafkaApis.handleProduceRequest(KafkaApis.scala:544)
> 2019/11/19 05:31:01 at kafka.server
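The stack trace above ends in DirectByteBuffer.putLong, i.e. TimeIndex.maybeAppend tried to write an 8-byte timestamp into an index buffer with no room left. A minimal, standalone illustration of that JDK failure mode (the buffer size and values here are illustrative, not Kafka's actual index sizing):

```java
import java.nio.BufferOverflowException;
import java.nio.ByteBuffer;

public class TimeIndexOverflowDemo {
    public static void main(String[] args) {
        // A time-index entry is 12 bytes: an 8-byte timestamp plus a
        // 4-byte relative offset. Allocate room for exactly one entry.
        ByteBuffer index = ByteBuffer.allocateDirect(12);

        index.putLong(1574141453000L); // timestamp of the first entry
        index.putInt(788532);          // relative offset; buffer is now full

        try {
            // A second append has no room for its 8-byte timestamp,
            // so the JDK throws before writing anything.
            index.putLong(1574141461000L);
        } catch (BufferOverflowException e) {
            System.out.println("java.nio.BufferOverflowException, as in KAFKA-9213");
        }
    }
}
```

KAFKA-9156 tracks the underlying regression that leaves the index without room; this snippet reproduces only the final symptom.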

Re: [kafka-clients] [ANNOUNCE] Apache Kafka 2.2.2

2019-12-01 Thread Vahid Hashemian
Awesome. Thanks for managing this release Randall!

Regards,
--Vahid

On Sun, Dec 1, 2019 at 5:45 PM Randall Hauch  wrote:

> The Apache Kafka community is pleased to announce the release for Apache
> Kafka 2.2.2
>
> This is a bugfix release for Apache Kafka 2.2.
> All of the changes in this release can be found in the release notes:
> https://www.apache.org/dist/kafka/2.2.2/RELEASE_NOTES.html
>
> You can download the source and binary release from:
> https://kafka.apache.org/downloads#2.2.2
>
>
> ---
>
>
> Apache Kafka is a distributed streaming platform with four core APIs:
>
>
> ** The Producer API allows an application to publish a stream of records to
> one or more Kafka topics.
>
> ** The Consumer API allows an application to subscribe to one or more
> topics and process the stream of records produced to them.
>
> ** The Streams API allows an application to act as a stream processor,
> consuming an input stream from one or more topics and producing an
> output stream to one or more output topics, effectively transforming the
> input streams to output streams.
>
> ** The Connector API allows building and running reusable producers or
> consumers that connect Kafka topics to existing applications or data
> systems. For example, a connector to a relational database might
> capture every change to a table.
>
>
> With these APIs, Kafka can be used for two broad classes of application:
>
> ** Building real-time streaming data pipelines that reliably get data
> between systems or applications.
>
> ** Building real-time streaming applications that transform or react
> to the streams of data.
>
>
> Apache Kafka is in use at large and small companies worldwide, including
> Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest, Rabobank,
> Target, The New York Times, Uber, Yelp, and Zalando, among others.
>
> A big thank you to the following 41 contributors to this release!
>
> A. Sophie Blee-Goldman, Matthias J. Sax, Bill Bejeck, Jason Gustafson,
> Chris Egerton, Boyang Chen, Alex Diachenko, cpettitt-confluent, Magesh
> Nandakumar, Randall Hauch, Ismael Juma, John Roesler, Konstantine
> Karantasis, Mickael Maison, Nacho Muñoz Gómez, Nigel Liang, Paul, Rajini
> Sivaram, Robert Yokota, Stanislav Kozlovski, Vahid Hashemian, Victoria
> Bialas, cadonna, cwildman, mjarvie, sdreynolds, slim, vinoth chandar,
> wenhoujx, Arjun Satish, Chia-Ping Tsai, Colin P. Mccabe, David Arthur,
> Dhruvil Shah, Greg Harris, Gunnar Morling, Hai-Dang Dam, Lifei Chen, Lucas
> Bradstreet, Manikumar Reddy, Michał Borowiecki
>
> We welcome your help and feedback. For more information on how to
> report problems, and to get involved, visit the project website at
> https://kafka.apache.org/
>
> Thank you!
>
>
> Regards,
> Randall Hauch
>
> --
> You received this message because you are subscribed to the Google Groups
> "kafka-clients" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kafka-clients+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/kafka-clients/CALYgK0EsNFakX7F0FDkXvMNmUe8g8w-GNRM7EJjD9CJLK7sn0A%40mail.gmail.com
> 
> .
>


-- 

Thanks!
--Vahid


[ANNOUNCE] Apache Kafka 2.2.2

2019-12-01 Thread Randall Hauch
The Apache Kafka community is pleased to announce the release for Apache
Kafka 2.2.2

This is a bugfix release for Apache Kafka 2.2.
All of the changes in this release can be found in the release notes:
https://www.apache.org/dist/kafka/2.2.2/RELEASE_NOTES.html

You can download the source and binary release from:
https://kafka.apache.org/downloads#2.2.2

---


Apache Kafka is a distributed streaming platform with four core APIs:


** The Producer API allows an application to publish a stream of records to
one or more Kafka topics.

** The Consumer API allows an application to subscribe to one or more
topics and process the stream of records produced to them.

** The Streams API allows an application to act as a stream processor,
consuming an input stream from one or more topics and producing an
output stream to one or more output topics, effectively transforming the
input streams to output streams.

** The Connector API allows building and running reusable producers or
consumers that connect Kafka topics to existing applications or data
systems. For example, a connector to a relational database might
capture every change to a table.


With these APIs, Kafka can be used for two broad classes of application:

** Building real-time streaming data pipelines that reliably get data
between systems or applications.

** Building real-time streaming applications that transform or react
to the streams of data.


Apache Kafka is in use at large and small companies worldwide, including
Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest, Rabobank,
Target, The New York Times, Uber, Yelp, and Zalando, among others.

A big thank you to the following 41 contributors to this release!

A. Sophie Blee-Goldman, Matthias J. Sax, Bill Bejeck, Jason Gustafson,
Chris Egerton, Boyang Chen, Alex Diachenko, cpettitt-confluent, Magesh
Nandakumar, Randall Hauch, Ismael Juma, John Roesler, Konstantine
Karantasis, Mickael Maison, Nacho Muñoz Gómez, Nigel Liang, Paul, Rajini
Sivaram, Robert Yokota, Stanislav Kozlovski, Vahid Hashemian, Victoria
Bialas, cadonna, cwildman, mjarvie, sdreynolds, slim, vinoth chandar,
wenhoujx, Arjun Satish, Chia-Ping Tsai, Colin P. Mccabe, David Arthur,
Dhruvil Shah, Greg Harris, Gunnar Morling, Hai-Dang Dam, Lifei Chen, Lucas
Bradstreet, Manikumar Reddy, Michał Borowiecki

We welcome your help and feedback. For more information on how to
report problems, and to get involved, visit the project website at
https://kafka.apache.org/

Thank you!


Regards,
Randall Hauch


Jenkins build is back to normal : kafka-2.2-jdk8 #9

2019-12-01 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-9255) MessageSet v1 protocol wrong specification

2019-12-01 Thread Jira
Fábio Silva created KAFKA-9255:
--

 Summary: MessageSet v1 protocol wrong specification
 Key: KAFKA-9255
 URL: https://issues.apache.org/jira/browse/KAFKA-9255
 Project: Kafka
  Issue Type: Bug
  Components: documentation
Reporter: Fábio Silva


The documentation contains a BNF specification that is missing the timestamp 
field in the 'message' entry.
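For context, magic byte 1 (message format v1, introduced in 0.10) added an 8-byte timestamp between the attributes byte and the key. A minimal sketch of that layout, written against the protocol guide rather than Kafka's own serialization code (the CRC is left as a placeholder, and the class is made up for illustration):

```java
import java.nio.ByteBuffer;

public class MessageV1Layout {
    // Sketch of the v1 message fields the BNF should list; the 8-byte
    // timestamp is the field the report says is missing.
    public static ByteBuffer write(long timestamp, byte[] key, byte[] value) {
        ByteBuffer buf = ByteBuffer.allocate(
                4 + 1 + 1 + 8 + 4 + key.length + 4 + value.length);
        buf.putInt(0);            // crc placeholder (computed over the rest)
        buf.put((byte) 1);        // magic = 1 (message format v1)
        buf.put((byte) 0);        // attributes (compression, timestamp type)
        buf.putLong(timestamp);   // timestamp, absent from v0
        buf.putInt(key.length);   // key length
        buf.put(key);
        buf.putInt(value.length); // value length
        buf.put(value);
        buf.flip();
        return buf;
    }
}
```

In the on-the-wire MessageSet, each such message is additionally preceded by an 8-byte offset and a 4-byte size, which are not shown here.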



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: kafka-2.2-jdk8-old #190

2019-12-01 Thread Apache Jenkins Server
See 


Changes:

[rhauch] Bump version to 2.2.2

[rhauch] Update versions to 2.2.3-SNAPSHOT


--
Started by an SCM change
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on H25 (ubuntu) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress -- https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/2.2^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/2.2^{commit} # timeout=10
Checking out Revision 61c8228f314794228e9b2a74b6034c56a5e3836d 
(refs/remotes/origin/2.2)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 61c8228f314794228e9b2a74b6034c56a5e3836d
Commit message: "Update versions to 2.2.3-SNAPSHOT"
 > git rev-list --no-walk a9124097e1719b55b204a6f8ef8911d06aabfeee # timeout=10
ERROR: No tool found matching GRADLE_4_8_1_HOME
[kafka-2.2-jdk8-old] $ /bin/bash -xe /tmp/jenkins6344170544137095223.sh
+ rm -rf 
+ /bin/gradle
/tmp/jenkins6344170544137095223.sh: line 4: /bin/gradle: No such file or 
directory
Build step 'Execute shell' marked build as failure
[FINDBUGS] Collecting findbugs analysis files...
ERROR: No tool found matching GRADLE_4_8_1_HOME
[FINDBUGS] Searching for all files in 
 that match the pattern 
**/build/reports/findbugs/*.xml
[FINDBUGS] No files found. Configuration error?
ERROR: No tool found matching GRADLE_4_8_1_HOME
No credentials specified
ERROR: No tool found matching GRADLE_4_8_1_HOME
 Using GitBlamer to create author and commit information for all 
warnings.
 GIT_COMMIT=61c8228f314794228e9b2a74b6034c56a5e3836d, 
workspace=
[FINDBUGS] Computing warning deltas based on reference build #175
Recording test results
ERROR: No tool found matching GRADLE_4_8_1_HOME
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
ERROR: No tool found matching GRADLE_4_8_1_HOME
Not sending mail to unregistered user wangg...@gmail.com
Not sending mail to unregistered user ism...@juma.me.uk
Not sending mail to unregistered user b...@confluent.io


Re: [Help] Request contributor permissions

2019-12-01 Thread Manikumar
Hi,

Thanks for your interest. I have just added you to the contributors list.

Thanks.

On Sun, Dec 1, 2019 at 11:31 AM Kun Song  wrote:

> Hi Kafka committers, I want to be added to the contributors list, my JIRA
> ID is songkun, thank you :)
>


Re: Requesting to be added as a contributor in JIRA

2019-12-01 Thread Manikumar
Hi,

Thanks for your interest. I have just added you to the contributors list.

Thanks.

On Sun, Dec 1, 2019 at 7:58 PM Balaji Jayasankar <
balaji.jayasan...@gmail.com> wrote:

> Hello,
>
> I would like to be added as a contributor on the JIRA.
> Username: balaji_97_
>
> Regards,
> Balaji
>


Requesting to be added as a contributor in JIRA

2019-12-01 Thread Balaji Jayasankar
Hello,

I would like to be added as a contributor on the JIRA.
Username: balaji_97_

Regards,
Balaji