The Apache Kafka community is pleased to announce the release of Apache Kafka
1.0.1.

This is a bugfix release for the 1.0 branch, which was first released with 1.0.0 
about 4 months ago. We've fixed 49 issues since that release. Most of these are 
non-critical, but in aggregate these fixes will have a significant impact. A few 
of the more notable fixes include:

* KAFKA-6277: Make loadClass thread-safe for class loaders of Connect plugins
* KAFKA-6185: Selector memory leak with high likelihood of OOM in case of down 
conversion
* KAFKA-6269: KTable state restore fails after rebalance
* KAFKA-6190: GlobalKTable never finishes restoring when consuming 
transactional messages
* KAFKA-6529: Stop file descriptor leak when client disconnects with staged 
receives
* KAFKA-6238: Issues with protocol version when applying a rolling upgrade to 
1.0.0


All of the changes in this release can be found in the release notes:


https://dist.apache.org/repos/dist/release/kafka/1.0.1/RELEASE_NOTES.html



You can download the source release from:


https://www.apache.org/dyn/closer.cgi?path=/kafka/1.0.1/kafka-1.0.1-src.tgz


and binary releases from:


https://www.apache.org/dyn/closer.cgi?path=/kafka/1.0.1/kafka_2.11-1.0.1.tgz
(Scala 2.11)

https://www.apache.org/dyn/closer.cgi?path=/kafka/1.0.1/kafka_2.12-1.0.1.tgz
(Scala 2.12)

---------------------------------------------------------------------------------------------------


Apache Kafka is a distributed streaming platform with four core APIs:


** The Producer API allows an application to publish a stream of records to
one or more Kafka topics (a short producer/consumer sketch follows this list).


** The Consumer API allows an application to subscribe to one or more
topics and process the stream of records produced to them.


** The Streams API allows an application to act as a stream processor,
consuming an input stream from one or more topics and producing an output
stream to one or more output topics, effectively transforming the input
streams to output streams.


** The Connector API allows building and running reusable producers or
consumers that connect Kafka topics to existing applications or data
systems. For example, a connector to a relational database might capture
every change to a table.
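
For readers who are new to Kafka, the following is a minimal sketch of the
Producer and Consumer APIs using the 1.0.1 Java clients. The broker address
("localhost:9092"), topic name ("my-topic"), and group id ("example-group")
are placeholder assumptions, not anything specific to this release.

    // A minimal sketch of the Producer and Consumer APIs (kafka-clients 1.0.1).
    // The broker address, topic name, and group id below are assumed placeholders.
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class ClientSketch {
        public static void main(String[] args) {
            // Producer API: publish a stream of records to a topic.
            Properties p = new Properties();
            p.put("bootstrap.servers", "localhost:9092");
            p.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
            p.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(p)) {
                producer.send(new ProducerRecord<>("my-topic", "key", "hello from 1.0.1"));
            }

            // Consumer API: subscribe to topics and process the records produced to them.
            Properties c = new Properties();
            c.put("bootstrap.servers", "localhost:9092");
            c.put("group.id", "example-group");
            c.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
            c.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(c)) {
                consumer.subscribe(Collections.singletonList("my-topic"));
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records)
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }

Once both clients point at the same broker and topic, records sent by the
producer show up in the consumer's poll loop.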



With these APIs, Kafka can be used for two broad classes of application:


** Building real-time streaming data pipelines that reliably get data
between systems or applications.


** Building real-time streaming applications that transform or react to the
streams of data (a minimal Streams sketch follows below).
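
As an illustration of the second class, here is a minimal Kafka Streams sketch
that continuously transforms an input stream into an output stream. The topic
names ("input-topic", "output-topic") and the application id are placeholder
assumptions.

    // A minimal Kafka Streams sketch: copy records from one topic to another,
    // uppercasing each value. Topic names and application.id are placeholders.
    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStream;

    public class StreamsSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "uppercase-example");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

            // Consume an input stream, transform each record, and produce an output stream.
            StreamsBuilder builder = new StreamsBuilder();
            KStream<String, String> source = builder.stream("input-topic");
            source.mapValues(value -> value.toUpperCase()).to("output-topic");

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();
            Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
        }
    }

A Streams application like this runs as an ordinary Java program against the
same brokers; no separate processing cluster is required.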



Apache Kafka is in use at large and small companies worldwide, including 
Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest, Rabobank, 
Target, The New York Times, Uber, Yelp, and Zalando, among others.



A big thank you to the following 36 contributors to this release!

Alex Good, Andras Beni, Andy Bryant, Arjun Satish, Bill Bejeck, Colin P. 
Mccabe, Colin Patrick McCabe, ConcurrencyPractitioner, Damian Guy, Daniel 
Wojda, Dong Lin, Edoardo Comar, Ewen Cheslack-Postava, Filipe Agapito, fredfp, 
Guozhang Wang, huxihx, Ismael Juma, Jason Gustafson, Jeremy Custenborder, 
Jiangjie (Becket) Qin, Joel Hamill, Konstantine Karantasis, lisa2lisa, Logan 
Buckley, Manjula K, Matthias J. Sax, Nick Chiu, parafiend, Rajini Sivaram, 
Randall Hauch, Robert Yokota, Ron Dagostino, tedyu, Yaswanth Kumar, Yu.


We welcome your help and feedback. For more information on how to report
problems and to get involved, visit the project website at
http://kafka.apache.org/


Thank you!
Ewen
