[ANNOUNCE] Apache Kafka 3.0.0

2021-09-21 Thread Konstantine Karantasis
The Apache Kafka community is pleased to announce the release of Apache
Kafka 3.0.0.

It is a major release with many new features, including:

* The deprecation of support for Java 8 and Scala 2.12.
* Kafka Raft support for snapshots of the metadata topic and other
improvements in the self-managed quorum.
* Deprecation of message formats v0 and v1.
* Stronger delivery guarantees for the Kafka producer enabled by default.
* Optimizations in OffsetFetch and FindCoordinator requests.
* More flexible MirrorMaker 2 configuration and deprecation of MirrorMaker
1.
* Ability to restart a connector's tasks on a single call in Kafka Connect.
* Connector log contexts and connector client overrides are now enabled by
default.
* Enhanced semantics for timestamp synchronization in Kafka Streams.
* Revamped public API for Streams' TaskId.
* The default serde becomes null in Kafka Streams, along with several other
configuration changes.
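Among the features above, the stronger delivery guarantees refer to the producer now defaulting to idempotent writes acknowledged by all in-sync replicas (previously opt-in). A minimal sketch of the equivalent explicit configuration — the property keys are the standard producer config names, while the wrapper class is purely illustrative:

```java
import java.util.Properties;

public class ProducerDefaults {
    // Build the producer settings that 3.0.0 now applies by default;
    // shown explicitly here only to make the new behavior visible.
    public static Properties kafka30ProducerDefaults() {
        Properties props = new Properties();
        props.setProperty("enable.idempotence", "true"); // de-duplicated, in-order writes
        props.setProperty("acks", "all");                // wait for all in-sync replicas
        return props;
    }

    public static void main(String[] args) {
        System.out.println(kafka30ProducerDefaults().getProperty("acks")); // prints "all"
    }
}
```

These properties would be passed unchanged to a KafkaProducer constructor; applications that relied on the old defaults may need to set the previous values back explicitly.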

You may read a more detailed list of features in the 3.0.0 blog post:
https://blogs.apache.org/kafka/

All of the changes in this release can be found in the release notes:
https://downloads.apache.org/kafka/3.0.0/RELEASE_NOTES.html

You can download the source and binary release (Scala 2.12 and 2.13) from:
https://kafka.apache.org/downloads#3.0.0

---


Apache Kafka is a distributed streaming platform with four core APIs:


** The Producer API allows an application to publish a stream of records to
one or more Kafka topics.

** The Consumer API allows an application to subscribe to one or more
topics and process the stream of records produced to them.

** The Streams API allows an application to act as a stream processor,
consuming an input stream from one or more topics and producing an
output stream to one or more output topics, effectively transforming the
input streams to output streams.

** The Connector API allows building and running reusable producers or
consumers that connect Kafka topics to existing applications or data
systems. For example, a connector to a relational database might
capture every change to a table.
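Connectors built on the Connector API are managed through Kafka Connect's REST interface, and the single-call restart of a connector together with its tasks (new in 3.0) is exposed as query parameters on the existing restart endpoint. A sketch using only the JDK's HttpClient types — the worker URL and connector name are hypothetical:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class ConnectRestart {
    // Build a POST that asks a Connect worker to restart a connector
    // and all of its tasks in one call (includeTasks=true); onlyFailed
    // limits the restart to failed instances when set to true.
    public static HttpRequest buildRestartRequest(String workerUrl, String connector) {
        URI uri = URI.create(workerUrl + "/connectors/" + connector
                + "/restart?includeTasks=true&onlyFailed=false");
        return HttpRequest.newBuilder(uri)
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = buildRestartRequest("http://localhost:8083", "my-connector");
        System.out.println(req.method() + " " + req.uri());
    }
}
```

Sending the request with java.net.http.HttpClient against a running worker would trigger the restart; only the request construction is shown here.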


With these APIs, Kafka can be used for two broad classes of application:

** Building real-time streaming data pipelines that reliably get data
between systems or applications.

** Building real-time streaming applications that transform or react
to the streams of data.


Apache Kafka is in use at large and small companies worldwide, including
Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest, Rabobank,
Target, The New York Times, Uber, Yelp, and Zalando, among others.

A big thank you to the following 141 authors and reviewers for their contributions to this release!

A. Sophie Blee-Goldman, Adil Houmadi, Akhilesh Dubey, Alec Thomas,
Alexander Iskuskov, Almog Gavra, Alok Nikhil, Alok Thatikunta, Andrew Lee,
Bill Bejeck, Boyang Chen, Bruno Cadonna, CHUN-HAO TANG, Cao Manh Dat, Cheng
Tan, Chia-Ping Tsai, Chris Egerton, Colin P. McCabe, Cong Ding, Daniel
Urban, Daniyar Yeralin, David Arthur, David Christle, David Jacot, David
Mao, David Osvath, Davor Poldrugo, Dejan Stojadinović, Dhruvil Shah, Diego
Erdody, Dong Lin, Dongjoon Hyun, Dániel Urbán, Edoardo Comar, Edwin Hobor,
Eric Beaudet, Ewen Cheslack-Postava, Gardner Vickers, Gasparina Damien,
Geordie, Greg Harris, Gunnar Morling, Guozhang Wang, Gwen (Chen) Shapira,
Ignacio Acuña Frías, Igor Soarez, Ismael Juma, Israel Ekpo, Ivan Ponomarev,
Ivan Yurchenko, Jason Gustafson, Jeff Kim, Jim Galasyn, Jim Hurne, JoelWee,
John Gray, John Roesler, Jorge Esteban Quilcate Otoya, Josep Prat, José
Armando García Sancio, Juan Gonzalez-Zurita, Jun Rao, Justin Mclean,
Justine Olshan, Kahn Cheny, Kalpesh Patel, Kamal Chandraprakash,
Konstantine Karantasis, Kowshik Prakasam, Leah Thomas, Lee Dongjin, Lev
Zemlyanov, Liu Qiang, Lucas Bradstreet, Luke Chen, Manikumar Reddy, Marco
Aurelio Lotz, Matthew de Detrich, Matthias J. Sax, Michael G. Noll, Michael
Noll, Mickael Maison, Nathan Lincoln, Niket Goel, Nikhil Bhatia, Omnia G H
Ibrahim, Peng Lei, Phil Hardwick, Rajini Sivaram, Randall Hauch, Rohan
Desai, Rohit Deshpande, Rohit Sachan, Ron Dagostino, Ryan Dielhenn, Ryanne
Dolan, Sanjana Kaundinya, Sarwar Bhuiyan, Satish Duggana, Scott Hendricks,
Sergio Peña, Shao Yang Hong, Shay Elkin, Stanislav Vodetskyi, Sven Erik
Knop, Tom Bentley, UnityLung, Uwe Eisele, Vahid Hashemian, Valery Kokorev,
Victoria Xia, Viktor Somogyi-Vass, Viswanathan Ranganathan, Vito Jeng,
Walker Carlson, Warren Zhu, Xavier Léauté, YiDing-Duke, Zara Lim, Zhao
Haiyuan, bmaidics, cyc, dengziming, feyman2016, high.lee, iamgd67,
iczellion, ketulgupta1995, lamberken, loboya~, nicolasguyomar,
prince-mahajan, runom, shenwenbing, thomaskwscott, tinawenqiao,
vamossagar12, wenbingshen, wycc, xjin-Confluent, zhaohaidao

We welcome your help and feedback. For more information on how to
report problems, and to get involved, visit the project website at
https://kafka.apache.org/

Re: [VOTE] 3.0.0 RC2

2021-09-15 Thread Konstantine Karantasis
I'm also +1 (binding) given that:

* I ran the release and generated RC2.
* Verified all checksums and signatures.
* Built and installed 3.0.0 RC2 from the source archive and the git tag.
* Spotchecked the Javadocs of RC2.
* Went through the documentation of 3.0.0 after RC2.
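Checksum verification of the kind listed above amounts to recomputing each artifact's SHA-512 digest and comparing it against the published .sha512 file. A minimal stdlib sketch — the temp file stands in for a real artifact such as kafka_2.13-3.0.0.tgz:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;

public class VerifyChecksum {
    // Compute a file's SHA-512 digest as lowercase hex, the same form
    // published in the release's .sha512 files.
    public static String sha512Hex(Path file) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-512").digest(Files.readAllBytes(file));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) hex.append(String.format("%02x", b));
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        Path f = Files.createTempFile("artifact", ".tgz");
        Files.writeString(f, "example");
        // SHA-512 is 64 bytes, so the hex form is always 128 characters.
        System.out.println(sha512Hex(f).length()); // prints 128
    }
}
```

Signature verification is separate: it uses the PGP keys from the project's KEYS file (e.g. gpg --verify on the .asc files).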

I can confirm the minor differences in the documentation mentioned by Bill
and Randall, and I agree that these can be addressed directly in the docs repo
and added to the source code as a follow-up in the 3.0 branch without
requiring a new RC. I'll do that while promoting RC2 as the official 3.0.0
release.

Konstantine

On Wed, Sep 15, 2021 at 12:34 AM Israel Ekpo  wrote:

> Hi Konstantine,
>
> Thanks for running the release
>
> I ran the following checks:
>
>- PGP Signatures used to sign the release artifacts
>- Validation of Release Artifacts Cryptographic Hashes (ASC MD5 SHA1
>SHA512)
>- Validation of Kafka Source and Tests
>- Validation of Kafka Site Documentation
>- Manual Check of Javadocs
>- Validation of Cluster Setup for KRaft and Legacy Modes
>
> *+1 from me (non-binding)*
>
> To encourage other community members to participate in the release
> candidate validations and voting, I have set up the following resource as
> part of the work for KAFKA-9861
>
> It is a set of scripts and Docker images that allows community members to
> run local validations in a consistent manner.
>
> https://github.com/izzyacademy/apache-kafka-release-party
>
> Please take a look at the resource and share any feedback that you may
> have.
>
> I plan to create a video tutorial soon that walks community members
> through how it can be used. Stay tuned.
>
>
>
>
> On Wed, Sep 8, 2021 at 5:59 PM Konstantine Karantasis <
> kkaranta...@apache.org> wrote:
>
> > Hello again Kafka users, developers and client-developers,
> >
> > This is the third candidate for release of Apache Kafka 3.0.0.
> > It is a major release that includes many new features, including:
> >
> > * The deprecation of support for Java 8 and Scala 2.12.
> > * Kafka Raft support for snapshots of the metadata topic and other
> > improvements in the self-managed quorum.
> > * Deprecation of message formats v0 and v1.
> > * Stronger delivery guarantees for the Kafka producer enabled by default.
> > * Optimizations in OffsetFetch and FindCoordinator requests.
> > * More flexible Mirror Maker 2 configuration and deprecation of Mirror
> > Maker 1.
> > * Ability to restart a connector's tasks on a single call in Kafka
> Connect.
> > * Connector log contexts and connector client overrides are now enabled
> by
> > default.
> > * Enhanced semantics for timestamp synchronization in Kafka Streams.
> > * Revamped public API for Stream's TaskId.
> > * Default serde becomes null in Kafka Streams and several other
> > configuration changes.
> >
> > You may read and review a more detailed list of changes in the 3.0.0 blog
> > post draft here:
> >
> https://blogs.apache.org/preview/kafka/?previewEntry=what-s-new-in-apache6
> >
> > Release notes for the 3.0.0 release:
> > https://home.apache.org/~kkarantasis/kafka-3.0.0-rc2/RELEASE_NOTES.html
> >
> > *** Please download, test and vote by Tuesday, September 14, 2021 ***
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > https://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> > https://home.apache.org/~kkarantasis/kafka-3.0.0-rc2/
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> >
> > * Javadoc:
> > https://home.apache.org/~kkarantasis/kafka-3.0.0-rc2/javadoc/
> >
> > * Tag to be voted upon (off 3.0 branch) is the 3.0.0 tag:
> > https://github.com/apache/kafka/releases/tag/3.0.0-rc2
> >
> > * Documentation:
> > https://kafka.apache.org/30/documentation.html
> >
> > * Protocol:
> > https://kafka.apache.org/30/protocol.html
> >
> > * Successful Jenkins builds for the 3.0 branch:
> > Unit/integration tests:
> >
> >
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka/detail/3.0/129/
> > (1 flaky test failure)
> > System tests:
> > https://jenkins.confluent.io/job/system-test-kafka/job/3.0/67/
> > (1 flaky test failure)
> >
> > /**
> >
> > Thanks,
> > Konstantine
> >
>


[RESULTS] [VOTE] Release Kafka version 3.0.0

2021-09-15 Thread Konstantine Karantasis
This vote passes with 6 +1 votes (4 of which are binding) and no 0 or
-1 votes.

+1 votes
PMC Members:
* Bill Bejeck
* Colin McCabe
* Randall Hauch
* Konstantine Karantasis

Committers:
* No votes from additional committers

Community:
* Israel Ekpo
* Igor Soarez

0 votes
* No votes

-1 votes
* No votes

Vote thread:

https://lists.apache.org/thread.html/r57230f64abc755b8c0725a78fd479cb6624bdf6771e6df9850e27bdb%40%3Cdev.kafka.apache.org%3E
or
https://www.mail-archive.com/dev@kafka.apache.org/msg120900.html

I will continue with the release process and the release announcement will
follow in the next few days.

Thanks everyone for your vote and confirmation of this release candidate.

Sincerely,
Konstantine Karantasis


On Wed, Sep 15, 2021 at 12:28 PM Konstantine Karantasis <
k.karanta...@gmail.com> wrote:

>
> I'm also +1 (binding) given that:
>
> * I ran the release and generated RC2.
> * Verified all checksums and signatures.
> * Built and installed 3.0.0 RC2 from the source archive and the git tag.
> * Spotchecked the Javadocs of RC2.
> * Went through the documentation of 3.0.0 after RC2.
>
> I confirm the minor differences in the documentation mentioned by Bill and
> Randall and I agree that these can be addressed directly to the docs repo
> and added to the source code as a follow-up in the 3.0 branch without
> requiring a new RC. I'll do that while promoting RC2 as the official 3.0.0
> release.
>
> Konstantine
>
> On Wed, Sep 15, 2021 at 12:34 AM Israel Ekpo  wrote:
>
>> Hi Konstantine,
>>
>> Thanks for running the release
>>
>> I ran the following checks:
>>
>>- PGP Signatures used to sign the release artifacts
>>- Validation of Release Artifacts Cryptographic Hashes (ASC MD5 SHA1
>>SHA512)
>>- Validation of Kafka Source and Tests
>>- Validation of Kafka Site Documentation
>>- Manual Check of Javadocs
>>- Validation of Cluster Setup for KRaft and Legacy Modes
>>
>> *+1 from me (non-binding)*
>>
>> To encourage other community members to participate in the release
>> candidate validations and voting, I have set up the following resource as
>> part of the work for KAFKA-9861
>>
>> It is a set of scripts and Docker images that allows community members to
>> run local validations in a consistent manner.
>>
>> https://github.com/izzyacademy/apache-kafka-release-party
>>
>> Please take a look at the resource and share any feedback that you may
>> have.
>>
>> I plan to create a video tutorial that walks community members through how
>> it can be used soon. Stay tuned
>>
>>
>>
>>
>> On Wed, Sep 8, 2021 at 5:59 PM Konstantine Karantasis <
>> kkaranta...@apache.org> wrote:
>>
>> > Hello again Kafka users, developers and client-developers,
>> >
>> > This is the third candidate for release of Apache Kafka 3.0.0.
>> > It is a major release that includes many new features, including:
>> >
>> > * The deprecation of support for Java 8 and Scala 2.12.
>> > * Kafka Raft support for snapshots of the metadata topic and other
>> > improvements in the self-managed quorum.
>> > * Deprecation of message formats v0 and v1.
>> > * Stronger delivery guarantees for the Kafka producer enabled by
>> default.
>> > * Optimizations in OffsetFetch and FindCoordinator requests.
>> > * More flexible Mirror Maker 2 configuration and deprecation of Mirror
>> > Maker 1.
>> > * Ability to restart a connector's tasks on a single call in Kafka
>> Connect.
>> > * Connector log contexts and connector client overrides are now enabled
>> by
>> > default.
>> > * Enhanced semantics for timestamp synchronization in Kafka Streams.
>> > * Revamped public API for Stream's TaskId.
>> > * Default serde becomes null in Kafka Streams and several other
>> > configuration changes.
>> >
>> > You may read and review a more detailed list of changes in the 3.0.0
>> blog
>> > post draft here:
>> >
>> https://blogs.apache.org/preview/kafka/?previewEntry=what-s-new-in-apache6
>> >
>> > Release notes for the 3.0.0 release:
>> > https://home.apache.org/~kkarantasis/kafka-3.0.0-rc2/RELEASE_NOTES.html
>> >
>> > *** Please download, test and vote by Tuesday, September 14, 2021 ***
>> >
>> > Kafka's KEYS file containing PGP keys we use to sign the release:
>> > https://kafka.apache.org/KEYS
>> >
>> > * Release artifacts to be voted upon (source and binary):
>> > https://home

Re: [VOTE] 3.0.0 RC2

2021-09-09 Thread Konstantine Karantasis
Hi Bill,

I just added folder 30 to the kafka-site repo. I hadn't realized that this
separate manual step was part of the RC process and not the official
release (even though, strangely enough, I was expecting myself to be able
to read the docs online). I guess I needed a second nudge after Gary's
first comment on RC1 to see what was missing. I'll update the release doc
to make it clearer.

Should be accessible now. Please take another look.

Konstantine



On Fri, Sep 10, 2021 at 12:50 AM Bill Bejeck  wrote:

> Hi Konstantine,
>
> I've started to do the validation for the release and the link for docs
> doesn't work.
>
> Thanks,
> Bill
>
> On Wed, Sep 8, 2021 at 5:59 PM Konstantine Karantasis <
> kkaranta...@apache.org> wrote:
>
> > Hello again Kafka users, developers and client-developers,
> >
> > This is the third candidate for release of Apache Kafka 3.0.0.
> > It is a major release that includes many new features, including:
> >
> > * The deprecation of support for Java 8 and Scala 2.12.
> > * Kafka Raft support for snapshots of the metadata topic and other
> > improvements in the self-managed quorum.
> > * Deprecation of message formats v0 and v1.
> > * Stronger delivery guarantees for the Kafka producer enabled by default.
> > * Optimizations in OffsetFetch and FindCoordinator requests.
> > * More flexible Mirror Maker 2 configuration and deprecation of Mirror
> > Maker 1.
> > * Ability to restart a connector's tasks on a single call in Kafka
> Connect.
> > * Connector log contexts and connector client overrides are now enabled
> by
> > default.
> > * Enhanced semantics for timestamp synchronization in Kafka Streams.
> > * Revamped public API for Stream's TaskId.
> > * Default serde becomes null in Kafka Streams and several other
> > configuration changes.
> >
> > You may read and review a more detailed list of changes in the 3.0.0 blog
> > post draft here:
> >
> https://blogs.apache.org/preview/kafka/?previewEntry=what-s-new-in-apache6
> >
> > Release notes for the 3.0.0 release:
> > https://home.apache.org/~kkarantasis/kafka-3.0.0-rc2/RELEASE_NOTES.html
> >
> > *** Please download, test and vote by Tuesday, September 14, 2021 ***
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > https://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> > https://home.apache.org/~kkarantasis/kafka-3.0.0-rc2/
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> >
> > * Javadoc:
> > https://home.apache.org/~kkarantasis/kafka-3.0.0-rc2/javadoc/
> >
> > * Tag to be voted upon (off 3.0 branch) is the 3.0.0 tag:
> > https://github.com/apache/kafka/releases/tag/3.0.0-rc2
> >
> > * Documentation:
> > https://kafka.apache.org/30/documentation.html
> >
> > * Protocol:
> > https://kafka.apache.org/30/protocol.html
> >
> > * Successful Jenkins builds for the 3.0 branch:
> > Unit/integration tests:
> >
> >
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka/detail/3.0/129/
> > (1 flaky test failure)
> > System tests:
> > https://jenkins.confluent.io/job/system-test-kafka/job/3.0/67/
> > (1 flaky test failure)
> >
> > /**
> >
> > Thanks,
> > Konstantine
> >
>


Re: [VOTE] 3.0.0 RC1

2021-09-08 Thread Konstantine Karantasis
In light of the recent blockers that were reported in the past few days, I'm
now closing RC1.
These blockers have now been resolved and RC2 is almost ready, so I will
start a new thread for this new release candidate.

Many thanks to everyone who tested RC1.

Konstantine

On Tue, Aug 31, 2021 at 6:34 PM Konstantine Karantasis <
kkaranta...@apache.org> wrote:

>
> Hello Kafka users, developers and client-developers,
>
> This is the second release candidate for Apache Kafka 3.0.0.
> It corresponds to a major release that includes many new features,
> including:
>
> * The deprecation of support for Java 8 and Scala 2.12.
> * Kafka Raft support for snapshots of the metadata topic and
> other improvements in the self-managed quorum.
> * Deprecation of message formats v0 and v1.
> * Stronger delivery guarantees for the Kafka producer enabled by default.
> * Optimizations in OffsetFetch and FindCoordinator requests.
> * More flexible Mirror Maker 2 configuration and deprecation of
> Mirror Maker 1.
> * Ability to restart a connector's tasks on a single call in Kafka Connect.
> * Connector log contexts and connector client overrides are now enabled
> by default.
> * Enhanced semantics for timestamp synchronization in Kafka Streams.
> * Revamped public API for Stream's TaskId.
> * Default serde becomes null in Kafka Streams and several
> other configuration changes.
>
> You may read and review a more detailed list of changes in the 3.0.0 blog
> post draft here:
>
> https://blogs.apache.org/roller-ui/authoring/preview/kafka/?previewEntry=what-s-new-in-apache6
>
> Release notes for the 3.0.0 release:
> https://home.apache.org/~kkarantasis/kafka-3.0.0-rc1/RELEASE_NOTES.html
>
> *** Please download, test and vote by Wednesday, September 8, 2021 ***
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> https://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> https://home.apache.org/~kkarantasis/kafka-3.0.0-rc1/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>
> * Javadoc:
> https://home.apache.org/~kkarantasis/kafka-3.0.0-rc1/javadoc/
>
> * Tag to be voted upon (off 3.0 branch) is the 3.0.0 tag:
> https://github.com/apache/kafka/releases/tag/3.0.0-rc1
>
> * Documentation:
> https://kafka.apache.org/30/documentation.html
>
> * Protocol:
> https://kafka.apache.org/30/protocol.html
>
> * Successful Jenkins builds for the 3.0 branch:
> Unit/integration tests:
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka/detail/3.0/121/pipeline/
> (only a few flaky failures)
> System tests:
> https://jenkins.confluent.io/job/system-test-kafka/job/3.0/57/
>
> /**
>
> Thanks,
> Konstantine
>


[VOTE] 3.0.0 RC2

2021-09-08 Thread Konstantine Karantasis
Hello again Kafka users, developers and client-developers,

This is the third candidate for release of Apache Kafka 3.0.0.
It is a major release with many new features, including:

* The deprecation of support for Java 8 and Scala 2.12.
* Kafka Raft support for snapshots of the metadata topic and other
improvements in the self-managed quorum.
* Deprecation of message formats v0 and v1.
* Stronger delivery guarantees for the Kafka producer enabled by default.
* Optimizations in OffsetFetch and FindCoordinator requests.
* More flexible Mirror Maker 2 configuration and deprecation of Mirror
Maker 1.
* Ability to restart a connector's tasks on a single call in Kafka Connect.
* Connector log contexts and connector client overrides are now enabled by
default.
* Enhanced semantics for timestamp synchronization in Kafka Streams.
* Revamped public API for Stream's TaskId.
* Default serde becomes null in Kafka Streams and several other
configuration changes.

You may read and review a more detailed list of changes in the 3.0.0 blog
post draft here:
https://blogs.apache.org/preview/kafka/?previewEntry=what-s-new-in-apache6

Release notes for the 3.0.0 release:
https://home.apache.org/~kkarantasis/kafka-3.0.0-rc2/RELEASE_NOTES.html

*** Please download, test and vote by Tuesday, September 14, 2021 ***

Kafka's KEYS file containing PGP keys we use to sign the release:
https://kafka.apache.org/KEYS

* Release artifacts to be voted upon (source and binary):
https://home.apache.org/~kkarantasis/kafka-3.0.0-rc2/

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/org/apache/kafka/

* Javadoc:
https://home.apache.org/~kkarantasis/kafka-3.0.0-rc2/javadoc/

* Tag to be voted upon (off 3.0 branch) is the 3.0.0 tag:
https://github.com/apache/kafka/releases/tag/3.0.0-rc2

* Documentation:
https://kafka.apache.org/30/documentation.html

* Protocol:
https://kafka.apache.org/30/protocol.html

* Successful Jenkins builds for the 3.0 branch:
Unit/integration tests:
https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka/detail/3.0/129/
(1 flaky test failure)
System tests: https://jenkins.confluent.io/job/system-test-kafka/job/3.0/67/
(1 flaky test failure)

/**

Thanks,
Konstantine


Re: [VOTE] 3.0.0 RC1

2021-08-31 Thread Konstantine Karantasis
Small correction to my previous email.
The actual link for public preview of the 3.0.0 blog post draft is:

https://blogs.apache.org/preview/kafka/?previewEntry=what-s-new-in-apache6

(see also the email thread with title: [DISCUSS] Please review the 3.0.0
blog post)

Best,
Konstantine

On Tue, Aug 31, 2021 at 6:34 PM Konstantine Karantasis <
kkaranta...@apache.org> wrote:

>
> Hello Kafka users, developers and client-developers,
>
> This is the second release candidate for Apache Kafka 3.0.0.
> It corresponds to a major release that includes many new features,
> including:
>
> * The deprecation of support for Java 8 and Scala 2.12.
> * Kafka Raft support for snapshots of the metadata topic and
> other improvements in the self-managed quorum.
> * Deprecation of message formats v0 and v1.
> * Stronger delivery guarantees for the Kafka producer enabled by default.
> * Optimizations in OffsetFetch and FindCoordinator requests.
> * More flexible Mirror Maker 2 configuration and deprecation of
> Mirror Maker 1.
> * Ability to restart a connector's tasks on a single call in Kafka Connect.
> * Connector log contexts and connector client overrides are now enabled
> by default.
> * Enhanced semantics for timestamp synchronization in Kafka Streams.
> * Revamped public API for Stream's TaskId.
> * Default serde becomes null in Kafka Streams and several
> other configuration changes.
>
> You may read and review a more detailed list of changes in the 3.0.0 blog
> post draft here:
>
> https://blogs.apache.org/roller-ui/authoring/preview/kafka/?previewEntry=what-s-new-in-apache6
>
> Release notes for the 3.0.0 release:
> https://home.apache.org/~kkarantasis/kafka-3.0.0-rc1/RELEASE_NOTES.html
>
> *** Please download, test and vote by Wednesday, September 8, 2021 ***
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> https://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> https://home.apache.org/~kkarantasis/kafka-3.0.0-rc1/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>
> * Javadoc:
> https://home.apache.org/~kkarantasis/kafka-3.0.0-rc1/javadoc/
>
> * Tag to be voted upon (off 3.0 branch) is the 3.0.0 tag:
> https://github.com/apache/kafka/releases/tag/3.0.0-rc1
>
> * Documentation:
> https://kafka.apache.org/30/documentation.html
>
> * Protocol:
> https://kafka.apache.org/30/protocol.html
>
> * Successful Jenkins builds for the 3.0 branch:
> Unit/integration tests:
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka/detail/3.0/121/pipeline/
> (only a few flaky failures)
> System tests:
> https://jenkins.confluent.io/job/system-test-kafka/job/3.0/57/
>
> /**
>
> Thanks,
> Konstantine
>


[VOTE] 3.0.0 RC1

2021-08-31 Thread Konstantine Karantasis
Hello Kafka users, developers and client-developers,

This is the second release candidate for Apache Kafka 3.0.0.
It corresponds to a major release with many new features, including:

* The deprecation of support for Java 8 and Scala 2.12.
* Kafka Raft support for snapshots of the metadata topic and
other improvements in the self-managed quorum.
* Deprecation of message formats v0 and v1.
* Stronger delivery guarantees for the Kafka producer enabled by default.
* Optimizations in OffsetFetch and FindCoordinator requests.
* More flexible Mirror Maker 2 configuration and deprecation of
Mirror Maker 1.
* Ability to restart a connector's tasks on a single call in Kafka Connect.
* Connector log contexts and connector client overrides are now enabled
by default.
* Enhanced semantics for timestamp synchronization in Kafka Streams.
* Revamped public API for Stream's TaskId.
* Default serde becomes null in Kafka Streams and several
other configuration changes.

You may read and review a more detailed list of changes in the 3.0.0 blog
post draft here:
https://blogs.apache.org/roller-ui/authoring/preview/kafka/?previewEntry=what-s-new-in-apache6

Release notes for the 3.0.0 release:
https://home.apache.org/~kkarantasis/kafka-3.0.0-rc1/RELEASE_NOTES.html

*** Please download, test and vote by Wednesday, September 8, 2021 ***

Kafka's KEYS file containing PGP keys we use to sign the release:
https://kafka.apache.org/KEYS

* Release artifacts to be voted upon (source and binary):
https://home.apache.org/~kkarantasis/kafka-3.0.0-rc1/

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/org/apache/kafka/

* Javadoc:
https://home.apache.org/~kkarantasis/kafka-3.0.0-rc1/javadoc/

* Tag to be voted upon (off 3.0 branch) is the 3.0.0 tag:
https://github.com/apache/kafka/releases/tag/3.0.0-rc1

* Documentation:
https://kafka.apache.org/30/documentation.html

* Protocol:
https://kafka.apache.org/30/protocol.html

* Successful Jenkins builds for the 3.0 branch:
Unit/integration tests:
https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka/detail/3.0/121/pipeline/
(only a few flaky failures)
System tests: https://jenkins.confluent.io/job/system-test-kafka/job/3.0/57/

/**

Thanks,
Konstantine


Re: [ANNOUNCE] New Kafka PMC Member: Randall Hauch

2021-04-19 Thread Konstantine Karantasis
Congratulations Randall!

Konstantine

On Mon, Apr 19, 2021 at 1:14 AM Bruno Cadonna  wrote:

> Congrats Randall! Well deserved!
>
> Bruno
>
> On 17.04.21 01:43, Matthias J. Sax wrote:
> > Hi,
> >
> > It's my pleasure to announce that Randall Hauch is now a member of the
> > Kafka PMC.
> >
> > Randall has been a Kafka committer since Feb 2019. He has remained
> > active in the community since becoming a committer.
> >
> >
> >
> > Congratulations Randall!
> >
> >   -Matthias, on behalf of Apache Kafka PMC
> >
>


Re: [ANNOUNCE] New Kafka PMC Member: Bill Bejeck

2021-04-08 Thread Konstantine Karantasis
Congratulations Bill!

Konstantine

On Thu, Apr 8, 2021 at 2:42 AM Mickael Maison 
wrote:

> Congratulations Bill!
>
> On Thu, Apr 8, 2021 at 10:06 AM David Jacot 
> wrote:
> >
> > Congrats, Bill!
> >
> > On Thu, Apr 8, 2021 at 9:54 AM Tom Bentley  wrote:
> >
> > > Congratulations Bill!
> > >
> > > On Thu, Apr 8, 2021 at 2:36 AM Luke Chen  wrote:
> > >
> > > > Congratulations Bill!
> > > >
> > > > Luke
> > > >
> > > > On Thu, Apr 8, 2021 at 9:17 AM Matthias J. Sax 
> wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > It's my pleasure to announce that Bill Bejeck is now a member of
> the
> > > > > Kafka PMC.
> > > > >
> > > > > Bill has been a Kafka committer since Feb 2019. He has remained
> > > > > active in the community since becoming a committer.
> > > > >
> > > > >
> > > > >
> > > > > Congratulations Bill!
> > > > >
> > > > >  -Matthias, on behalf of Apache Kafka PMC
> > > > >
> > > >
> > >
>


Re: [ANNOUNCE] New Committer: Bruno Cadonna

2021-04-07 Thread Konstantine Karantasis
Congratulations Bruno!

Konstantine

On Wed, Apr 7, 2021 at 8:08 PM Sophie Blee-Goldman
 wrote:

> Congrats!
>
> On Wed, Apr 7, 2021 at 6:32 PM Luke Chen  wrote:
>
> > Congrats Bruno!!
> >
> > Luke
> >
> > On Thu, Apr 8, 2021 at 9:18 AM Matthias J. Sax  wrote:
> >
> > > Congrats Bruno! Very well deserved!
> > >
> > >
> > > -Matthias
> > >
> > > On 4/7/21 3:51 PM, Bill Bejeck wrote:
> > > > Congrats Bruno! Well deserved.
> > > >
> > > > Bill
> > > >
> > > > On Wed, Apr 7, 2021 at 6:34 PM Guozhang Wang 
> > wrote:
> > > >
> > > >> Hello all,
> > > >>
> > > >> I'm happy to announce that Bruno Cadonna has accepted his invitation
> > to
> > > >> become an Apache Kafka committer.
> > > >>
> > > >> Bruno has been contributing to Kafka since Jan. 2019 and has made 99
> > > >> commits and more than 80 PR reviews so far:
> > > >>
> > > >> https://github.com/apache/kafka/commits?author=cadonna
> > > >>
> > > >> He worked on a few key KIPs on Kafka Streams:
> > > >>
> > > >> * KIP-471: Expose RocksDB Metrics in Kafka Streams
> > > >> * KIP-607: Add Metrics to Kafka Streams to Report Properties of
> > RocksDB
> > > >> * KIP-662: Throw Exception when Source Topics of a Streams App are
> > > Deleted
> > > >>
> > > >> Besides all the code contributions and reviews, he's also done a
> > > >> great deal for the community: multiple Kafka meetup talks in Berlin
> > > >> and Kafka Summit talks, an introductory class to Kafka at
> > > >> Humboldt-Universität zu Berlin seminars, and has co-authored a paper
> > > >> on Kafka's stream processing semantics in this year's SIGMOD
> > > >> conference (https://en.wikipedia.org/wiki/SIGMOD). Bruno has also
> > > >> been quite active on Stack Overflow and the Apache Kafka mailing
> > > >> lists.
> > > >>
> > > >> Please join me to congratulate Bruno for all the contributions!
> > > >>
> > > >> -- Guozhang
> > > >>
> > > >
> > >
> >
>


Re: [ANNOUNCE] New committer: Colin McCabe

2018-09-26 Thread Konstantine Karantasis
Well deserved! Congratulations Colin.

-Konstantine

On Wed, Sep 26, 2018 at 4:57 AM Srinivas Reddy 
wrote:

> Congratulations Colin 👏
>
> -
> Srinivas
>
> - Typed on tiny keys. pls ignore typos.{mobile app}
>
> On Tue 25 Sep, 2018, 16:39 Ismael Juma,  wrote:
>
> > Hi all,
> >
> > The PMC for Apache Kafka has invited Colin McCabe as a committer and we
> are
> > pleased to announce that he has accepted!
> >
> > Colin has contributed 101 commits and 8 KIPs including significant
> > improvements to replication, clients, code quality and testing. A few
> > highlights were KIP-97 (Improved Clients Compatibility Policy), KIP-117
> > (AdminClient), KIP-227 (Incremental FetchRequests to Increase Partition
> > Scalability), the introduction of findBugs and adding Trogdor (fault
> > injection and benchmarking tool).
> >
> > In addition, Colin has reviewed 38 pull requests and participated in more
> > than 50 KIP discussions.
> >
> > Thank you for your contributions Colin! Looking forward to many more. :)
> >
> > Ismael, for the Apache Kafka PMC
> >
>


Re: [ANNOUNCE] New Kafka PMC Member: Rajini Sivaram

2018-01-17 Thread Konstantine Karantasis
Congrats Rajini!

-Konstantine

On Wed, Jan 17, 2018 at 2:18 PM, Becket Qin  wrote:

> Congratulations, Rajini!
>
> On Wed, Jan 17, 2018 at 1:52 PM, Ismael Juma  wrote:
>
> > Congratulations Rajini!
> >
> > On 17 Jan 2018 10:49 am, "Gwen Shapira"  wrote:
> >
> > Dear Kafka Developers, Users and Fans,
> >
> > Rajini Sivaram became a committer in April 2017.  Since then, she
> remained
> > active in the community and contributed major patches, reviews and KIP
> > discussions. I am glad to announce that Rajini is now a member of the
> > Apache Kafka PMC.
> >
> > Congratulations, Rajini and looking forward to your future contributions.
> >
> > Gwen, on behalf of Apache Kafka PMC
> >
>


Re: Connectors Information from Microsoft SQL Server to Kafka

2018-01-15 Thread Konstantine Karantasis
You might find this connector useful for your use case:

https://github.com/jcustenborder/kafka-connect-cdc-mssql

Konstantine

On Tue, Dec 12, 2017 at 9:29 PM, harish reddy m 
wrote:

> Hi Team,
>
> We have a requirement of replicating the data from MSSQL Source database to
> MSSQL target database.
>
> We are using SQL Server 2014 Web edition as the source database and target
> database as well. We want to replicate the data from Source database to
> target database realtime.
>
> Enabled the change tracking on the SQL Server Database side as change data
> capture is not supported in web edition of SQL Server 2014.
>
> So we are using Kafka Connect to read the changes (updates/inserts) from
> the source database and then push them to target database in realtime.
>
> Could you help us in getting if any Kafka connectors for MSSQL to Kafka and
> Kafka to MSSQL  are available for the above scenario.
>
> We are open for any suggestions as well for the above requirement.
>
> Thanks
> Harish
>


Re: Kafka Producer HA - using Kafka Connect

2018-01-15 Thread Konstantine Karantasis
If I understand correctly and your question refers to general fault
tolerance, the answer is yes: Kafka Connect offers fault tolerance in
distributed mode.

You may start several Connect workers, and if the worker running the task
with your single producer fails unexpectedly, that task will be restarted
on another worker and continue from where it left off.
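
For context, a distributed-mode worker is configured roughly like the sketch below; the topic names and converters are illustrative examples, not prescriptive values:

```properties
# Workers sharing the same group.id form one Connect cluster;
# tasks from a failed worker are rebalanced onto the surviving workers.
bootstrap.servers=localhost:9092
group.id=connect-cluster
# Internal topics where Connect stores connector configs, offsets, and status.
config.storage.topic=connect-configs
offset.storage.topic=connect-offsets
status.storage.topic=connect-status
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
```

Any worker started with the same group.id and storage topics joins the same cluster and can take over tasks.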

-Konstantine

On Thu, Nov 30, 2017 at 4:08 PM, sham singh 
wrote:

> We are looking at implementing Kafka Producer HA ..
>
> i.e there are 2 producers which can produce the same data ..
> The objective is to have High Availability implemented for the Kafka
> Producer ..
>
> i.e. if Producer1 goes down, the Producer2 kick starts and produces data
> starting from the offset committed by the Producer1
>
> Would using Kafka Connect help in this scenario ?
> or a custom solution would have to be built ?
>
> Appreciate your response on this.
>


Re: 1 to N transformers in Kafka Connect

2018-01-15 Thread Konstantine Karantasis
Indeed, there is no flattening operator among Kafka Connect's SMTs at the
moment. The 'apply' method in the Transformation interface accepts a
single record and returns a single (transformed) record, or null.
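
As a plain-Java illustration (this is not the actual Connect Transformation API, just the shape of the two contracts), the difference between today's 1:1 transform and a hypothetical 1:N one is the return type:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;

public class TransformShapes {
    // Shape of today's SMT contract: one record in, one (possibly null) record out.
    static Function<String, String> oneToOne = record -> record.toUpperCase();

    // A hypothetical 1:N (flattening) contract would return a collection instead.
    static Function<String, List<String>> oneToMany =
        record -> Arrays.asList(record.split(","));

    public static void main(String[] args) {
        System.out.println(oneToOne.apply("a,b"));   // prints A,B
        System.out.println(oneToMany.apply("a,b"));  // prints [a, b]
    }
}
```

Until such a contract exists, 1:N expansion has to happen inside the connector itself rather than in an SMT.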

Konstantine.

On Wed, Dec 27, 2017 at 8:25 PM, Ziliang Chen  wrote:

> Hi,
>
> May i ask if it is possible to do 1 kafka record to many Kafka Connect
> records transformation ?
> I know we have 1:1 transformation supported in Kafka Connect, but it
> appears to me there are quite some user cases which requires 1:N
> transformation
>
> Thank you very much !
>
> --
> Regards, Zi-Liang
>
> Mail:zlchen@gmail.com
>


Re: Kafka Connect

2017-10-30 Thread Konstantine Karantasis
Have you tried the HDFS connector here?
https://github.com/confluentinc/kafka-connect-hdfs
Its master and 4.0.x branches support exporting JSON records to '.json'
text files.

Konstantine.



On Sat, Oct 28, 2017 at 11:51 PM, Alexander Atanasov <
alexandaratana...@gmail.com> wrote:

> Hello,
>
> I am trying to consume JSON messages and save it to HDFS and ingest it to
> table in HIVE.
>
> I have no problems doing that with AVRO files but I have troubles with
> JSON.
>
> Is there any way to add schema to the JSON in the consumer or consume it
> without schema?
> Also I can do it with spark but I was looking for solution without using
> it.
>
> Thank you very much for your time,
> Aleksandar Atanasov
>


Re: Consumer poll returning 0 results

2017-10-25 Thread Konstantine Karantasis
Are you producing any records after you start the consumer?

By default, the Kafka consumer starts with auto.offset.reset=latest (
https://kafka.apache.org/documentation/#newconsumerconfigs), which means
that if the consumer doesn't find a previous offset for its consumer group
(e.g. the first time the consumer runs) it will start consuming from the
latest offset onward. Therefore, if no new records are produced to the
topic after the consumer is started (specifically, after it has partitions
assigned to it), the consumer won't return anything.

To try the above snippet, you have two easy options:

1) Produce records after you start the consumer (e.g. with
kafka-console-producer)
2) Set auto.offset.reset to earliest for this consumer (e.g. in your code
above: props.put("auto.offset.reset", "earliest"); ). Test equivalent
behavior with a command such as:
bin/kafka-console-consumer --bootstrap-server localhost:9092 --topic test
--from-beginning

Omitting "--from-beginning" will be equivalent to what you observe above.
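
To make option 2 concrete, here is a minimal sketch of the consumer properties from the snippet above with the offset reset policy set (values match that snippet; only the last property is new):

```java
import java.util.Properties;

public class EarliestConsumerProps {
    // Build the consumer properties from the snippet above, adding the
    // offset reset policy (option 2).
    static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "test");
        props.put("enable.auto.commit", "true");
        // Without this, a consumer group with no committed offsets starts
        // at the log end and only sees records produced after it subscribes.
        props.put("auto.offset.reset", "earliest");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("auto.offset.reset"));
    }
}
```

Pass these properties to the KafkaConsumer constructor exactly as in the original snippet.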

Konstantine

On Wed, Oct 25, 2017 at 6:48 PM, Ted Yu  wrote:

> Can you provide a bit more information ?
>
> Release of Kafka
> Java / Scala version
>
> Thanks
>
> On Wed, Oct 25, 2017 at 6:40 PM, Susheel Kumar 
> wrote:
>
> > Hello Kafka Users,
> >
> > I am trying to run below sample code mentioned in Kafka documentation
> under
> > Automatic Offset Committing for a topic with 1 partition  (tried with 3
> and
> > more partition as well). Create command as follows
> >
> > bin/kafka-topics.sh --create --zookeeper :2181 --replication-factor 3
> > --partitions 1 --topic test --config cleanup.policy=compact,delete
> >
> > but the sample code always returns 0 records unless I provide a custom
> > ConsumerRebalanceListener (below) which sets consumer to beginning.
> >
> > I wonder if the sample code given at Kafka documentation is wrong or am I
> > missing something?
> >
> > https://kafka.apache.org/0101/javadoc/index.html?org/apache/
> > kafka/clients/consumer/KafkaConsumer.html
> >
> >
> > *Automatic Offset Committing*
> >
> > This example demonstrates a simple usage of Kafka's consumer api that
> > relying on automatic offset committing.
> >
> >  Properties props = new Properties();
> >  props.put("bootstrap.servers", "localhost:9092");
> >  props.put("group.id", "test");
> >  props.put("enable.auto.commit", "true");
> >  props.put("auto.commit.interval.ms", "1000");
> >  props.put("key.deserializer",
> > "org.apache.kafka.common.serialization.StringDeserializer");
> >  props.put("value.deserializer",
> > "org.apache.kafka.common.serialization.StringDeserializer");
> >  KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
> >  consumer.subscribe(Arrays.asList("foo", "bar"));
> >  while (true) {
> >  ConsumerRecords<String, String> records = consumer.poll(100);
> >  for (ConsumerRecord<String, String> record : records)
> >  System.out.printf("offset = %d, key = %s, value = %s%n",
> > record.offset(), record.key(), record.value());
> >  }
> >
> >
> >
> > 
> >
> > public class SeekToBeginingConsumerRebalancerListener implements
> > org.apache.kafka.clients.consumer.ConsumerRebalanceListener {
> >
> >  private Consumer<String, String> consumer;
> >  public SeekToBeginingConsumerRebalancerListener(KafkaConsumer<String,
> > String> consumer2) {
> >  this.consumer = consumer2;
> >  }
> >  public void onPartitionsRevoked(Collection<TopicPartition>
> > partitions) {
> >  for (TopicPartition partition : partitions) {
> >
> > //offsetManager.saveOffsetInExternalStore(partition.topic(),
> > partition.partition(),consumer.position(partition));
> >  }
> >  }
> >  public void onPartitionsAssigned(Collection<TopicPartition>
> > partitions) {
> > /* for (TopicPartition partition : partitions) {
> >  consumer.seek(partition,seekTo));
> >  }*/
> >  consumer.seekToBeginning(partitions);
> >  }
> > }
> >
>


Re: Way to check if custom SMT has been added to the classpath or even if it i working.

2017-08-14 Thread Konstantine Karantasis
Hi,

The connector-plugins endpoint does not currently list transformation
classes. However, if you are using the latest Kafka version (>= 0.11.0),
one way to see if your transform is discovered during startup in the given
classpath is to notice whether a log message such as the one below is
printed:

[2017-08-14 17:35:08,625] INFO Added plugin
'org.apache.kafka.connect.transforms.TimestampRouter'
(org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader:132)

With respect to debugging efforts around the places where transformations
are called, two such places are the methods
WorkerSinkTask.convertMessages() and WorkerSourceTask.sendRecords(),
depending on whether your transformation is configured to be applied with
a Sink or a Source connector respectively.

Konstantine.

On Tue, Aug 8, 2017 at 12:49 PM, satyajit vegesna 
wrote:

> Hi All,
>
> i have created a custom SMT and have deployed.
> I would like to know if there is a way to check if the transform is working
> or not.(def not working as the messages are not getting transformed)
>
> I am also trying to remote debug using intellij and nothing seems to work,
> as i do not see any control hitting the debug points.
>
> When i check the connector list using , curl localhost:8083/connector-
> plugins
> , i see all other connector plugins but not the SMT related ones.
>
> Regards.
>


Re: connect in 0.11.0.0 warnings due to class not found exceptions

2017-07-07 Thread Konstantine Karantasis
Good to hear!

gson is used by org.reflections

Cheers,
Konstantine


On Thu, Jul 6, 2017 at 10:03 PM, Koert Kuipers  wrote:

> i did not have log4j.logger.org.reflections=ERROR, because i didnt update
> my log4j files yet. i will do this now.
>
> connect seems to start up fine.
>
> i still wonder why its searching for gson. like... where does it get the
> idea for the start searching for gson? i dont use gson and neither does
> connect it seems?
>
> On Thu, Jul 6, 2017 at 8:09 PM, Konstantine Karantasis <
> konstant...@confluent.io> wrote:
>
> > Hi Koert,
> >
> > these warnings appear to be produced during the class scanning that
> Connect
> > is performing when it's starting up. Connect is using org.reflections to
> > discover plugins (Connectors, Transformations, Converters) in the various
> > locations that it's configured to search for plugins.
> > (such locations are entries in the plugin.path property as well as the
> > supplied CLASSPATH). It's normally safe to ignore the warnings.
> >
> > I would expect that such warnings would be disabled by having:
> >
> > log4j.logger.org.reflections=ERROR
> >
> > in config/connect-log4j.properties.
> >
> > Does this setting exist in your environment? Did you by any chance
> > enable a different log level for org.reflections?
> > Also, is Connect starting up successfully after all these warnings are
> > logged?
> >
> > Konstantine
> >
> >
> > On Thu, Jul 6, 2017 at 3:33 PM, Koert Kuipers  wrote:
> >
> > > i just did a test upgrade to kafka 0.11.0.0 and i am seeing lots of
> > > ClassNotFoundException in the logs for connect-distributed upon
> startup,
> > > see below. is this expected? kind of curious why its looking for say
> gson
> > > while gson jar is not in libs folder.
> > > best,
> > > koert
> > >
> > >
> > > [2017-07-06 22:20:41,844] INFO Reflections took 6944 ms to scan 65
> urls,
> > > producing 3136 keys and 25105 values  (org.reflections.Reflections:
> 232)
> > > [2017-07-06 22:20:42,126] WARN could not get type for name
> > > org.osgi.framework.BundleListener from any class loader
> > > (org.reflections.Reflections:396)
> > > org.reflections.ReflectionsException: could not get type for name
> > > org.osgi.framework.BundleListener
> > > at org.reflections.ReflectionUtils.forName(
> > > ReflectionUtils.java:390)
> > > at
> > > org.reflections.Reflections.expandSuperTypes(Reflections.java:381)
> > > at org.reflections.Reflections.<init>(Reflections.java:126)
> > > at
> > > org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.
> > > scanPluginPath(DelegatingClassLoader.java:221)
> > > at
> > > org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.
> > > scanUrlsAndAddPlugins(DelegatingClassLoader.java:198)
> > > at
> > > org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.
> > > initLoaders(DelegatingClassLoader.java:159)
> > > at
> > > org.apache.kafka.connect.runtime.isolation.Plugins.<init>(Plugins.java:47)
> > > at
> > > org.apache.kafka.connect.cli.ConnectDistributed.main(
> > > ConnectDistributed.java:63)
> > > Caused by: java.lang.ClassNotFoundException:
> > > org.osgi.framework.BundleListener
> > > at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> > > at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> > > at sun.misc.Launcher$AppClassLoader.loadClass(
> Launcher.java:331)
> > > at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> > > at org.reflections.ReflectionUtils.forName(
> > > ReflectionUtils.java:388)
> > > ... 7 more
> > > [2017-07-06 22:20:42,223] WARN could not get type for name
> > > com.google.gson.JsonDeserializer from any class loader
> > > (org.reflections.Reflections:396)
> > > org.reflections.ReflectionsException: could not get type for name
> > > com.google.gson.JsonDeserializer
> > > at org.reflections.ReflectionUtils.forName(
> > > ReflectionUtils.java:390)
> > > at
> > > org.reflections.Reflections.expandSuperTypes(Reflections.java:381)
> > > at org.reflections.Reflections.<init>(Reflections.java:126)
> > > at
> > > org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.

Re: connect in 0.11.0.0 warnings due to class not found exceptions

2017-07-06 Thread Konstantine Karantasis
Hi Koert,

these warnings appear to be produced during the class scanning that Connect
is performing when it's starting up. Connect is using org.reflections to
discover plugins (Connectors, Transformations, Converters) in the various
locations that it's configured to search for plugins.
(such locations are entries in the plugin.path property as well as the
supplied CLASSPATH). It's normally safe to ignore the warnings.

I would expect that such warnings would be disabled by having:

log4j.logger.org.reflections=ERROR

in config/connect-log4j.properties.

Does this setting exist in your environment? Did you by any chance enable
a different log level for org.reflections?
Also, is Connect starting up successfully after all these warnings are
logged?

Konstantine


On Thu, Jul 6, 2017 at 3:33 PM, Koert Kuipers  wrote:

> i just did a test upgrade to kafka 0.11.0.0 and i am seeing lots of
> ClassNotFoundException in the logs for connect-distributed upon startup,
> see below. is this expected? kind of curious why its looking for say gson
> while gson jar is not in libs folder.
> best,
> koert
>
>
> [2017-07-06 22:20:41,844] INFO Reflections took 6944 ms to scan 65 urls,
> producing 3136 keys and 25105 values  (org.reflections.Reflections:232)
> [2017-07-06 22:20:42,126] WARN could not get type for name
> org.osgi.framework.BundleListener from any class loader
> (org.reflections.Reflections:396)
> org.reflections.ReflectionsException: could not get type for name
> org.osgi.framework.BundleListener
> at org.reflections.ReflectionUtils.forName(
> ReflectionUtils.java:390)
> at
> org.reflections.Reflections.expandSuperTypes(Reflections.java:381)
> at org.reflections.Reflections.<init>(Reflections.java:126)
> at
> org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.
> scanPluginPath(DelegatingClassLoader.java:221)
> at
> org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.
> scanUrlsAndAddPlugins(DelegatingClassLoader.java:198)
> at
> org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.
> initLoaders(DelegatingClassLoader.java:159)
> at
> org.apache.kafka.connect.runtime.isolation.Plugins.<init>(Plugins.java:47)
> at
> org.apache.kafka.connect.cli.ConnectDistributed.main(
> ConnectDistributed.java:63)
> Caused by: java.lang.ClassNotFoundException:
> org.osgi.framework.BundleListener
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at org.reflections.ReflectionUtils.forName(
> ReflectionUtils.java:388)
> ... 7 more
> [2017-07-06 22:20:42,223] WARN could not get type for name
> com.google.gson.JsonDeserializer from any class loader
> (org.reflections.Reflections:396)
> org.reflections.ReflectionsException: could not get type for name
> com.google.gson.JsonDeserializer
> at org.reflections.ReflectionUtils.forName(
> ReflectionUtils.java:390)
> at
> org.reflections.Reflections.expandSuperTypes(Reflections.java:381)
> at org.reflections.Reflections.<init>(Reflections.java:126)
> at
> org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.
> scanPluginPath(DelegatingClassLoader.java:221)
> at
> org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.
> scanUrlsAndAddPlugins(DelegatingClassLoader.java:198)
> at
> org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader.
> initLoaders(DelegatingClassLoader.java:159)
> at
> org.apache.kafka.connect.runtime.isolation.Plugins.<init>(Plugins.java:47)
> at
> org.apache.kafka.connect.cli.ConnectDistributed.main(
> ConnectDistributed.java:63)
> Caused by: java.lang.ClassNotFoundException:
> com.google.gson.JsonDeserializer
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
> at org.reflections.ReflectionUtils.forName(
> ReflectionUtils.java:388)
> ... 7 more
>


Re: Kafka Connect: How to set log level for connectors?

2017-01-17 Thread Konstantine Karantasis
Class loading isolation is a known requested feature and we have plans to
add it in one of the forthcoming releases.

Re: the appenders, we should be seeing duplicate messages if there was an
issue there, but I'll double check.

Glad it worked after all.

Regards,
Konstantine

On Tue, Jan 17, 2017 at 4:34 PM, Stephane Maarek <
steph...@simplemachines.com.au> wrote:

> Related to what I discussed below, could there be a bug?
>
> For example, this line (for kafka):
> https://github.com/confluentinc/cp-docker-images/blob/master/debian/kafka/
> include/etc/confluent/docker/log4j.properties.template#L25
>
> looks different from this line (appends ,stdout) is that expected?
> https://github.com/confluentinc/cp-docker-images/blob/master/debian/kafka-
> connect-base/include/etc/confluent/docker/log4j.properties.template#L11
>
>
> Anyway, I figured out my issue… the connector I had created was using
> logback and scala logging. Somehow when the classes are loaded everything
> goes to crap and connect-log4j.properties is completely ignored.
> This should be set somewhere as a disclaimer. It’s been driving me crazy.
> I think it also comes from the risk that all the jars are loaded in the
> same JVM. Could that introduce version conflicts?
>
> Regards,
> Stephane
>
> On 18 January 2017 at 9:35:42 am, Stephane Maarek (
> steph...@simplemachines.com.au) wrote:
>
> Hi Konstantine,
>
> I appreciate you taking the time to respond
> So I have set CONNECT_LOG4J_ROOT_LEVEL=INFO and that’s the output I got
> below
> Now I understand I need to set CONNECT_LOG4J_LOGGERS also. Can I please
> have an example of how to set that value to suppress some debug statements?
>
> For Example, I tried CONNECT_LOG4_LOGGERS="org.
> reflections=INFO,org.apache.kafka=INFO” and yet I’m still seeing all the
> DEBUG statements… like
> 22:34:06.444 [CLASSPATH traversal thread.] DEBUG
> org.reflections.Reflections - could not scan file 
> unit/kafka/producer/ProducerTest.scala
> in url 
> file:/usr/bin/../share/java/kafka/kafka_2.11-0.10.1.0-cp2-test-sources.jar
> with scanner SubTypesScanner
>
> And it seems the bootstrap did take the variables into account as they get
> written successfully:
> root@6da04b77c18e:/# cat /etc/kafka/connect-log4j.properties
>
> log4j.rootLogger=INFO, stdout
>
> log4j.appender.stdout=org.apache.log4j.ConsoleAppender
> log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
> log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n
>
> log4j.logger.org.reflections=INFO, stdout
> log4j.logger.org.apache.kafka=INFO, stdout
>
> I’m new to LOG4J properties so thanks for your help.
>
> Regards,
> Stephane
>
>
> On 18 January 2017 at 8:06:16 am, Konstantine Karantasis (
> konstant...@confluent.io) wrote:
>
> Hi Stephane,
>
> if you are using the docker images from confluent, a way to set the levels
> of specific loggers is described here:
>
> http://docs.confluent.io/3.1.1/cp-docker-images/docs/
> operations/logging.html#log4j-log-levels
>
> For Connect, you would need to set the environment variable
> CONNECT_LOG4J_LOGGERS in a similar way that KAFKA_LOG4J_LOGGERS is set in
> the "docker run" command described above.
>
> Regarding the redirection to stdout, if you are using Docker this is not
> configurable with the current templates because this allows you to view the
> logs for each container directly through docker via the command "docker
> logs ", which is the preferred way.
>
> Hope this helps,
> Konstantine
>
>
> On Mon, Jan 16, 2017 at 9:51 PM, Stephane Maarek <
> steph...@simplemachines.com.au> wrote:
>
> > The kind of output is the following:
> >
> > 05:15:34.878 [main] DEBUG org.apache.kafka.common.metrics.Metrics -
> Added
> > sensor with name connections-closed:
> > 05:15:34.879 [main] DEBUG org.apache.kafka.common.metrics.Metrics -
> Added
> > sensor with name connections-created:
> > 05:15:34.880 [main] DEBUG org.apache.kafka.common.metrics.Metrics -
> Added
> > sensor with name bytes-sent-received:
> > 05:15:34.881 [main] DEBUG org.apache.kafka.common.metrics.Metrics -
> Added
> > sensor with name bytes-sent:
> > 05:15:34.882 [main] DEBUG org.apache.kafka.common.metrics.Metrics -
> Added
> > sensor with name bytes-received:
> > 05:15:34.882 [main] DEBUG org.apache.kafka.common.metrics.Metrics -
> Added
> > sensor with name select-time:
> > 05:15:34.884 [main] DEBUG org.apache.kafka.common.metrics.Metrics -
> Added
> > sensor with name io-time:
> > 05:15:34.905 [main] DEBUG org.apache.kafka.common.metrics.Metrics -
> Added
> > sen

Re: Kafka Connect: How to set log level for connectors?

2017-01-17 Thread Konstantine Karantasis
Hi Stephane,

if you are using the docker images from confluent, a way to set the levels
of specific loggers is described here:

http://docs.confluent.io/3.1.1/cp-docker-images/docs/operations/logging.html#log4j-log-levels

For Connect, you would need to set the environment variable
CONNECT_LOG4J_LOGGERS in a similar way that KAFKA_LOG4J_LOGGERS is set in
the "docker run" command described above.
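
A sketch of how that container environment might look (the logger names and levels below are only examples, as is the root-level variable):

```properties
# Environment variables for the Connect container, per the Confluent
# cp-docker-images convention; the templates render these into
# connect-log4j.properties at container startup.
CONNECT_LOG4J_ROOT_LEVEL=INFO
CONNECT_LOG4J_LOGGERS=org.reflections=ERROR,org.apache.kafka=INFO
```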

Regarding the redirection to stdout, if you are using Docker this is not
configurable with the current templates because this allows you to view the
logs for each container directly through docker via the command "docker
logs ", which is the preferred way.

Hope this helps,
Konstantine


On Mon, Jan 16, 2017 at 9:51 PM, Stephane Maarek <
steph...@simplemachines.com.au> wrote:

> The kind of output is the following:
>
> 05:15:34.878 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added
> sensor with name connections-closed:
> 05:15:34.879 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added
> sensor with name connections-created:
> 05:15:34.880 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added
> sensor with name bytes-sent-received:
> 05:15:34.881 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added
> sensor with name bytes-sent:
> 05:15:34.882 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added
> sensor with name bytes-received:
> 05:15:34.882 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added
> sensor with name select-time:
> 05:15:34.884 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added
> sensor with name io-time:
> 05:15:34.905 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added
> sensor with name heartbeat-latency
> 05:15:34.906 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added
> sensor with name join-latency
> 05:15:34.907 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added
> sensor with name sync-latency
> 05:15:34.970 [DistributedHerder] DEBUG
> org.apache.kafka.common.metrics.Metrics - Added sensor with name
> connections-closed:
> 05:15:34.971 [DistributedHerder] DEBUG
> org.apache.kafka.common.metrics.Metrics - Added sensor with name
> connections-created:
> 05:15:34.971 [DistributedHerder] DEBUG
> org.apache.kafka.common.metrics.Metrics - Added sensor with name
> bytes-sent-received:
> 05:15:34.972 [DistributedHerder] DEBUG
> org.apache.kafka.common.metrics.Metrics - Added sensor with name
> bytes-sent:
> 05:15:34.975 [DistributedHerder] DEBUG
> org.apache.kafka.common.metrics.Metrics - Added sensor with name
> bytes-received:
> 05:15:34.977 [DistributedHerder] DEBUG
> org.apache.kafka.common.metrics.Metrics - Added sensor with name
> select-time:
> 05:15:35.990 [DistributedHerder] DEBUG
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Group
> kafka-connect-main has no committed offset for partition
> _connect_offsets-39
> 05:15:35.990 [DistributedHerder] DEBUG
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Group
> kafka-connect-main has no committed offset for partition _connect_offsets-6
> 05:15:35.990 [DistributedHerder] DEBUG
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Group
> kafka-connect-main has no committed offset for partition
> _connect_offsets-35
> 05:15:35.990 [DistributedHerder] DEBUG
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Group
> kafka-connect-main has no committed offset for partition _connect_offsets-2
> 05:15:35.990 [DistributedHerder] DEBUG
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Group
> kafka-connect-main has no committed offset for partition
> _connect_offsets-31
> 05:15:35.990 [DistributedHerder] DEBUG
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Group
> kafka-connect-main has no committed offset for partition
> _connect_offsets-26
> 05:15:35.990 [DistributedHerder] DEBUG
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Group
> kafka-connect-main has no committed offset for partition
> _connect_offsets-22
> 05:15:35.991 [DistributedHerder] DEBUG
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Group
> kafka-connect-main has no committed offset for partition
> _connect_offsets-18
> 05:46:58.401 [CLASSPATH traversal thread.] DEBUG
> org.reflections.Reflections - could not scan file
> groovy/ui/icons/page_copy.png in url
> file:/usr/share/java/kafka-connect-hdfs/groovy-all-2.1.6.jar with scanner
> TypeAnnotationsScanner
> 05:46:58.401 [CLASSPATH traversal thread.] DEBUG
> org.reflections.Reflections - could not scan file
> groovy/ui/icons/page_copy.png in url
> file:/usr/share/java/kafka-connect-hdfs/groovy-all-2.1.6.jar with scanner
> SubTypesScanner
>
>
> *How do I stop all these loggers?*
>
> That’s what my connect-log4j.properties looks like:
>
>
> log4j.rootLogger=INFO, stdout
>
> log4j.appender.stdout=org.apache.log4j.ConsoleAppender
> log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
> log4j.appen

Re: Debugging Kafka Connect connector in a IDE

2017-01-14 Thread Konstantine Karantasis
Hi,

Still, the simplest way to do what you are asking for is to attach a remote
debugger (e.g. remote configuration in IntelliJ).
However, to debug your connector from the very start, you'll need to set
the following two environment variables in addition:

export KAFKA_DEBUG=y; export DEBUG_SUSPEND_FLAG=y;

This will "freeze" the Connect process until you attach your debugger from
your IDE.
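
For reference, these variables are read by kafka-run-class.sh, which the Connect launch scripts go through; the debug port shown below is an assumption to verify against your Kafka version:

```properties
KAFKA_DEBUG=y            # attach the jdwp debug agent to the JVM
DEBUG_SUSPEND_FLAG=y     # suspend=y: block startup until a debugger attaches
JAVA_DEBUG_PORT=5005     # port the IDE's remote-debug configuration connects to
```

In IntelliJ, create a "Remote" run configuration pointing at that host and port, then attach once Connect is started.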

Cheers,
Konstantine

On Sat, Jan 14, 2017 at 1:26 PM, Paolo Patierno  wrote:

> Hi all,
>
> what is the best way or best practice for debugging a connector developed
> for Kafka Connect inside an IDE like IntelliJ or Eclipse ?
>
> Of course I can start Kafka Connect and the connector from the provided
> script and then attach a remote debugger but I'd like to debug from the
> connector creation and configuration.
>
> Thanks,
> Paolo
>


Re: Creating a connector with Kafka Connect Distributed returning 500 error

2016-12-07 Thread Konstantine Karantasis
The bug I was referring to was only in trunk for just a while. Thus, your
issue must be related to something else, even though the response statuses
are similar.

Let me know if you want to share a larger and more detailed (at least
DEBUG level) snapshot of the parts of the logs that might be related to
this failure.

Cheers,
Konstantine

On Wed, Dec 7, 2016 at 11:15 AM, Phillip Mann  wrote:

> Hello Konstantine,
>
>
>
> Thanks for your reply.
>
>
>
> I am using Confluent 3.0.1 installed on my machine and our cluster.
> However, our AWS cluster has Confluent 3.1.1 installed so I will test with
> 3.1.1 client and cluster and see if this resolves the issue.  Additionally,
> I’ll use the debug levels if this does not resolve my issue.
>
>
>
> If not, I’ll explore the trunk repo but I would prefer to use stable
> versions of CP / Kafka that can be accessed with Maven.
>
>
>
> Thanks again.
>
>
>
> Phillip
>
>
>
> > Hi Phillip,
>
> >
>
> > may I ask which Kafka version did you use?
>
> >
>
> > trunk repo in Apache Kafka contained briefly a bug in Connect framework
>
Re: Creating a connector with Kafka Connect Distributed returning 500 error

2016-12-06 Thread Konstantine Karantasis
Hi Phillip,

May I ask which Kafka version you are using?

The trunk repo of Apache Kafka briefly contained a bug in the Connect
framework (during the past week) that produced failures similar to the one
you describe (only in distributed mode). A fix was pushed yesterday.

Taking your questions in reverse order:

3) Some useful step-by-step information is provided in the quickstart guide
here:
https://kafka.apache.org/quickstart#quickstart_kafkaconnect

as well as in Confluent's documentation:
http://docs.confluent.io/3.1.0/connect/quickstart.html#

Alternatively, you might want to follow the quickstart guide of one of the
open source connectors, here:
http://docs.confluent.io/3.1.0/connect/connectors.html
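For reference, the distributed quickstart essentially boils down to starting
a worker with the properties file shipped in the Kafka distribution (the
paths below assume the standard tarball layout; adjust them to your install):

```shell
# Start a Kafka Connect worker in distributed mode.
# connect-distributed.properties must point bootstrap.servers at your
# cluster and name the internal config/offset/status storage topics.
bin/connect-distributed.sh config/connect-distributed.properties

# Once the worker is up, its REST API should answer on port 8083:
curl http://localhost:8083/connector-plugins
```

If the GET above works but POSTs to /connectors time out, that usually
points at the worker failing to write to its internal topics rather than at
the REST layer itself.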

2) From what you mention above, it seems likely that you're hitting this
temporary bug. But again, that depends on which Kafka version you've been
using.

1) Generating logs at one of the debug levels (e.g. DEBUG, TRACE) is
usually a useful source of information.
Alternatively, you may choose to run Connect in debug mode by setting the
environment variable KAFKA_DEBUG and attaching a remote debugger to it
(such as IntelliJ's remote debugging capability). With respect to live
debugging, we are planning to post a step-by-step guide for Kafka and Kafka
Connect soon.
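To make those two suggestions concrete (paths assume the standard Kafka
tarball layout, and the debug port below is the conventional JVM default,
which may differ in your version, so treat both as assumptions to verify):

```shell
# 1) Turn up logging: in config/connect-log4j.properties change
#      log4j.rootLogger=INFO, stdout
#    to
#      log4j.rootLogger=DEBUG, stdout
#    and restart the worker so the new level takes effect.

# 2) Or start the worker with a JVM remote-debug agent attached.
#    KAFKA_DEBUG is read by the launcher script (bin/kafka-run-class.sh);
#    port 5005 is the usual default, but check your Kafka version.
KAFKA_DEBUG=y bin/connect-distributed.sh config/connect-distributed.properties
# ...then attach IntelliJ's "Remote" run configuration to localhost:5005.
```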

Regards,
Konstantine

On Tue, Dec 6, 2016 at 11:22 AM, Phillip Mann  wrote:

> I am working on migrating from Camus to Kafka Connect. I am working on the
> implementation of Kafka Connect and specifically focused on distributed
> mode. I am able to start a worker successfully on my local machine which I
> assume communicates with my Kafka cluster. I am further able to run two GET
> commands such as / and /connector-plugins which return the correct JSON.
> However, when I try to POST a command to create a connector, I receive a
> 500 error and a time out. Specifically, I use this command to POST for
> testing:
>
> curl -X POST -H "Content-Type: application/json" --data '{"name":
> "local-file-sink", "config": {"connector.class":"FileStreamSinkConnector",
> "tasks.max":"1", "file":"test.sink.txt", "topics":"myTopic" }}'
> localhost:8083/connectors
>
> and eventually I get this response:
>
> {"error_code": 500, "message": "Request timed out"}
>
> I am lost as to what is going on. The logs from my Kafka Connect
> distributed worker show this:
>
> [2016-12-05 14:34:32,436] INFO 0:0:0:0:0:0:0:1 - - [05/Dec/2016:22:34:32
> +] "GET /connector-plugins HTTP/1.1" 200 315  2
> (org.apache.kafka.connect.runtime.rest.RestServer:60)
> [2016-12-05 15:05:25,422] INFO 0:0:0:0:0:0:0:1 - - [05/Dec/2016:23:05:25
> +] "GET /connector-plugins HTTP/1.1" 200 315  3
> (org.apache.kafka.connect.runtime.rest.RestServer:60)
> [2016-12-05 15:05:28,389] INFO 0:0:0:0:0:0:0:1 - - [05/Dec/2016:23:05:28
> +] "GET /connector-plugins HTTP/1.1" 200 315  2
> (org.apache.kafka.connect.runtime.rest.RestServer:60)
> [2016-12-05 15:07:38,644] INFO 0:0:0:0:0:0:0:1 - - [05/Dec/2016:23:06:08
> +] "GET /connectors HTTP/1.1" 500 48  90003 (org.apache.kafka.connect.
> runtime.rest.RestServer:60)
> [2016-12-05 15:07:44,450] INFO 0:0:0:0:0:0:0:1 - - [05/Dec/2016:23:07:44
> +] "GET /connector-plugins HTTP/1.1" 200 315  1
> (org.apache.kafka.connect.runtime.rest.RestServer:60)
> [2016-12-05 15:13:06,703] INFO 0:0:0:0:0:0:0:1 - - [05/Dec/2016:23:11:36
> +] "POST /connectors HTTP/1.1" 500 48  90003 (org.apache.kafka.connect.
> runtime.rest.RestServer:60)
> [2016-12-05 15:15:38,506] INFO 0:0:0:0:0:0:0:1 - - [05/Dec/2016:23:14:08
> +] "POST /connectors HTTP/1.1" 500 48  90005 (org.apache.kafka.connect.
> runtime.rest.RestServer:60)
>
> where you can see the error codes and the commands.
>
> I guess my main questions and issues are:
>
>   1.  How can I better debug Kafka Connect so I can try and fix this?
>   2.  Is there anything that I'm doing that is glaringly wrong?
>   3.  Is there any step-by-step documentation or blog posts on getting a
> Kafka Connect distributed worker and connector to run? I have not really
> seen any, not even best-practices documentation. Maybe I am just way too
> early an adopter.
>
> I look forward to hearing back from the community and thank you for your
> help!
>
>