[GitHub] [kafka-site] cc13ny closed pull request #345: Fix the formatting of example RocksDBConfigSetter

2021-04-05 Thread GitBox


cc13ny closed pull request #345:
URL: https://github.com/apache/kafka-site/pull/345


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka-site] cc13ny commented on pull request #345: Fix the formatting of example RocksDBConfigSetter

2021-04-05 Thread GitBox


cc13ny commented on pull request #345:
URL: https://github.com/apache/kafka-site/pull/345#issuecomment-813862721


   Thanks. I created https://github.com/apache/kafka/pull/10486. I will close 
it now, but will reopen it if it's not picked up before the 2.8 release. Also, 
how can I know whether it will be picked up?






Re: Re: [DISCUSS] KIP-706: Add method "Producer#produce" to return CompletionStage instead of Future

2021-04-05 Thread Chia-Ping Tsai
hi Ismael,

In order to minimize the pain (and code changes), I have removed all 
deprecations. The main purpose of this KIP is to introduce the new API, so I 
plan to keep using ProducerRecord in the internal processing. The new 
interfaces (SendRecord and SendTarget) are used by the new API only and are 
converted to ProducerRecord internally. The benefit is that we can focus on 
the new API rather than a lot of changes to the code base (for example, 
ProducerInterceptor and other classes which accept ProducerRecord).

On 2021/04/02 02:14:40, Ismael Juma  wrote: 
> To avoid a separate KIP, maybe we can agree on the grace period as part of
> this KIP. Maybe 3 releases (~1 year) is a good target?
> 
> Ismael
> 
> On Thu, Apr 1, 2021, 6:39 PM Chia-Ping Tsai  wrote:
> 
> > > Deprecating `send` is going to be extremely disruptive to all existing
> > > users (if you use -Werror, it will require updating every call site).
> > Have
> > > we considered encouraging the usage of the new method while not
> > deprecating
> > > the old methods? We could consider deprecation down the line. The
> > existing
> > > methods work fine for many people, it doesn't seem like a good idea to
> > > penalize them.
> > >
> > > Instead, we can make the new method available for people who benefit from
> > > it. After a grace period (3 releases), we can consider deprecating.
> > > Thoughts?
> >
> > Fair enough. will remove deprecation.
> >
> > On 2021/03/31 14:41:22, Ismael Juma  wrote:
> > > Hi Chia-Ping,
> > >
> > > Deprecating `send` is going to be extremely disruptive to all existing
> > > users (if you use -Werror, it will require updating every call site).
> > Have
> > > we considered encouraging the usage of the new method while not
> > deprecating
> > > the old methods? We could consider deprecation down the line. The
> > existing
> > > methods work fine for many people, it doesn't seem like a good idea to
> > > penalize them.
> > >
> > > Instead, we can make the new method available for people who benefit from
> > > it. After a grace period (3 releases), we can consider deprecating.
> > > Thoughts?
> > >
> > > Ismael
> > >
> > > On Tue, Mar 30, 2021 at 8:50 PM Chia-Ping Tsai 
> > wrote:
> > >
> > > > hi,
> > > >
> > > > I have updated KIP according to my latest response. I will start vote
> > > > thread next week if there is no more comments :)
> > > >
> > > > Best Regards,
> > > > Chia-Ping
> > > >
> > > > On 2021/01/31 05:39:17, Chia-Ping Tsai  wrote:
> > > > > It seems to me changing the input type might complicate the
> > > > migration from the deprecated send method to the new API.
> > > > >
> > > > > Personally, I prefer to introduce an interface called “SendRecord” to
> > > > replace ProducerRecord. Hence, the new API/classes are shown below.
> > > > >
> > > > > 1. CompletionStage send(SendRecord)
> > > > > 2. class ProducerRecord implement SendRecord
> > > > > 3. Introduce builder pattern for SendRecord
> > > > >
> > > > > That includes following benefit.
> > > > >
> > > > > 1. Kafka users who use neither the return type nor the callback do not
> > > > need to change code even though we remove the deprecated send methods (of
> > > > course, they still need to compile their code with the new Kafka).
> > > > >
> > > > > 2. Kafka users who need Future can easily migrate to the new API by
> > > > regex replacement (cast ProducerRecord to SendRecord and add
> > > > toCompletableFuture).
> > > > >
> > > > > 3. It is easy to support topic ids in the future. We can add new
> > > > methods to the SendRecord builder. For example:
> > > > >
> > > > > Builder topicName(String)
> > > > > Builder topicId(UUID)
> > > > >
> > > > > 4. The builder pattern can make code more readable. Especially since
> > > > ProducerRecord has a lot of fields which can be defined by users.
> > > > > —
> > > > > Chia-Ping
> > > > >
> > > > > On 2021/01/30 22:50:36 Ismael Juma wrote:
> > > > > > Another thing to think about: the consumer api currently has
> > > > > > `subscribe(String|Pattern)` and a number of methods that accept
> > > > > > `TopicPartition`. A similar approach could be used for the
> > Consumer to
> > > > work
> > > > > > with topic ids or topic names. The consumer side also has to
> > support
> > > > > > regexes so it probably makes sense to have a separate interface.
> > > > > >
> > > > > > Ismael
> > > > > >
> > > > > > On Sat, Jan 30, 2021 at 2:40 PM Ismael Juma 
> > wrote:
> > > > > >
> > > > > > > I think this is a promising idea. I'd personally avoid the
> > overload
> > > > and
> > > > > > > simply have a `Topic` type that implements `SendTarget`. It's a
> > mix
> > > > of both
> > > > > > > proposals: strongly typed, no overloads and general class names
> > that
> > > > > > > implement `SendTarget`.
> > > > > > >
> > > > > > > Ismael
> > > > > > >
> > > > > > > On Sat, Jan 30, 2021 at 2:22 PM Jason Gustafson <
> > ja...@confluent.io>
> > > > > > > wrote:
> > > > > > >
> > > > > > >> Giving this a little more thought, I imagine sending to a topic
> > is
> > > > the
> > > > > > >> most
> > >
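The thread above discusses an API shape rather than final code. Below is a minimal, self-contained sketch of what the proposal could look like; the names (SendRecord, the builder methods, produce) are taken from the emails and simplified, and this is a hypothetical illustration, not the actual KIP-706 interfaces:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;

// Hypothetical sketch of the KIP-706 proposal: a SendRecord interface with a
// builder, implemented by the existing ProducerRecord, and a produce() method
// that returns CompletionStage instead of Future.
public class Kip706Sketch {

    interface SendRecord {
        String topic();
        String value();

        static Builder builder() { return new Builder(); }

        class Builder {
            private String topic;
            private String value;

            // Builder methods like topicName(String) could later be joined by
            // topicId(UUID), as suggested in point 3 of the email.
            Builder topicName(String topic) { this.topic = topic; return this; }
            Builder value(String value) { this.value = value; return this; }

            SendRecord build() { return new ProducerRecord(topic, value); }
        }
    }

    // Existing record type, retrofitted to implement the new interface so
    // internal code can keep working with ProducerRecord.
    static class ProducerRecord implements SendRecord {
        private final String topic;
        private final String value;

        ProducerRecord(String topic, String value) {
            this.topic = topic;
            this.value = value;
        }
        public String topic() { return topic; }
        public String value() { return value; }
    }

    // New-style send: SendRecord is converted to ProducerRecord internally,
    // as described in the email; here we just complete immediately.
    static CompletionStage<String> produce(SendRecord record) {
        ProducerRecord internal = (record instanceof ProducerRecord)
                ? (ProducerRecord) record
                : new ProducerRecord(record.topic(), record.value());
        return CompletableFuture.completedFuture(internal.topic());
    }

    public static void main(String[] args) {
        SendRecord rec = SendRecord.builder().topicName("events").value("v1").build();
        String topic = produce(rec).toCompletableFuture().join();
        System.out.println("sent to " + topic);
    }
}
```

With this shape, callers who still want the old Future behavior just call toCompletableFuture() on the returned stage, which is the migration path described in the thread.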

Re: New Jenkins job for master and release branches

2021-04-05 Thread Luke Chen
Thanks Ismael!
I love that each JDK+Scala combination is built in parallel!

Thanks.
Luke

On Mon, Apr 5, 2021 at 12:24 PM Gwen Shapira 
wrote:

> W00t! This is super awesome. Thank you so much!!!
>
> On Sun, Apr 4, 2021 at 2:22 PM Ismael Juma  wrote:
>
> > Hi all,
> >
> > As part of KAFKA-12614
> > <https://issues.apache.org/jira/browse/KAFKA-12614>,
> > I have created a Jenkinsfile-based job for trunk, 2.8 and future release
> > branches:
> >
> > https://ci-builds.apache.org/job/Kafka/job/kafka/
> >
> > This has several advantages (many of these are already the case for PR
> > builds):
> >
> >1. The configuration is in source control.
> >2. PR and branch build configuration share most of the logic.
> >3. Unstable (tests failed) and unsuccessful (compile, checkstyle, etc.
> >failed) builds are given different status and color (red vs amber).
> >    4. Reporting within a build is improved (each stage is shown as part of
> >    the build graph)
> >    5. Improved parallelism (each JDK+Scala combination is built in
> >    parallel)
> >    6. Release branches get better JDK version coverage (instead of only
> >    JDK 8 as it used to be)
> >    7. Instead of creating a new job for each new release, we can adjust
> >    the configuration to allow the new release branch.
> >
> > There is currently an open PR
> > <https://github.com/apache/kafka/pull/10473> to
> > extend the Jenkinsfile with functionality desired for branch builds. Once
> > that is merged and has been shown to work correctly, I will delete legacy
> > Jenkins jobs like:
> >
> >- https://ci-builds.apache.org/job/Kafka/job/kafka-2.8-jdk8/
> >- https://ci-builds.apache.org/job/Kafka/job/kafka-trunk-jdk11/
> >
> > Let me know if you have questions or comments.
> >
> > Ismael
> >
>
>
> --
> Gwen Shapira
> Engineering Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog
>


Re: Question on kafka connect's incremental rebalance algorithm

2021-04-05 Thread Luke Chen
Hi Ahmed,
I think this bug, KAFKA-12495, is the issue you
described; it is under code review now.
If not, please open another JIRA ticket to track it.

Thanks.
Luke

On Tue, Apr 6, 2021 at 4:18 AM Thouqueer Ahmed <
thouqueer.ah...@maplelabs.com> wrote:

> Hi,
>   What would happen when a new worker joins after the
> synchronization barrier?
>
> As per the code, in the performTaskAssignment function of the
> IncrementalAssignor, the boolean canRevoke is false when it is called
> during the 2nd rebalance. Hence revocation is skipped and only assignment
> is done. This would lead to an imbalance in #tasks/#connectors.
>
> How is this case handled ?
>
> Thanks,
> Ahmed
>
>


[jira] [Resolved] (KAFKA-8924) Default grace period (-1) of TimeWindows causes suppress to emit events after 24h

2021-04-05 Thread A. Sophie Blee-Goldman (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

A. Sophie Blee-Goldman resolved KAFKA-8924.
---
Resolution: Duplicate

> Default grace period (-1) of TimeWindows causes suppress to emit events after 
> 24h
> -
>
> Key: KAFKA-8924
> URL: https://issues.apache.org/jira/browse/KAFKA-8924
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 2.3.0
>Reporter: Michał
>Assignee: Michał
>Priority: Major
>  Labels: needs-kip
>
> h2. Problem 
> The default creation of TimeWindows, like
> {code}
> TimeWindows.of(ofMillis(xxx))
> {code}
> calls an internal constructor
> {code}
> return new TimeWindows(sizeMs, sizeMs, -1, DEFAULT_RETENTION_MS);
> {code}
> And the *-1* parameter is the default grace period which I think is here for 
> backward compatibility
> {code}
> @SuppressWarnings("deprecation") // continuing to support 
> Windows#maintainMs/segmentInterval in fallback mode
> @Override
> public long gracePeriodMs() {
> // NOTE: in the future, when we remove maintainMs,
> // we should default the grace period to 24h to maintain the default 
> behavior,
> // or we can default to (24h - size) if you want to be super accurate.
> return graceMs != -1 ? graceMs : maintainMs() - size();
> }
> {code}
> The problem is that if you use a TimeWindows with gracePeriod of *-1* 
> together with suppress *untilWindowCloses*, it never emits an event.
> You can check the Suppress tests 
> (SuppressScenarioTest.shouldSupportFinalResultsForTimeWindows), where 
> [~vvcephei] was (maybe) aware of that and all the scenarios specify the 
> gracePeriod.
> I will add a test without it on my branch and it will fail.
> The test: 
> https://github.com/atais/kafka/commit/221a04dc40d636ffe93a0ad95dfc6bcad653f4db
>  
> h2. Now what can be done
> One easy fix would be to change the default value to 0, which works fine for 
> me in my project; however, I am not aware of the impact it would have due to 
> the changes in the *gracePeriodMs* method mentioned before.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] Please review 2.8.0 blog post

2021-04-05 Thread Sagar
Got it, Thanks for the explanation John!

Thanks!
Sagar.

On Tue, Apr 6, 2021 at 8:16 AM John Roesler  wrote:

> Oh, my apologies, Sagar,
>
> That link will not resolve until the release. The release
> notes that I prepared as part of the RC are available here:
> https://home.apache.org/~vvcephei/kafka-2.8.0-rc0/RELEASE_NOTES.html
>
> I used the post-release link because I didn't want to forget
> it, but now I see how that would be unexpected as a
> reviewer. I should have mentioned it in my first email.
>
> Along the same lines, the video link will also not be live
> until the release.
>
> Thank you for taking a look!
>
> -John
>
> On Tue, 2021-04-06 at 07:45 +0530, Sagar wrote:
> > Hi,
> >
> > I am not sure if others are experiencing it, but when I click on the
> > Release notes link provided on the page, it keeps throwing 404:
> >
> >
> https://dist.apache.org/repos/dist/release/kafka/2.8.0/RELEASE_NOTES.html
> >
> > Not sure if its an issue but thought I will call it out :D
> >
> > Thanks!
> > Sagar.
> >
> > On Mon, Apr 5, 2021 at 6:29 PM Adam Bellemare 
> > wrote:
> >
> > > Read it all. It looks good to me in terms of structure and content. I
> am
> > > not sufficiently up to date on all the features that are otherwise
> included
> > > in 2.8.0, but the ones listed seem very prominent!
> > >
> > >
> > >
> > > On Thu, Apr 1, 2021 at 4:39 PM John Roesler 
> wrote:
> > >
> > > > Hello all,
> > > >
> > > > In the steady march toward the Apache Kafka 2.8.0 release, I
> > > > have prepared a draft of the release announcement post:
> > > >
> > > >
> > >
> https://blogs.apache.org/preview/kafka/?previewEntry=what-s-new-in-apache5
> > > >
> > > > If you have a moment, I would greatly appreciate your
> > > > reviews.
> > > >
> > > > Thank you,
> > > > -John
> > > >
> > > >
> > >
>
>
>


[jira] [Resolved] (KAFKA-6603) Kafka streams off heap memory usage does not match expected values from configuration

2021-04-05 Thread A. Sophie Blee-Goldman (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

A. Sophie Blee-Goldman resolved KAFKA-6603.
---
Resolution: Fixed

> Kafka streams off heap memory usage does not match expected values from 
> configuration
> -
>
> Key: KAFKA-6603
> URL: https://issues.apache.org/jira/browse/KAFKA-6603
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.0.0
>Reporter: Igor Calabria
>Priority: Minor
>
> Hi, I have a simple aggregation pipeline that's backed by the default state 
> store (rocksdb). The pipeline works fine except that the off-heap memory 
> usage is way higher than expected. Following the 
> [documentation|https://docs.confluent.io/current/streams/developer-guide/config-streams.html#streams-developer-guide-rocksdb-config]
>  has some effect (memory usage is reduced) but the values don't match at all. 
> The java process is set to run with just `-Xmx300m -Xms300m`  and rocksdb 
> config looks like this
> {code:java}
> tableConfig.setCacheIndexAndFilterBlocks(true);
> tableConfig.setBlockCacheSize(1048576); //1MB
> tableConfig.setBlockSize(16 * 1024); // 16KB
> options.setTableFormatConfig(tableConfig);
> options.setMaxWriteBufferNumber(2);
> options.setWriteBufferSize(8 * 1024); // 8KB{code}
> To estimate memory usage, I'm using this formula  
> {noformat}
> (block_cache_size + write_buffer_size * write_buffer_number) * segments * 
> partitions{noformat}
> Since my topic has 25 partitions with 3 segments each (it's a windowed 
> store), off-heap memory usage should be about 76MB. What I'm seeing in 
> production is upwards of 300MB. Even taking into consideration extra 
> overhead from rocksdb compaction threads, this seems a bit high (especially 
> when the disk usage for all files is just 1GB).
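For reference, plugging the report's numbers into the formula reproduces the ~76MB figure quoted above (the variable names here are just local names for the formula's terms, not RocksDB API calls):

```java
// Reproduces the off-heap estimate from the report:
// (block_cache_size + write_buffer_size * write_buffer_number)
//     * segments * partitions
public class RocksDbMemoryEstimate {
    static long estimateBytes(long blockCacheSize, long writeBufferSize,
                              int writeBufferNumber, int segments, int partitions) {
        return (blockCacheSize + writeBufferSize * writeBufferNumber)
                * segments * partitions;
    }

    public static void main(String[] args) {
        // 1MB block cache, 8KB write buffers x2, 3 segments, 25 partitions.
        long bytes = estimateBytes(1_048_576L, 8 * 1024L, 2, 3, 25);
        System.out.printf("estimated off-heap: %.1f MB%n",
                bytes / (1024.0 * 1024.0));
        // ~76.2 MB per the formula, versus the 300+ MB observed in production.
    }
}
```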





[jira] [Resolved] (KAFKA-8165) Streams task causes Out Of Memory after connection issues and store restoration

2021-04-05 Thread A. Sophie Blee-Goldman (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

A. Sophie Blee-Goldman resolved KAFKA-8165.
---
Resolution: Fixed

> Streams task causes Out Of Memory after connection issues and store 
> restoration
> ---
>
> Key: KAFKA-8165
> URL: https://issues.apache.org/jira/browse/KAFKA-8165
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.0
> Environment: 3 nodes, 22 topics, 16 partitions per topic, 1 window 
> store, 4 KV stores. 
> Kafka Streams application cluster: 3 AWS t2.large instances (8GB mem). 1 
> application instance, 2 threads per instance.
> Kafka 2.1, Kafka Streams 2.1
> Amazon Linux.
> Scala application, on Docker based on openJdk9. 
>Reporter: Di Campo
>Priority: Major
>
> Having a Kafka Streams 2.1 application, when Kafka brokers are stable, the 
> (largely stateful) application has been consuming ~160 messages per second at 
> a sustained rate for several hours. 
> However it started having connection issues to the brokers. 
> {code:java}
> Connection to node 3 (/172.31.36.118:9092) could not be established. Broker 
> may not be available. (org.apache.kafka.clients.NetworkClient){code}
> Also it began showing a lot of these errors: 
> {code:java}
> WARN [Consumer 
> clientId=stream-processor-81e1ce17-1765-49f8-9b44-117f983a2d19-StreamThread-2-consumer,
>  groupId=stream-processor] 1 partitions have leader brokers without a 
> matching listener, including [broker-2-health-check-0] 
> (org.apache.kafka.clients.NetworkClient){code}
> In fact, the _health-check_ topic is in the broker but not consumed by this 
> topology or used in any way by the Streams application (it is just broker 
> healthcheck). It does not complain about topics that are actually consumed by 
> the topology. 
> Some time after these errors (which appear at a rate of 24 per second 
> during ~5 minutes), the following logs appear: 
> {code:java}
> [2019-03-27 15:14:47,709] WARN [Consumer 
> clientId=stream-processor-81e1ce17-1765-49f8-9b44-117f983a2d19-StreamThread-1-restore-consumer,
>  groupId=] Connection to node -3 (/ip3:9092) could not be established. Broker 
> may not be available. (org.apache.kafka.clients.NetworkClient){code}
> In between 6 and then 3 lines of "Connection could not be established" error 
> messages, 3 of these ones slipped in: 
> {code:java}
> [2019-03-27 15:14:47,723] WARN Started Restoration of visitorCustomerStore 
> partition 15 total records to be restored 17 
> (com.divvit.dp.streams.applications.monitors.ConsoleGlobalRestoreListener){code}
>  
>  ... one for each different KV store I have (I still have another KV that 
> does not appear, and a WindowedStore store that also does not appear). 
>  Then I finally see "Restoration Complete" (using a logging 
> ConsoleGlobalRestoreListener as in docs) messages for all of my stores. So it 
> seems it may be fine now to restart the processing.
> Three minutes later, some events get processed, and I see an OOM error:  
> {code:java}
> java.lang.OutOfMemoryError: GC overhead limit exceeded{code}
>  
> ... so given that it usually processes for hours under the same 
> circumstances, I'm wondering whether there is some memory leak in the 
> connection resources or somewhere in the handling of this scenario.
> Kafka and KafkaStreams 2.1





Re: [DISCUSS] Please review 2.8.0 blog post

2021-04-05 Thread John Roesler
Oh, my apologies, Sagar,

That link will not resolve until the release. The release
notes that I prepared as part of the RC are available here:
https://home.apache.org/~vvcephei/kafka-2.8.0-rc0/RELEASE_NOTES.html

I used the post-release link because I didn't want to forget
it, but now I see how that would be unexpected as a
reviewer. I should have mentioned it in my first email.

Along the same lines, the video link will also not be live
until the release.

Thank you for taking a look!

-John

On Tue, 2021-04-06 at 07:45 +0530, Sagar wrote:
> Hi,
> 
> I am not sure if others are experiencing it, but when I click on the
> Release notes link provided on the page, it keeps throwing 404:
> 
> https://dist.apache.org/repos/dist/release/kafka/2.8.0/RELEASE_NOTES.html
> 
> Not sure if its an issue but thought I will call it out :D
> 
> Thanks!
> Sagar.
> 
> On Mon, Apr 5, 2021 at 6:29 PM Adam Bellemare 
> wrote:
> 
> > Read it all. It looks good to me in terms of structure and content. I am
> > not sufficiently up to date on all the features that are otherwise included
> > in 2.8.0, but the ones listed seem very prominent!
> > 
> > 
> > 
> > On Thu, Apr 1, 2021 at 4:39 PM John Roesler  wrote:
> > 
> > > Hello all,
> > > 
> > > In the steady march toward the Apache Kafka 2.8.0 release, I
> > > have prepared a draft of the release announcement post:
> > > 
> > > 
> > https://blogs.apache.org/preview/kafka/?previewEntry=what-s-new-in-apache5
> > > 
> > > If you have a moment, I would greatly appreciate your
> > > reviews.
> > > 
> > > Thank you,
> > > -John
> > > 
> > > 
> > 




Re: [VOTE] KIP-633: Drop 24 hour default of grace period in Streams

2021-04-05 Thread John Roesler
Thanks, Sophie,

I’m +1 (binding)

-John

On Mon, Apr 5, 2021, at 21:34, Sophie Blee-Goldman wrote:
> Hey all,
> 
> I'd like to start the voting on KIP-633, to drop the awkward 24 hour grace
> period and improve the API to raise visibility on an important concept in
> Kafka Streams: grace period and out-of-order data handling.
> 
> Here's the KIP:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-633%3A+Drop+24+hour+default+of+grace+period+in+Streams
> 
> 
> Cheers,
> Sophie
>


Re: [VOTE] KIP-725: Streamlining configurations for TimeWindowedDeserializer.

2021-04-05 Thread John Roesler
Thanks, Sagar!

I’m +1 (binding)

-John

On Mon, Apr 5, 2021, at 21:35, Sophie Blee-Goldman wrote:
> Thanks for the KIP! +1 (binding) from me
> 
> Cheers,
> Sophie
> 
> On Mon, Apr 5, 2021 at 7:13 PM Sagar  wrote:
> 
> > Hi All,
> >
> > I would like to start voting on the following KIP:
> >
> > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=177047930
> >
> > Thanks!
> > Sagar.
> >
>


Re: [VOTE] KIP-725: Streamlining configurations for TimeWindowedDeserializer.

2021-04-05 Thread Sophie Blee-Goldman
Thanks for the KIP! +1 (binding) from me

Cheers,
Sophie

On Mon, Apr 5, 2021 at 7:13 PM Sagar  wrote:

> Hi All,
>
> I would like to start voting on the following KIP:
>
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=177047930
>
> Thanks!
> Sagar.
>


[VOTE] KIP-633: Drop 24 hour default of grace period in Streams

2021-04-05 Thread Sophie Blee-Goldman
Hey all,

I'd like to start the voting on KIP-633, to drop the awkward 24 hour grace
period and improve the API to raise visibility on an important concept in
Kafka Streams: grace period and out-of-order data handling.

Here's the KIP:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-633%3A+Drop+24+hour+default+of+grace+period+in+Streams


Cheers,
Sophie


Re: [DISCUSS] Please review 2.8.0 blog post

2021-04-05 Thread Sagar
Hi,

I am not sure if others are experiencing it, but when I click on the
Release notes link provided on the page, it keeps throwing 404:

https://dist.apache.org/repos/dist/release/kafka/2.8.0/RELEASE_NOTES.html

Not sure if its an issue but thought I will call it out :D

Thanks!
Sagar.

On Mon, Apr 5, 2021 at 6:29 PM Adam Bellemare 
wrote:

> Read it all. It looks good to me in terms of structure and content. I am
> not sufficiently up to date on all the features that are otherwise included
> in 2.8.0, but the ones listed seem very prominent!
>
>
>
> On Thu, Apr 1, 2021 at 4:39 PM John Roesler  wrote:
>
> > Hello all,
> >
> > In the steady march toward the Apache Kafka 2.8.0 release, I
> > have prepared a draft of the release announcement post:
> >
> >
> https://blogs.apache.org/preview/kafka/?previewEntry=what-s-new-in-apache5
> >
> > If you have a moment, I would greatly appreciate your
> > reviews.
> >
> > Thank you,
> > -John
> >
> >
>


[VOTE] KIP-725: Streamlining configurations for TimeWindowedDeserializer.

2021-04-05 Thread Sagar
Hi All,

I would like to start voting on the following KIP:

https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=177047930

Thanks!
Sagar.


Jenkins build is back to normal : Kafka » kafka-2.6-jdk8 #114

2021-04-05 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #671

2021-04-05 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12294; forward auto topic request within envelope on behalf of 
clients (#10142)


--
[...truncated 3.70 MB...]

KafkaZkClientTest > testSetTopicPartitionStatesRaw() PASSED

KafkaZkClientTest > testAclManagementMethods() STARTED

KafkaZkClientTest > testAclManagementMethods() PASSED

KafkaZkClientTest > testPreferredReplicaElectionMethods() STARTED

KafkaZkClientTest > testPreferredReplicaElectionMethods() PASSED

KafkaZkClientTest > testPropagateLogDir() STARTED

KafkaZkClientTest > testPropagateLogDir() PASSED

KafkaZkClientTest > testGetDataAndStat() STARTED

KafkaZkClientTest > testGetDataAndStat() PASSED

KafkaZkClientTest > testReassignPartitionsInProgress() STARTED

KafkaZkClientTest > testReassignPartitionsInProgress() PASSED

KafkaZkClientTest > testCreateTopLevelPaths() STARTED

KafkaZkClientTest > testCreateTopLevelPaths() PASSED

KafkaZkClientTest > testGetAllTopicsInClusterDoesNotTriggerWatch() STARTED

KafkaZkClientTest > testGetAllTopicsInClusterDoesNotTriggerWatch() PASSED

KafkaZkClientTest > testIsrChangeNotificationGetters() STARTED

KafkaZkClientTest > testIsrChangeNotificationGetters() PASSED

KafkaZkClientTest > testLogDirEventNotificationsDeletion() STARTED

KafkaZkClientTest > testLogDirEventNotificationsDeletion() PASSED

KafkaZkClientTest > testGetLogConfigs() STARTED

KafkaZkClientTest > testGetLogConfigs() PASSED

KafkaZkClientTest > testBrokerSequenceIdMethods() STARTED

KafkaZkClientTest > testBrokerSequenceIdMethods() PASSED

KafkaZkClientTest > testAclMethods() STARTED

KafkaZkClientTest > testAclMethods() PASSED

KafkaZkClientTest > testCreateSequentialPersistentPath() STARTED

KafkaZkClientTest > testCreateSequentialPersistentPath() PASSED

KafkaZkClientTest > testConditionalUpdatePath() STARTED

KafkaZkClientTest > testConditionalUpdatePath() PASSED

KafkaZkClientTest > testGetAllTopicsInClusterTriggersWatch() STARTED

KafkaZkClientTest > testGetAllTopicsInClusterTriggersWatch() PASSED

KafkaZkClientTest > testDeleteTopicZNode() STARTED

KafkaZkClientTest > testDeleteTopicZNode() PASSED

KafkaZkClientTest > testDeletePath() STARTED

KafkaZkClientTest > testDeletePath() PASSED

KafkaZkClientTest > testGetBrokerMethods() STARTED

KafkaZkClientTest > testGetBrokerMethods() PASSED

KafkaZkClientTest > testCreateTokenChangeNotification() STARTED

KafkaZkClientTest > testCreateTokenChangeNotification() PASSED

KafkaZkClientTest > testGetTopicsAndPartitions() STARTED

KafkaZkClientTest > testGetTopicsAndPartitions() PASSED

KafkaZkClientTest > testRegisterBrokerInfo() STARTED

KafkaZkClientTest > testRegisterBrokerInfo() PASSED

KafkaZkClientTest > testRetryRegisterBrokerInfo() STARTED

KafkaZkClientTest > testRetryRegisterBrokerInfo() PASSED

KafkaZkClientTest > testConsumerOffsetPath() STARTED

KafkaZkClientTest > testConsumerOffsetPath() PASSED

KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck() STARTED

KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck() PASSED

KafkaZkClientTest > testTopicAssignments() STARTED

KafkaZkClientTest > testTopicAssignments() PASSED

KafkaZkClientTest > testControllerManagementMethods() STARTED

KafkaZkClientTest > testControllerManagementMethods() PASSED

KafkaZkClientTest > testTopicAssignmentMethods() STARTED

KafkaZkClientTest > testTopicAssignmentMethods() PASSED

KafkaZkClientTest > testConnectionViaNettyClient() STARTED

KafkaZkClientTest > testConnectionViaNettyClient() PASSED

KafkaZkClientTest > testPropagateIsrChanges() STARTED

KafkaZkClientTest > testPropagateIsrChanges() PASSED

KafkaZkClientTest > testControllerEpochMethods() STARTED

KafkaZkClientTest > testControllerEpochMethods() PASSED

KafkaZkClientTest > testDeleteRecursive() STARTED

KafkaZkClientTest > testDeleteRecursive() PASSED

KafkaZkClientTest > testGetTopicPartitionStates() STARTED

KafkaZkClientTest > testGetTopicPartitionStates() PASSED

KafkaZkClientTest > testCreateConfigChangeNotification() STARTED

KafkaZkClientTest > testCreateConfigChangeNotification() PASSED

KafkaZkClientTest > testDelegationTokenMethods() STARTED

KafkaZkClientTest > testDelegationTokenMethods() PASSED

LiteralAclStoreTest > shouldHaveCorrectPaths() STARTED

LiteralAclStoreTest > shouldHaveCorrectPaths() PASSED

LiteralAclStoreTest > shouldRoundTripChangeNode() STARTED

LiteralAclStoreTest > shouldRoundTripChangeNode() PASSED

LiteralAclStoreTest > shouldThrowFromEncodeOnNoneLiteral() STARTED

LiteralAclStoreTest > shouldThrowFromEncodeOnNoneLiteral() PASSED

LiteralAclStoreTest > shouldWriteChangesToTheWritePath() STARTED

LiteralAclStoreTest > shouldWriteChangesToTheWritePath() PASSED

LiteralAclStoreTest > shouldHaveCorrectPatternType() STARTED

LiteralAclStoreTest > shouldHaveCorrectPatternType() PASSED

LiteralAclStoreTest > shouldDecodeResourceUsingTwoPartLogic() STARTED

Lit

[GitHub] [kafka-site] ableegoldman commented on pull request #345: Fix the formatting of example RocksDBConfigSetter

2021-04-05 Thread GitBox


ableegoldman commented on pull request #345:
URL: https://github.com/apache/kafka-site/pull/345#issuecomment-813747095


   Hey, thanks for the fix, but please revert the 28 folder. For one thing, we 
don't want the 2.8 docs to go live until 2.8 is actually released. For another, 
this is just a copy of the 2.7 docs; the 2.8 docs are generated and copied over 
from the kafka repo during the release process. Also, this may mess with the 
release script/release process, but I'm not sure about that. 
   
   I just left a comment on the ticket but I'll reiterate it here: I would 
recommend first submitting a PR to fix this in the [kafka 
repo](https://github.com/apache/kafka) rather than in kafka-site, due to the 
ongoing 2.8 release. If the PR against the kafka repo can be merged in the next 
day or so, then it will get picked up in the 2.8 release and you won't even 
need to open a PR against the kafka-site repo at all. If we don't get it in 
before the 2.8 release, then you'll have to do the fix again in a PR against 
the kafka-site repo, to fix it in the 28 folder that's generated during the 
2.8 release.






Build failed in Jenkins: Kafka » Kafka PR Builder » PR-10479 #2

2021-04-05 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 206153 lines...]
[2021-04-06T00:35:39.709Z] > Task :streams:srcJar
[2021-04-06T00:35:39.709Z] > Task :streams:processMessages UP-TO-DATE
[2021-04-06T00:35:39.709Z] > Task :streams:processTestResources UP-TO-DATE
[2021-04-06T00:35:39.709Z] > Task :clients:srcJar
[2021-04-06T00:35:39.709Z] > Task :clients:processMessages UP-TO-DATE
[2021-04-06T00:35:40.910Z] > Task :clients:compileJava UP-TO-DATE
[2021-04-06T00:35:40.910Z] > Task :clients:classes UP-TO-DATE
[2021-04-06T00:35:40.910Z] > Task :clients:jar UP-TO-DATE
[2021-04-06T00:35:40.910Z] > Task :clients:processTestMessages UP-TO-DATE
[2021-04-06T00:35:40.910Z] > Task :connect:api:compileJava UP-TO-DATE
[2021-04-06T00:35:40.910Z] > Task :connect:api:classes UP-TO-DATE
[2021-04-06T00:35:40.910Z] > Task :metadata:compileJava UP-TO-DATE
[2021-04-06T00:35:40.910Z] > Task :connect:json:compileJava UP-TO-DATE
[2021-04-06T00:35:40.910Z] > Task :connect:json:classes UP-TO-DATE
[2021-04-06T00:35:40.910Z] > Task :connect:json:javadoc SKIPPED
[2021-04-06T00:35:40.910Z] > Task :connect:json:javadocJar
[2021-04-06T00:35:40.910Z] > Task :raft:compileJava UP-TO-DATE
[2021-04-06T00:35:40.910Z] > Task :clients:compileTestJava UP-TO-DATE
[2021-04-06T00:35:40.910Z] > Task :clients:testClasses UP-TO-DATE
[2021-04-06T00:35:40.910Z] > Task :streams:compileJava UP-TO-DATE
[2021-04-06T00:35:40.910Z] > Task :streams:classes UP-TO-DATE
[2021-04-06T00:35:40.910Z] > Task :connect:json:compileTestJava UP-TO-DATE
[2021-04-06T00:35:40.910Z] > Task :connect:json:testClasses UP-TO-DATE
[2021-04-06T00:35:40.910Z] > Task :connect:json:testJar
[2021-04-06T00:35:40.910Z] > Task :connect:json:testSrcJar
[2021-04-06T00:35:40.910Z] > Task :streams:test-utils:compileJava UP-TO-DATE
[2021-04-06T00:35:40.910Z] > Task 
:clients:generateMetadataFileForMavenJavaPublication
[2021-04-06T00:35:43.142Z] > Task :core:compileJava
[2021-04-06T00:35:45.520Z] > Task :connect:api:javadoc
[2021-04-06T00:35:45.520Z] > Task :connect:api:copyDependantLibs UP-TO-DATE
[2021-04-06T00:35:45.520Z] > Task :connect:api:jar UP-TO-DATE
[2021-04-06T00:35:45.520Z] > Task 
:connect:api:generateMetadataFileForMavenJavaPublication
[2021-04-06T00:35:45.520Z] > Task :connect:json:copyDependantLibs UP-TO-DATE
[2021-04-06T00:35:45.520Z] > Task :connect:json:jar UP-TO-DATE
[2021-04-06T00:35:45.520Z] > Task 
:connect:json:generateMetadataFileForMavenJavaPublication
[2021-04-06T00:35:45.520Z] > Task :connect:api:javadocJar
[2021-04-06T00:35:45.520Z] > Task 
:connect:json:publishMavenJavaPublicationToMavenLocal
[2021-04-06T00:35:45.520Z] > Task :connect:json:publishToMavenLocal
[2021-04-06T00:35:45.520Z] > Task :connect:api:compileTestJava UP-TO-DATE
[2021-04-06T00:35:45.520Z] > Task :connect:api:testClasses UP-TO-DATE
[2021-04-06T00:35:45.520Z] > Task :connect:api:testJar
[2021-04-06T00:35:45.520Z] > Task :connect:api:testSrcJar
[2021-04-06T00:35:45.520Z] > Task 
:connect:api:publishMavenJavaPublicationToMavenLocal
[2021-04-06T00:35:45.520Z] > Task :connect:api:publishToMavenLocal
[2021-04-06T00:35:47.541Z] > Task :streams:javadoc
[2021-04-06T00:35:47.541Z] > Task :streams:copyDependantLibs
[2021-04-06T00:35:47.541Z] > Task :streams:jar UP-TO-DATE
[2021-04-06T00:35:47.541Z] > Task 
:streams:generateMetadataFileForMavenJavaPublication
[2021-04-06T00:35:48.550Z] > Task :streams:javadocJar
[2021-04-06T00:35:50.497Z] > Task :clients:javadoc
[2021-04-06T00:35:50.497Z] > Task :clients:javadocJar
[2021-04-06T00:35:51.686Z] > Task :clients:testJar
[2021-04-06T00:35:51.686Z] > Task :clients:testSrcJar
[2021-04-06T00:35:51.686Z] > Task 
:clients:publishMavenJavaPublicationToMavenLocal
[2021-04-06T00:35:51.686Z] > Task :clients:publishToMavenLocal
[2021-04-06T00:36:14.210Z] > Task :core:compileScala
[2021-04-06T00:37:16.205Z] > Task :core:classes
[2021-04-06T00:37:16.205Z] > Task :core:compileTestJava NO-SOURCE
[2021-04-06T00:37:45.118Z] > Task :core:compileTestScala
[2021-04-06T00:38:30.880Z] > Task :core:testClasses
[2021-04-06T00:39:00.659Z] > Task :streams:compileTestJava
[2021-04-06T00:40:01.029Z] > Task :streams:testClasses
[2021-04-06T00:40:01.029Z] > Task :streams:testJar
[2021-04-06T00:40:01.029Z] > Task :streams:testSrcJar
[2021-04-06T00:40:01.029Z] > Task 
:streams:publishMavenJavaPublicationToMavenLocal
[2021-04-06T00:40:01.029Z] > Task :streams:publishToMavenLocal
[2021-04-06T00:40:01.029Z] 
[2021-04-06T00:40:01.029Z] Deprecated Gradle features were used in this build, 
making it incompatible with Gradle 7.0.
[2021-04-06T00:40:01.029Z] Use '--warning-mode all' to show the individual 
deprecation warnings.
[2021-04-06T00:40:01.029Z] See 
https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings
[2021-04-06T00:40:01.029Z] 
[2021-04-06T00:40:01.029Z] BUILD SUCCESSFUL in 4m 30s
[2021-04-06T00:40:01.029Z] 68 actionable tasks: 38 executed, 30 up-t

[GitHub] [kafka-site] cc13ny opened a new pull request #345: Fix the formatting of example RocksDBConfigSetter

2021-04-05 Thread GitBox


cc13ny opened a new pull request #345:
URL: https://github.com/apache/kafka-site/pull/345


   https://issues.apache.org/jira/browse/KAFKA-12492
   
   1. Fix the formatting of the example RocksDBConfigSetter, which is broken by 
misaligned spaces within the `` tag. 
   2. Update it in the 27 folder so that the difference can be reviewed.
   3. Duplicate the whole updated 27 folder as the 28 folder so that the fix can 
be merged before the ongoing 2.8 release.
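For context, the example under discussion implements Kafka Streams' RocksDBConfigSetter. A minimal, self-contained sketch of its shape (the Options and RocksDBConfigSetter types below are simplified stand-ins for the real org.rocksdb.Options and org.apache.kafka.streams.state.RocksDBConfigSetter, so the snippet runs without the Kafka/RocksDB dependencies):

```java
import java.util.HashMap;
import java.util.Map;

public class RocksDBConfigSetterSketch {

    // Simplified stand-in for org.rocksdb.Options (the real class has many more knobs).
    static class Options {
        private long writeBufferSize;
        private int maxWriteBufferNumber;

        void setWriteBufferSize(long bytes) { this.writeBufferSize = bytes; }
        void setMaxWriteBufferNumber(int n) { this.maxWriteBufferNumber = n; }
        long writeBufferSize() { return writeBufferSize; }
        int maxWriteBufferNumber() { return maxWriteBufferNumber; }
    }

    // Simplified stand-in for the RocksDBConfigSetter interface: Streams
    // invokes setConfig once per state store before opening RocksDB.
    interface RocksDBConfigSetter {
        void setConfig(String storeName, Options options, Map<String, Object> configs);
    }

    // Shape of the documented example: tune RocksDB memtables per store.
    public static class CustomRocksDBConfig implements RocksDBConfigSetter {
        @Override
        public void setConfig(String storeName, Options options, Map<String, Object> configs) {
            options.setWriteBufferSize(16 * 1024 * 1024L); // 16 MiB per memtable
            options.setMaxWriteBufferNumber(2);            // at most two memtables per store
        }
    }

    public static void main(String[] args) {
        Options options = new Options();
        new CustomRocksDBConfig().setConfig("my-store", options, new HashMap<>());
        System.out.println(options.writeBufferSize() + " " + options.maxWriteBufferNumber());
        // prints: 16777216 2
    }
}
```

With the real library, the class is wired in via `props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, CustomRocksDBConfig.class)`.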






Re: [DISCUSS] KIP-718: Make KTable Join on Foreign key unopinionated

2021-04-05 Thread Guozhang Wang
Thanks Marco / John,

I think the arguments for not piggy-backing on the existing Materialized
make sense; on the other hand, if we go this route, should we just use a
separate Materialized rather than an extended /
narrower-scoped MaterializedSubscription, since it seems we want to allow
users to fully customize this store?

Guozhang

On Thu, Apr 1, 2021 at 5:28 PM John Roesler  wrote:

> Thanks Marco,
>
> Sorry if I caused any trouble!
>
> I don’t remember what I was thinking before, but reasoning about it now,
> you might need the fine-grained choice if:
>
> 1. The number or size of records in each partition of both tables is
> small(ish), but the cardinality of the join is very high. Then you might
> want an in-memory table store, but an on-disk subscription store.
>
> 2. The number or size of records is very large, but the join cardinality
> is low. Then you might need an on-disk table store, but an in-memory
> subscription store.
>
> 3. You might want a different kind (or differently configured) store for
> the subscription store, since it’s access pattern is so different.
>
> If you buy these, it might be good to put the justification into the KIP.
>
> I’m in favor of the default you’ve proposed.
>
> Thanks,
> John
>
> On Mon, Mar 29, 2021, at 04:24, Marco Aurélio Lotz wrote:
> > Hi Guozhang,
> >
> > Apologies for the late answer. Originally that was my proposal - to
> > piggyback on the provided materialisation method (
> > https://issues.apache.org/jira/browse/KAFKA-10383).
> > John Roesler suggested to us to provide even further fine tuning on API
> > level parameters. Maybe we could see this as two sides of the same coin:
> >
> > - On the current API, we change it to piggy back on the materialization
> > method provided to the join store.
> > - We extend the API to allow a user to fine tune different
> materialization
> > methods for subscription and join store.
> >
> > What do you think?
> >
> > Cheers,
> > Marco
> >
> > On Thu, Mar 4, 2021 at 8:04 PM Guozhang Wang  wrote:
> >
> > > Thanks Marco,
> > >
> > > Just a quick thought: what if we reuse the existing Materialized
> object for
> > > both subscription and join stores, instead of introducing a new param /
> > > class?
> > >
> > > Guozhang
> > >
> > > On Tue, Mar 2, 2021 at 1:07 AM Marco Aurélio Lotz <
> cont...@marcolotz.com>
> > > wrote:
> > >
> > > > Hi folks,
> > > >
> > > > I would like to invite everyone to discuss further KIP-718:
> > > >
> > > >
> > > >
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-718%3A+Make+KTable+Join+on+Foreign+key+unopinionated
> > > >
> > > > I welcome all feedback on it.
> > > >
> > > > Kind Regards,
> > > > Marco Lotz
> > > >
> > >
> > >
> > > --
> > > -- Guozhang
> > >
> >
>


-- 
-- Guozhang


Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #670

2021-04-05 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12548; Propagate record error messages to application (#10445)


--
[...truncated 7.37 MB...]

BrokerEndPointTest > testFromJsonV5() PASSED

PartitionLockTest > testNoLockContentionWithoutIsrUpdate() STARTED

PartitionLockTest > testNoLockContentionWithoutIsrUpdate() PASSED

PartitionLockTest > testAppendReplicaFetchWithUpdateIsr() STARTED

PartitionLockTest > testAppendReplicaFetchWithUpdateIsr() PASSED

PartitionLockTest > testAppendReplicaFetchWithSchedulerCheckForShrinkIsr() 
STARTED

PartitionLockTest > testAppendReplicaFetchWithSchedulerCheckForShrinkIsr() 
PASSED

PartitionLockTest > testGetReplicaWithUpdateAssignmentAndIsr() STARTED

PartitionLockTest > testGetReplicaWithUpdateAssignmentAndIsr() PASSED

JsonValueTest > testJsonObjectIterator() STARTED

JsonValueTest > testJsonObjectIterator() PASSED

JsonValueTest > testDecodeLong() STARTED

JsonValueTest > testDecodeLong() PASSED

JsonValueTest > testAsJsonObject() STARTED

JsonValueTest > testAsJsonObject() PASSED

JsonValueTest > testDecodeDouble() STARTED

JsonValueTest > testDecodeDouble() PASSED

JsonValueTest > testDecodeOption() STARTED

JsonValueTest > testDecodeOption() PASSED

JsonValueTest > testDecodeString() STARTED

JsonValueTest > testDecodeString() PASSED

JsonValueTest > testJsonValueToString() STARTED

JsonValueTest > testJsonValueToString() PASSED

JsonValueTest > testAsJsonObjectOption() STARTED

JsonValueTest > testAsJsonObjectOption() PASSED

JsonValueTest > testAsJsonArrayOption() STARTED

JsonValueTest > testAsJsonArrayOption() PASSED

JsonValueTest > testAsJsonArray() STARTED

JsonValueTest > testAsJsonArray() PASSED

JsonValueTest > testJsonValueHashCode() STARTED

JsonValueTest > testJsonValueHashCode() PASSED

JsonValueTest > testDecodeInt() STARTED

JsonValueTest > testDecodeInt() PASSED

JsonValueTest > testDecodeMap() STARTED

JsonValueTest > testDecodeMap() PASSED

JsonValueTest > testDecodeSeq() STARTED

JsonValueTest > testDecodeSeq() PASSED

JsonValueTest > testJsonObjectGet() STARTED

JsonValueTest > testJsonObjectGet() PASSED

JsonValueTest > testJsonValueEquals() STARTED

JsonValueTest > testJsonValueEquals() PASSED

JsonValueTest > testJsonArrayIterator() STARTED

JsonValueTest > testJsonArrayIterator() PASSED

JsonValueTest > testJsonObjectApply() STARTED

JsonValueTest > testJsonObjectApply() PASSED

JsonValueTest > testDecodeBoolean() STARTED

JsonValueTest > testDecodeBoolean() PASSED

PasswordEncoderTest > testEncoderConfigChange() STARTED

PasswordEncoderTest > testEncoderConfigChange() PASSED

PasswordEncoderTest > testEncodeDecodeAlgorithms() STARTED

PasswordEncoderTest > testEncodeDecodeAlgorithms() PASSED

PasswordEncoderTest > testEncodeDecode() STARTED

PasswordEncoderTest > testEncodeDecode() PASSED

ThrottlerTest > testThrottleDesiredRate() STARTED

ThrottlerTest > testThrottleDesiredRate() PASSED

LoggingTest > testLoggerLevelIsResolved() STARTED

LoggingTest > testLoggerLevelIsResolved() PASSED

LoggingTest > testLog4jControllerIsRegistered() STARTED

LoggingTest > testLog4jControllerIsRegistered() PASSED

LoggingTest > testTypeOfGetLoggers() STARTED

LoggingTest > testTypeOfGetLoggers() PASSED

LoggingTest > testLogName() STARTED

LoggingTest > testLogName() PASSED

LoggingTest > testLogNameOverride() STARTED

LoggingTest > testLogNameOverride() PASSED

TimerTest > testAlreadyExpiredTask() STARTED

TimerTest > testAlreadyExpiredTask() PASSED

TimerTest > testTaskExpiration() STARTED

TimerTest > testTaskExpiration() PASSED

ReplicationUtilsTest > testUpdateLeaderAndIsr() STARTED

ReplicationUtilsTest > testUpdateLeaderAndIsr() PASSED

TopicFilterTest > testIncludeLists() STARTED

TopicFilterTest > testIncludeLists() PASSED

RaftManagerTest > testShutdownIoThread() STARTED

RaftManagerTest > testShutdownIoThread() PASSED

RaftManagerTest > testUncaughtExceptionInIoThread() STARTED

RaftManagerTest > testUncaughtExceptionInIoThread() PASSED

RequestChannelTest > testNonAlterRequestsNotTransformed() STARTED

RequestChannelTest > testNonAlterRequestsNotTransformed() PASSED

RequestChannelTest > testAlterRequests() STARTED

RequestChannelTest > testAlterRequests() PASSED

RequestChannelTest > testJsonRequests() STARTED

RequestChannelTest > testJsonRequests() PASSED

RequestChannelTest > testIncrementalAlterRequests() STARTED

RequestChannelTest > testIncrementalAlterRequests() PASSED

ControllerContextTest > 
testPartitionFullReplicaAssignmentReturnsEmptyAssignmentIfTopicOrPartitionDoesNotExist()
 STARTED

ControllerContextTest > 
testPartitionFullReplicaAssignmentReturnsEmptyAssignmentIfTopicOrPartitionDoesNotExist()
 PASSED

ControllerContextTest > 
testPartitionReplicaAssignmentForTopicReturnsEmptyMapIfTopicDoesNotExist() 
STARTED

ControllerContextTest > 
testPartitionReplicaAssignmentForTopicReturnsEmptyMapIfTopicDoesNotExist

Re: Known issues with time stepping

2021-04-05 Thread Adam Bellemare
Did you check the JIRAs?

On Mon, Apr 5, 2021 at 6:58 PM Tirtha Chatterjee <
tirtha.p.chatter...@gmail.com> wrote:

> Hi team
>
> Are there any known issues with Kafka that can happen because of the system
> time getting stepped forward or backward in one shot? This can happen
> because of NTP time syncs.
>
> --
> Regards
> Tirtha Chatterjee
>


Known issues with time stepping

2021-04-05 Thread Tirtha Chatterjee
Hi team

Are there any known issues with Kafka that can happen because of the system
time getting stepped forward or backward in one shot? This can happen
because of NTP time syncs.

-- 
Regards
Tirtha Chatterjee


[jira] [Resolved] (KAFKA-12294) Consider using the forwarding mechanism for metadata auto topic creation

2021-04-05 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-12294.
-
Fix Version/s: 3.0.0
   Resolution: Fixed

> Consider using the forwarding mechanism for metadata auto topic creation
> 
>
> Key: KAFKA-12294
> URL: https://issues.apache.org/jira/browse/KAFKA-12294
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Boyang Chen
>Priority: Major
> Fix For: 3.0.0
>
>
> Once [https://github.com/apache/kafka/pull/9579] is merged, there is a way to 
> improve the topic creation auditing by forwarding the CreateTopicsRequest 
> inside Envelope for the given client. Details in 
> [here|https://github.com/apache/kafka/pull/9579#issuecomment-772283780]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8551) Comments for connectors() in Herder interface

2021-04-05 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-8551.
--
Resolution: Won't Fix

Marking as won't fix, since the details are insufficient to address the issue.

> Comments for connectors() in Herder interface 
> --
>
> Key: KAFKA-8551
> URL: https://issues.apache.org/jira/browse/KAFKA-8551
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.2.1
>Reporter: Luying Liu
>Priority: Major
>
> There are mistakes in the comments for connectors() in the Herder interface. The 
> mistakes are in the file 
> connect/runtime/src/main/java/org/apache/kafka/connect/runtime/Herder.java.





[jira] [Resolved] (KAFKA-8664) non-JSON format messages when streaming data from Kafka to Mongo

2021-04-05 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-8664.
--
Resolution: Won't Fix

The reported problem is for a connector implementation that is not owned by the 
Apache Kafka project. Please report the issue to the provider of the 
connector.

> non-JSON format messages when streaming data from Kafka to Mongo
> 
>
> Key: KAFKA-8664
> URL: https://issues.apache.org/jira/browse/KAFKA-8664
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.2.0
>Reporter: Vu Le
>Priority: Major
> Attachments: MongoSinkConnector.properties, 
> log_error_when_stream_data_not_a_json_format.txt
>
>
> Hi team,
> I can stream data from Kafka to MongoDB with JSON messages. I use the MongoDB 
> Kafka Connector 
> ([https://github.com/mongodb/mongo-kafka/blob/master/docs/install.md]).
> However, if I send a message that is not in JSON format, the connector dies. 
> Please see the log file for details.
> My config file:
> {code:java}
> name=mongo-sink
> topics=test
> connector.class=com.mongodb.kafka.connect.MongoSinkConnector
> tasks.max=1
> key.ignore=true
> # Specific global MongoDB Sink Connector configuration
> connection.uri=mongodb://localhost:27017
> database=test_kafka
> collection=transaction
> max.num.retries=3
> retries.defer.timeout=5000
> type.name=kafka-connect
> key.converter=org.apache.kafka.connect.json.JsonConverter
> key.converter.schemas.enable=false
> value.converter=org.apache.kafka.connect.json.JsonConverter
> value.converter.schemas.enable=false
> {code}
> I have 2 separate questions:  
>  # how to ignore messages that are not in JSON format?
>  # how to define a default key for this kind of message (for example: abc -> 
> \{ "non-json": "abc" } )
> Thanks
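For question 1, worth noting: since Kafka 2.0 (KIP-298), the Connect framework itself can tolerate or dead-letter records that fail conversion, independent of the MongoDB connector. A sketch of the additional sink-connector properties (the DLQ topic name here is illustrative):

```properties
# Keep the task alive when a record fails conversion or transformation
errors.tolerance=all
# Route failing raw records to a dead letter queue topic instead of failing the task
errors.deadletterqueue.topic.name=dlq-transaction
errors.deadletterqueue.context.headers.enable=true
# Log failure context for debugging
errors.log.enable=true
errors.log.include.messages=true
```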





[jira] [Resolved] (KAFKA-8867) Kafka Connect JDBC fails to create PostgreSQL table with default boolean value in schema

2021-04-05 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-8867.
--
Resolution: Won't Fix

The reported problem is for the Confluent JDBC source/sink connector, and 
should be reported via that connector's GitHub repository issues.

> Kafka Connect JDBC fails to create PostgreSQL table with default boolean 
> value in schema
> 
>
> Key: KAFKA-8867
> URL: https://issues.apache.org/jira/browse/KAFKA-8867
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.3.0
>Reporter: Tudor
>Priority: Major
>
> The `CREATE TABLE ..` statement generated for JDBC sink connectors when 
> configured with `auto.create: true` generates field declarations that do not 
> conform to allowed PostgreSQL syntax when considering fields of type boolean 
> with default values.
> Example record value Avro schema:
> {code:java}
> {
>   "namespace": "com.test.avro.schema.v1",
>   "type": "record",
>   "name": "SomeEvent",
>   "fields": [
> {
>   "name": "boolean_field",
>   "type": "boolean",
>   "default": false
> }
>   ]
> }
> {code}
> The connector task fails with:  
> {code:java}
> ERROR WorkerSinkTask{id=test-events-sink-0} RetriableException from SinkTask: 
> (org.apache.kafka.connect.runtime.WorkerSinkTask:551)
> org.apache.kafka.connect.errors.RetriableException: java.sql.SQLException: 
> org.postgresql.util.PSQLException: ERROR: column "boolean_field" is of type 
> boolean but default expression is of type integer
>   Hint: You will need to rewrite or cast the expression.
>   at io.confluent.connect.jdbc.sink.JdbcSinkTask.put(JdbcSinkTask.java:93)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:538)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:321)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:224)
>   at 
> org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:192)
>   at 
> org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
>   at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748){code}
>  
> The generated SQL statement is: 
> {code:java}
> CREATE TABLE "test_data" ("boolean_field" BOOLEAN DEFAULT 0){code}
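For reference, PostgreSQL expects a boolean literal rather than an integer in the default clause; a hand-corrected version of the generated statement would be:

```sql
CREATE TABLE "test_data" ("boolean_field" BOOLEAN DEFAULT FALSE);
```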





[jira] [Resolved] (KAFKA-8961) Unable to create secure JDBC connection through Kafka Connect

2021-04-05 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-8961.
--
Resolution: Won't Fix

This is not a problem of the Connect framework, and is instead an issue with 
the connector implementation – or more likely the _installation_ of the 
connector in the user's environment.

> Unable to create secure JDBC connection through Kafka Connect
> -
>
> Key: KAFKA-8961
> URL: https://issues.apache.org/jira/browse/KAFKA-8961
> Project: Kafka
>  Issue Type: Bug
>  Components: build, clients, KafkaConnect, network
>Affects Versions: 2.2.1
>Reporter: Monika Bainsala
>Priority: Major
>
> As per the article below, to enable a secure JDBC connection we can use an 
> updated URL parameter when calling the create-connector REST API.
> Example:
> jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(LOAD_BALANCE=YES)(FAILOVER=YES)(ADDRESS=(PROTOCOL=tcp)(HOST=X)(PORT=1520)))(CONNECT_DATA=(SERVICE_NAME=XXAP)));EncryptionLevel=requested;EncryptionTypes=RC4_256;DataIntegrityLevel=requested;DataIntegrityTypes=MD5"
>  
> But this approach is currently not working; kindly help in resolving this 
> issue.
>  
> Reference :
> [https://docs.confluent.io/current/connect/kafka-connect-jdbc/source-connector/source_config_options.html]
>  





[jira] [Resolved] (KAFKA-10715) Support Kafka connect converter for AVRO

2021-04-05 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch resolved KAFKA-10715.
---
Resolution: Won't Do

> Support Kafka connect converter for AVRO
> 
>
> Key: KAFKA-10715
> URL: https://issues.apache.org/jira/browse/KAFKA-10715
> Project: Kafka
>  Issue Type: New Feature
>  Components: KafkaConnect
>Reporter: Ravindranath Kakarla
>Priority: Minor
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> I want to add support for an Avro data format converter to Kafka Connect. Right 
> now, Kafka Connect supports a [JSON 
> converter|https://github.com/apache/kafka/tree/trunk/connect]. Since Avro 
> is a commonly used data format with Kafka, it would be great to have support 
> for it. 
>  
> Confluent Schema Registry libraries have 
> [support|https://github.com/confluentinc/schema-registry/blob/master/avro-converter/src/main/java/io/confluent/connect/avro/AvroConverter.java]
>  for it. The code seems to be pretty generic and can be used directly with 
> Kafka connect without schema registry. They are also licensed under Apache 
> 2.0.
>  
> Can they be copied to this repository and made available for all users of 
> Kafka Connect?





[jira] [Created] (KAFKA-12619) Ensure LeaderChange message is committed before initializing high watermark

2021-04-05 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-12619:
---

 Summary: Ensure LeaderChange message is committed before 
initializing high watermark
 Key: KAFKA-12619
 URL: https://issues.apache.org/jira/browse/KAFKA-12619
 Project: Kafka
  Issue Type: Bug
Reporter: Jason Gustafson
Assignee: Jason Gustafson


KIP-595 describes an extra condition on commitment here: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-595%3A+A+Raft+Protocol+for+the+Metadata+Quorum#KIP595:ARaftProtocolfortheMetadataQuorum-Fetch.
 In order to ensure that the leader's committed entries cannot get lost, it 
must commit one record from its own epoch. This guarantees that its latest 
entry is larger (in terms of epoch/offset) than any previously written record 
which ensures that any future leader must also include it. This is the purpose 
of the LeaderChange record which is written to the log as soon as the leader 
gets elected.

We have this check implemented here: 
https://github.com/apache/kafka/blob/trunk/raft/src/main/java/org/apache/kafka/raft/LeaderState.java#L122.
 However, the check needs to be a strict inequality since the epoch start 
offset does not reflect the LeaderChange record itself. In other words, the 
check is off by one.
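A minimal model of the off-by-one (the method and variable names here are illustrative, not the actual LeaderState code): the LeaderChange record is written at the epoch start offset, so a record from the current epoch is only known to be committed once the replicated end offset moves strictly past that start offset.

```java
public class EpochCommitCheck {

    // Illustrative model of the KIP-595 commitment condition: the leader may
    // only initialize/advance the high watermark after a record from its own
    // epoch is replicated. The LeaderChange record sits at epochStartOffset,
    // so the check must be a strict inequality; using >= would accept an
    // offset that does not yet cover the LeaderChange record itself.
    static boolean currentEpochRecordCommitted(long replicatedEndOffset, long epochStartOffset) {
        return replicatedEndOffset > epochStartOffset;
    }

    public static void main(String[] args) {
        // End offset equal to the epoch start: LeaderChange not yet covered.
        System.out.println(currentEpochRecordCommitted(10, 10)); // prints: false
        // One past the epoch start: LeaderChange is replicated.
        System.out.println(currentEpochRecordCommitted(11, 10)); // prints: true
    }
}
```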





[jira] [Created] (KAFKA-12618) Convert LogManager (and other EasyMocks) in ReplicaManagerTest to Mockito

2021-04-05 Thread Justine Olshan (Jira)
Justine Olshan created KAFKA-12618:
--

 Summary: Convert LogManager (and other EasyMocks) in 
ReplicaManagerTest to Mockito
 Key: KAFKA-12618
 URL: https://issues.apache.org/jira/browse/KAFKA-12618
 Project: Kafka
  Issue Type: Task
Reporter: Justine Olshan


[This 
commit|https://github.com/apache/kafka/commit/40f001cc537d6ff2efa71e609c2f84c6b934994d]
 introduced changes that have Partition calling getLog when there is no topic 
ID associated with the Partition. In this case, getLog will use a default 
argument. EasyMock (a Java framework) does not play well with Scala's default 
arguments. For now, we are manually creating a partition and associating it in 
the initializeLogAndTopicId method. A better long-term solution is to use 
Mockito, which supports default arguments better.

It would be good to convert all EasyMocks over to Mockito as well. 





Question on kafka connect's incremental rebalance algorithm

2021-04-05 Thread Thouqueer Ahmed
Hi,
  What would happen when a new worker joins after the synchronization 
barrier?

Per the code, in the performTaskAssignment function of IncrementalAssignor, the 
boolean canRevoke is false when it is called during the 2nd rebalance. Hence 
revocation is skipped and only assignment is done.
This would lead to an imbalance in #tasks/#connectors.

How is this case handled ?

Thanks,
Ahmed



Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #669

2021-04-05 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12614: Use Jenkinsfile for trunk and release branch builds 
(#10473)

[github] KAFKA-12539; Refactor KafkaRaftCllient handleVoteRequest to reduce 
cyclomatic complexity (#10393)


--
[...truncated 3.73 MB...]

KafkaZkClientTest > testPropagateLogDir() STARTED

KafkaZkClientTest > testPropagateLogDir() PASSED

KafkaZkClientTest > testGetDataAndStat() STARTED

KafkaZkClientTest > testGetDataAndStat() PASSED

KafkaZkClientTest > testReassignPartitionsInProgress() STARTED

KafkaZkClientTest > testReassignPartitionsInProgress() PASSED

KafkaZkClientTest > testCreateTopLevelPaths() STARTED

KafkaZkClientTest > testCreateTopLevelPaths() PASSED

KafkaZkClientTest > testGetAllTopicsInClusterDoesNotTriggerWatch() STARTED

KafkaZkClientTest > testGetAllTopicsInClusterDoesNotTriggerWatch() PASSED

KafkaZkClientTest > testIsrChangeNotificationGetters() STARTED

KafkaZkClientTest > testIsrChangeNotificationGetters() PASSED

KafkaZkClientTest > testLogDirEventNotificationsDeletion() STARTED

KafkaZkClientTest > testLogDirEventNotificationsDeletion() PASSED

KafkaZkClientTest > testGetLogConfigs() STARTED

KafkaZkClientTest > testGetLogConfigs() PASSED

KafkaZkClientTest > testBrokerSequenceIdMethods() STARTED

KafkaZkClientTest > testBrokerSequenceIdMethods() PASSED

KafkaZkClientTest > testAclMethods() STARTED

KafkaZkClientTest > testAclMethods() PASSED

KafkaZkClientTest > testCreateSequentialPersistentPath() STARTED

KafkaZkClientTest > testCreateSequentialPersistentPath() PASSED

KafkaZkClientTest > testConditionalUpdatePath() STARTED

KafkaZkClientTest > testConditionalUpdatePath() PASSED

KafkaZkClientTest > testGetAllTopicsInClusterTriggersWatch() STARTED

KafkaZkClientTest > testGetAllTopicsInClusterTriggersWatch() PASSED

KafkaZkClientTest > testDeleteTopicZNode() STARTED

KafkaZkClientTest > testDeleteTopicZNode() PASSED

KafkaZkClientTest > testDeletePath() STARTED

KafkaZkClientTest > testDeletePath() PASSED

KafkaZkClientTest > testGetBrokerMethods() STARTED

KafkaZkClientTest > testGetBrokerMethods() PASSED

KafkaZkClientTest > testCreateTokenChangeNotification() STARTED

KafkaZkClientTest > testCreateTokenChangeNotification() PASSED

KafkaZkClientTest > testGetTopicsAndPartitions() STARTED

KafkaZkClientTest > testGetTopicsAndPartitions() PASSED

KafkaZkClientTest > testRegisterBrokerInfo() STARTED

KafkaZkClientTest > testRegisterBrokerInfo() PASSED

KafkaZkClientTest > testRetryRegisterBrokerInfo() STARTED

KafkaZkClientTest > testRetryRegisterBrokerInfo() PASSED

KafkaZkClientTest > testConsumerOffsetPath() STARTED

KafkaZkClientTest > testConsumerOffsetPath() PASSED

KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck() STARTED

KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck() PASSED

KafkaZkClientTest > testTopicAssignments() STARTED

KafkaZkClientTest > testTopicAssignments() PASSED

KafkaZkClientTest > testControllerManagementMethods() STARTED

KafkaZkClientTest > testControllerManagementMethods() PASSED

KafkaZkClientTest > testTopicAssignmentMethods() STARTED

KafkaZkClientTest > testTopicAssignmentMethods() PASSED

KafkaZkClientTest > testConnectionViaNettyClient() STARTED

KafkaZkClientTest > testConnectionViaNettyClient() PASSED

KafkaZkClientTest > testPropagateIsrChanges() STARTED

KafkaZkClientTest > testPropagateIsrChanges() PASSED

KafkaZkClientTest > testControllerEpochMethods() STARTED

KafkaZkClientTest > testControllerEpochMethods() PASSED

KafkaZkClientTest > testDeleteRecursive() STARTED

KafkaZkClientTest > testDeleteRecursive() PASSED

KafkaZkClientTest > testGetTopicPartitionStates() STARTED

KafkaZkClientTest > testGetTopicPartitionStates() PASSED

KafkaZkClientTest > testCreateConfigChangeNotification() STARTED

KafkaZkClientTest > testCreateConfigChangeNotification() PASSED

KafkaZkClientTest > testDelegationTokenMethods() STARTED

KafkaZkClientTest > testDelegationTokenMethods() PASSED

LiteralAclStoreTest > shouldHaveCorrectPaths() STARTED

LiteralAclStoreTest > shouldHaveCorrectPaths() PASSED

LiteralAclStoreTest > shouldRoundTripChangeNode() STARTED

LiteralAclStoreTest > shouldRoundTripChangeNode() PASSED

LiteralAclStoreTest > shouldThrowFromEncodeOnNoneLiteral() STARTED

LiteralAclStoreTest > shouldThrowFromEncodeOnNoneLiteral() PASSED

LiteralAclStoreTest > shouldWriteChangesToTheWritePath() STARTED

LiteralAclStoreTest > shouldWriteChangesToTheWritePath() PASSED

LiteralAclStoreTest > shouldHaveCorrectPatternType() STARTED

LiteralAclStoreTest > shouldHaveCorrectPatternType() PASSED

LiteralAclStoreTest > shouldDecodeResourceUsingTwoPartLogic() STARTED

LiteralAclStoreTest > shouldDecodeResourceUsingTwoPartLogic() PASSED

ReassignPartitionsZNodeTest > testDecodeInvalidJson() STARTED

ReassignPartitionsZNodeTest > testDecodeInvalidJson() PASSED

ReassignPartitio

[jira] [Resolved] (KAFKA-12548) Invalid record error message is not getting sent to application

2021-04-05 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-12548.
-
Resolution: Fixed

> Invalid record error message is not getting sent to application
> ---
>
> Key: KAFKA-12548
> URL: https://issues.apache.org/jira/browse/KAFKA-12548
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Major
> Fix For: 3.0.0
>
>
> The ProduceResponse includes a nice record error message when we return 
> INVALID_RECORD_ERROR. Sadly this is getting discarded by the producer, so the 
> user never gets a chance to see it.





contributor permission

2021-04-05 Thread Amuthan
Hi

I would like to contribute to
https://issues.apache.org/jira/browse/KAFKA-12559. Could you please grant me
contributor permission? My Jira ID is *simplyamuthan*.

Regards
Amuthan.


Build failed in Jenkins: Kafka » kafka-2.8-jdk8 #91

2021-04-05 Thread Apache Jenkins Server
See 


Changes:

[Ismael Juma] KAFKA-10759 Add ARM build stage (#9992)

[Ismael Juma] KAFKA-12614: Use Jenkinsfile for trunk and release branch builds 
(#10473)


--
[...truncated 3.63 MB...]
LinuxIoMetricsCollectorTest > testReadProcFile() STARTED

LinuxIoMetricsCollectorTest > testReadProcFile() PASSED

LinuxIoMetricsCollectorTest > testUnableToReadNonexistentProcFile() STARTED

LinuxIoMetricsCollectorTest > testUnableToReadNonexistentProcFile() PASSED

AssignmentStateTest > [1] isr=List(101, 102, 103), replicas=List(101, 102, 
103), adding=List(), removing=List(), original=List(), isUnderReplicated=false 
STARTED

AssignmentStateTest > [1] isr=List(101, 102, 103), replicas=List(101, 102, 
103), adding=List(), removing=List(), original=List(), isUnderReplicated=false 
PASSED

AssignmentStateTest > [2] isr=List(101, 102), replicas=List(101, 102, 103), 
adding=List(), removing=List(), original=List(), isUnderReplicated=true STARTED

AssignmentStateTest > [2] isr=List(101, 102), replicas=List(101, 102, 103), 
adding=List(), removing=List(), original=List(), isUnderReplicated=true PASSED

AssignmentStateTest > [3] isr=List(101, 102, 103), replicas=List(101, 102, 
103), adding=List(104, 105), removing=List(102), original=List(101, 102, 103), 
isUnderReplicated=false STARTED

AssignmentStateTest > [3] isr=List(101, 102, 103), replicas=List(101, 102, 
103), adding=List(104, 105), removing=List(102), original=List(101, 102, 103), 
isUnderReplicated=false PASSED

AssignmentStateTest > [4] isr=List(101, 102, 103), replicas=List(101, 102, 
103), adding=List(104, 105), removing=List(), original=List(101, 102, 103), 
isUnderReplicated=false STARTED

AssignmentStateTest > [4] isr=List(101, 102, 103), replicas=List(101, 102, 
103), adding=List(104, 105), removing=List(), original=List(101, 102, 103), 
isUnderReplicated=false PASSED

AssignmentStateTest > [5] isr=List(101, 102, 103), replicas=List(101, 102, 
103), adding=List(), removing=List(102), original=List(101, 102, 103), 
isUnderReplicated=false STARTED

AssignmentStateTest > [5] isr=List(101, 102, 103), replicas=List(101, 102, 
103), adding=List(), removing=List(102), original=List(101, 102, 103), 
isUnderReplicated=false PASSED

AssignmentStateTest > [6] isr=List(102, 103), replicas=List(102, 103), 
adding=List(101), removing=List(), original=List(102, 103), 
isUnderReplicated=false STARTED

AssignmentStateTest > [6] isr=List(102, 103), replicas=List(102, 103), 
adding=List(101), removing=List(), original=List(102, 103), 
isUnderReplicated=false PASSED

AssignmentStateTest > [7] isr=List(103, 104, 105), replicas=List(101, 102, 
103), adding=List(104, 105, 106), removing=List(), original=List(101, 102, 
103), isUnderReplicated=false STARTED

AssignmentStateTest > [7] isr=List(103, 104, 105), replicas=List(101, 102, 
103), adding=List(104, 105, 106), removing=List(), original=List(101, 102, 
103), isUnderReplicated=false PASSED

AssignmentStateTest > [8] isr=List(103, 104, 105), replicas=List(101, 102, 
103), adding=List(104, 105, 106), removing=List(), original=List(101, 102, 
103), isUnderReplicated=false STARTED

AssignmentStateTest > [8] isr=List(103, 104, 105), replicas=List(101, 102, 
103), adding=List(104, 105, 106), removing=List(), original=List(101, 102, 
103), isUnderReplicated=false PASSED

AssignmentStateTest > [9] isr=List(103, 104), replicas=List(101, 102, 103), 
adding=List(104, 105, 106), removing=List(), original=List(101, 102, 103), 
isUnderReplicated=true STARTED

AssignmentStateTest > [9] isr=List(103, 104), replicas=List(101, 102, 103), 
adding=List(104, 105, 106), removing=List(), original=List(101, 102, 103), 
isUnderReplicated=true PASSED

PartitionTest > testMakeLeaderDoesNotUpdateEpochCacheForOldFormats() STARTED

PartitionTest > testMakeLeaderDoesNotUpdateEpochCacheForOldFormats() PASSED

PartitionTest > testIsrExpansion() STARTED

PartitionTest > testIsrExpansion() PASSED

PartitionTest > testReadRecordEpochValidationForLeader() STARTED

PartitionTest > testReadRecordEpochValidationForLeader() PASSED

PartitionTest > testAlterIsrUnknownTopic() STARTED

PartitionTest > testAlterIsrUnknownTopic() PASSED

PartitionTest > testIsrNotShrunkIfUpdateFails() STARTED

PartitionTest > testIsrNotShrunkIfUpdateFails() PASSED

PartitionTest > testFetchOffsetForTimestampEpochValidationForFollower() STARTED

PartitionTest > testFetchOffsetForTimestampEpochValidationForFollower() PASSED

PartitionTest > testIsrNotExpandedIfUpdateFails() STARTED

PartitionTest > testIsrNotExpandedIfUpdateFails() PASSED

PartitionTest > testLogConfigDirtyAsBrokerUpdated() STARTED

PartitionTest > testLogConfigDirtyAsBrokerUpdated() PASSED

PartitionTest > testAddAndRemoveMetrics() STARTED

PartitionTest > testAddAndRemoveMetrics() PASSED

PartitionTest > testListOffsetIsolationLevels() STARTED

PartitionTest > testListOffsetIsolationLevels() PASSED

[GitHub] [kafka-site] hachikuji merged pull request #344: Add Grab (https://www.grab.com/) to the list of the "Powered By ❤️"

2021-04-05 Thread GitBox


hachikuji merged pull request #344:
URL: https://github.com/apache/kafka-site/pull/344


   






Build failed in Jenkins: Kafka » kafka-trunk-jdk11 #668

2021-04-05 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-12615: Fix `Selector.clear()` javadoc typo (#10477)


--
[...truncated 3.71 MB...]
KafkaZkClientTest > testUpdateBrokerInfo() STARTED

KafkaZkClientTest > testUpdateBrokerInfo() PASSED

KafkaZkClientTest > testCreateRecursive() STARTED

KafkaZkClientTest > testCreateRecursive() PASSED

KafkaZkClientTest > testGetConsumerOffsetNoData() STARTED

KafkaZkClientTest > testGetConsumerOffsetNoData() PASSED

KafkaZkClientTest > testDeleteTopicPathMethods() STARTED

KafkaZkClientTest > testDeleteTopicPathMethods() PASSED

KafkaZkClientTest > testSetTopicPartitionStatesRaw() STARTED

KafkaZkClientTest > testSetTopicPartitionStatesRaw() PASSED

KafkaZkClientTest > testAclManagementMethods() STARTED

KafkaZkClientTest > testAclManagementMethods() PASSED

KafkaZkClientTest > testPreferredReplicaElectionMethods() STARTED

KafkaZkClientTest > testPreferredReplicaElectionMethods() PASSED

KafkaZkClientTest > testPropagateLogDir() STARTED

KafkaZkClientTest > testPropagateLogDir() PASSED

KafkaZkClientTest > testGetDataAndStat() STARTED

KafkaZkClientTest > testGetDataAndStat() PASSED

KafkaZkClientTest > testReassignPartitionsInProgress() STARTED

KafkaZkClientTest > testReassignPartitionsInProgress() PASSED

KafkaZkClientTest > testCreateTopLevelPaths() STARTED

KafkaZkClientTest > testCreateTopLevelPaths() PASSED

KafkaZkClientTest > testGetAllTopicsInClusterDoesNotTriggerWatch() STARTED

KafkaZkClientTest > testGetAllTopicsInClusterDoesNotTriggerWatch() PASSED

KafkaZkClientTest > testIsrChangeNotificationGetters() STARTED

KafkaZkClientTest > testIsrChangeNotificationGetters() PASSED

KafkaZkClientTest > testLogDirEventNotificationsDeletion() STARTED

KafkaZkClientTest > testLogDirEventNotificationsDeletion() PASSED

KafkaZkClientTest > testGetLogConfigs() STARTED

KafkaZkClientTest > testGetLogConfigs() PASSED

KafkaZkClientTest > testBrokerSequenceIdMethods() STARTED

KafkaZkClientTest > testBrokerSequenceIdMethods() PASSED

KafkaZkClientTest > testAclMethods() STARTED

KafkaZkClientTest > testAclMethods() PASSED

KafkaZkClientTest > testCreateSequentialPersistentPath() STARTED

KafkaZkClientTest > testCreateSequentialPersistentPath() PASSED

KafkaZkClientTest > testConditionalUpdatePath() STARTED

KafkaZkClientTest > testConditionalUpdatePath() PASSED

KafkaZkClientTest > testGetAllTopicsInClusterTriggersWatch() STARTED

KafkaZkClientTest > testGetAllTopicsInClusterTriggersWatch() PASSED

KafkaZkClientTest > testDeleteTopicZNode() STARTED

KafkaZkClientTest > testDeleteTopicZNode() PASSED

KafkaZkClientTest > testDeletePath() STARTED

KafkaZkClientTest > testDeletePath() PASSED

KafkaZkClientTest > testGetBrokerMethods() STARTED

KafkaZkClientTest > testGetBrokerMethods() PASSED

KafkaZkClientTest > testCreateTokenChangeNotification() STARTED

KafkaZkClientTest > testCreateTokenChangeNotification() PASSED

KafkaZkClientTest > testGetTopicsAndPartitions() STARTED

KafkaZkClientTest > testGetTopicsAndPartitions() PASSED

KafkaZkClientTest > testRegisterBrokerInfo() STARTED

KafkaZkClientTest > testRegisterBrokerInfo() PASSED

KafkaZkClientTest > testRetryRegisterBrokerInfo() STARTED

KafkaZkClientTest > testRetryRegisterBrokerInfo() PASSED

KafkaZkClientTest > testConsumerOffsetPath() STARTED

KafkaZkClientTest > testConsumerOffsetPath() PASSED

KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck() STARTED

KafkaZkClientTest > testDeleteRecursiveWithControllerEpochVersionCheck() PASSED

KafkaZkClientTest > testTopicAssignments() STARTED

KafkaZkClientTest > testTopicAssignments() PASSED

KafkaZkClientTest > testControllerManagementMethods() STARTED

KafkaZkClientTest > testControllerManagementMethods() PASSED

KafkaZkClientTest > testTopicAssignmentMethods() STARTED

KafkaZkClientTest > testTopicAssignmentMethods() PASSED

KafkaZkClientTest > testConnectionViaNettyClient() STARTED

KafkaZkClientTest > testConnectionViaNettyClient() PASSED

KafkaZkClientTest > testPropagateIsrChanges() STARTED

KafkaZkClientTest > testPropagateIsrChanges() PASSED

KafkaZkClientTest > testControllerEpochMethods() STARTED

KafkaZkClientTest > testControllerEpochMethods() PASSED

KafkaZkClientTest > testDeleteRecursive() STARTED

KafkaZkClientTest > testDeleteRecursive() PASSED

KafkaZkClientTest > testGetTopicPartitionStates() STARTED

KafkaZkClientTest > testGetTopicPartitionStates() PASSED

KafkaZkClientTest > testCreateConfigChangeNotification() STARTED

KafkaZkClientTest > testCreateConfigChangeNotification() PASSED

KafkaZkClientTest > testDelegationTokenMethods() STARTED

KafkaZkClientTest > testDelegationTokenMethods() PASSED

LiteralAclStoreTest > shouldHaveCorrectPaths() STARTED

LiteralAclStoreTest > shouldHaveCorrectPaths() PASSED

LiteralAclStoreTest > shouldRoundTripChangeNode() STARTED

LiteralAclStoreTest > shouldRoundTripChangeNode() PASSED

[jira] [Created] (KAFKA-12617) Convert MetadataRequestTest to use ClusterTest

2021-04-05 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-12617:
---

 Summary: Convert MetadataRequestTest to use ClusterTest
 Key: KAFKA-12617
 URL: https://issues.apache.org/jira/browse/KAFKA-12617
 Project: Kafka
  Issue Type: Sub-task
Reporter: Jason Gustafson
Assignee: Jason Gustafson








[jira] [Created] (KAFKA-12616) Convert integration tests to use ClusterTest

2021-04-05 Thread Jason Gustafson (Jira)
Jason Gustafson created KAFKA-12616:
---

 Summary: Convert integration tests to use ClusterTest 
 Key: KAFKA-12616
 URL: https://issues.apache.org/jira/browse/KAFKA-12616
 Project: Kafka
  Issue Type: Improvement
Reporter: Jason Gustafson


We would like to convert integration tests to use the new ClusterTest 
annotations so that we can easily test both the Zk and KRaft implementations. 
This will require adding a bunch of support to the ClusterTest framework as we 
go along.





[jira] [Resolved] (KAFKA-12604) Remove envelope handling from broker

2021-04-05 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12604?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-12604.
-
Resolution: Won't Fix

> Remove envelope handling from broker
> 
>
> Key: KAFKA-12604
> URL: https://issues.apache.org/jira/browse/KAFKA-12604
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Major
>  Labels: kip-500
>
> We only need the envelope request to be handled on the controller endpoint. 
> We added it to the broker initially in order to allow testing of the 
> forwarding logic. Now that the integration testing framework for kip-500 is 
> in place, we should be able to get rid of this.





[jira] [Resolved] (KAFKA-12539) Move some logic in handleVoteRequest to EpochState

2021-04-05 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-12539.
-
Resolution: Fixed

> Move some logic in handleVoteRequest to EpochState
> --
>
> Key: KAFKA-12539
> URL: https://issues.apache.org/jira/browse/KAFKA-12539
> Project: Kafka
>  Issue Type: Improvement
>Reporter: dengziming
>Assignee: dengziming
>Priority: Minor
>
> Reduce the cyclomatic complexity of KafkaRaftClient, see the comment for 
> details: https://github.com/apache/kafka/pull/10289#discussion_r597274570





Error Messages for Dell Boomi connector Kafka

2021-04-05 Thread Shree, Nikita
Hi Team,

I am trying to connect to Kafka via Dell Boomi, and it is throwing error messages.
Can I get the full list of errors that the Kafka connector can return to the Dell
Boomi platform?

Regards,
Nikita





[jira] [Created] (KAFKA-12615) Correct comments for the method Selector.clear

2021-04-05 Thread HaiyuanZhao (Jira)
HaiyuanZhao created KAFKA-12615:
---

 Summary: Correct comments for the method Selector.clear
 Key: KAFKA-12615
 URL: https://issues.apache.org/jira/browse/KAFKA-12615
 Project: Kafka
  Issue Type: Improvement
  Components: build
Reporter: HaiyuanZhao
Assignee: HaiyuanZhao


{code:java}

/**
 * Clears all the results from the previous poll. This is invoked by Selector at the start of
 * a poll() when all the results from the previous poll are expected to have been handled.
 * <p>
 * SocketServer uses {@link #clearCompletedSends()} and {@link #clearCompletedSends()} to
 * clear `completedSends` and `completedReceives` as soon as they are processed to avoid
 * holding onto large request/response buffers from multiple connections longer than necessary.
 * Clients rely on Selector invoking {@link #clear()} at the start of each poll() since memory usage
 * is less critical and clearing once-per-poll provides the flexibility to process these results in
 * any order before the next poll.
 */
{code}





Re: [DISCUSS] Please review 2.8.0 blog post

2021-04-05 Thread Adam Bellemare
Read it all. It looks good to me in terms of structure and content. I am
not sufficiently up to date on all the features that are otherwise included
in 2.8.0, but the ones listed seem very prominent!



On Thu, Apr 1, 2021 at 4:39 PM John Roesler  wrote:

> Hello all,
>
> In the steady march toward the Apache Kafka 2.8.0 release, I
> have prepared a draft of the release announcement post:
>
> https://blogs.apache.org/preview/kafka/?previewEntry=what-s-new-in-apache5
>
> If you have a moment, I would greatly appreciate your
> reviews.
>
> Thank you,
> -John
>
>


ACL source IP supports IP network segments

2021-04-05 Thread lobo xu
Dear all:
Our users would like ACL policies to accept IP network segments (CIDR ranges)
in the host parameter. The current code does not support this, and I would like
to start a discussion on whether the community needs this feature.
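
As a sketch of what such matching could look like (illustrative only; Kafka's ACL host matching does not currently do this), the check reduces to masking off the host bits of both addresses and comparing the network prefixes:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Illustrative IPv4 CIDR matcher, sketching the check an authorizer would
// need if ACL host entries accepted network segments such as "10.1.0.0/16".
// This is a proposal sketch, not existing Kafka functionality.
public class CidrMatcher {

    // Returns true if the IPv4 address falls inside the given CIDR block.
    static boolean matches(String address, String cidr) {
        String[] parts = cidr.split("/");
        int prefixLen = Integer.parseInt(parts[1]);
        int ip = toInt(resolve(address));
        int network = toInt(resolve(parts[0]));
        // A /0 prefix matches everything; otherwise mask off the host bits.
        int mask = prefixLen == 0 ? 0 : -1 << (32 - prefixLen);
        return (ip & mask) == (network & mask);
    }

    private static byte[] resolve(String literal) {
        try {
            // For dotted-quad literals this parses without a DNS lookup.
            return InetAddress.getByName(literal).getAddress();
        } catch (UnknownHostException e) {
            throw new IllegalArgumentException("Not a valid address: " + literal, e);
        }
    }

    private static int toInt(byte[] octets) {
        int value = 0;
        for (byte b : octets) {
            value = (value << 8) | (b & 0xFF);
        }
        return value;
    }
}
```

With this, a host entry like 10.1.0.0/16 would admit 10.1.2.3 but reject 10.2.2.3.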

Best,
Lobo