Re: [DISCUSS] KIP-714: Client metrics and observability

2021-09-20 Thread Colin McCabe
On Mon, Sep 20, 2021, at 17:35, Feng Min wrote:
> Thanks Magnus & Colin for the discussion.
>
> Based on KIP-714's stateless design, a client can pretty much use any
> connection to any broker to send metrics. We are not associating the
> connection with client metric state. Is my understanding correct? If yes,
> how about the following two scenarios:
>
> 1) One client (client.id) registers two different client instance ids via
> separate registrations. Is that permitted? If so, how do we distinguish
> this from case 2 below?
>

Hi Feng,

My understanding, which Magnus can clarify I guess, is that you could have 
something like two Producer instances running with the same client.id (perhaps 
because they're using the same config file). They could even be in 
the same process. But they would get separate UUIDs.

I believe Magnus used the term client to mean "Producer or Consumer". So if you 
have both a Producer and a Consumer in your application I would expect you'd 
get separate UUIDs for both. Again Magnus can chime in here, I guess.
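To make the semantics being discussed concrete, here is a stdlib-only sketch: the broker keys telemetry state by a freshly minted per-instance UUID rather than by client.id, so two clients sharing a client.id (or a restarted client with no persisted id) each register as a distinct instance. All names here are illustrative, not from the KIP.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Illustrative broker-side registry, not KIP-714's actual implementation.
class ClientRegistry {
    private final Map<UUID, String> instances = new HashMap<>();

    // Every registration mints a new UUID, regardless of client.id.
    UUID registerClient(String clientId) {
        UUID instanceId = UUID.randomUUID();
        instances.put(instanceId, clientId);
        return instanceId;
    }
}

class RegistrationDemo {
    public static void main(String[] args) {
        ClientRegistry broker = new ClientRegistry();
        UUID producer1 = broker.registerClient("my-app"); // same client.id...
        UUID producer2 = broker.registerClient("my-app"); // ...separate UUIDs
        System.out.println(!producer1.equals(producer2)); // prints "true"
    }
}
```

A restart behaves like a second registration here: with no persistence mechanism, the restarted client simply shows up as a new instance.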

> 2) What about a client restarting? What's the expectation? Should the
> server expect the client to carry a persisted client instance id, or should
> the client be treated as a new instance?

The KIP doesn't describe any mechanism for persistence, so I would assume that 
when you restart the client you get a new UUID. I agree that it would be good 
to spell this out.

> also some comments inline.
>
> On Mon, Sep 20, 2021 at 11:41 AM Colin McCabe  wrote:
>
> ...
>
>> It seems like the goal here is to have the client register itself, so that
>> we can tell if this is an old client reconnecting. If that is the case,
>> then I suggest to rename the RPC to RegisterClient.
>>
>> I think we need a better name than "clientInstanceId" since that name is
>> very similar to "clientId." Perhaps something like originId? Or clientUuid?
>> Let's also use UUID here rather than a string.
>>
>> > 6. > PushTelemetryRequest{
>> >ClientInstanceId = f00d-feed-deff-ceff--….,
>> >SubscriptionId = 0x234adf34,
>> >ContentType = OTLPv8|ZSTD,
>> >Terminating = False,
>> >Metrics = …// zstd-compressed OTLPv08-protobuf-serialized metrics
>> >   }
>>
>>
> If we assume the connection is not bound to the ClientInstanceId, and the
> RPC can be sent to any broker (not necessarily the broker that did the
> registration), then the client-instance-id is required for every metric
> reporting. It's just part of the labelling.
>

Hmm, I don't quite follow. I suggested using a name that was less confusingly 
similar to "client ID". Your response states that "the client-instance-id is 
required for every metric reporting... it's just part of the labelling". I 
don't see how the UUID being required for metric reporting is related to what 
its name should be. Did you mean to reply to a different point here?

best,
Colin


>
>> It's not necessary for the client to re-send its client instance ID here,
>> since it already registered with RegisterClient. If the TCP connection
>> dropped, it will have to re-send RegisterClient anyway. SubscriptionID we
>> should get rid of, as I said above.
>>
>> I don't see the need for protobufs. Why not just use Kafka's own
>> serialization mechanism? As much as possible, we should try to avoid
>> creating "turduckens" of protocol X containing a buffer serialized with
>> protocol Y, containing a protocol serialized with protocol Z. These aren't
>> conducive to a good implementation, and make it harder for people to write
>> clients. Just use Kafka's RPC protocol (with optional fields if you wish).
>>
>> If we do compression on Kafka RPC, I would prefer that we do it a more
>> generic way that applies to all control messages, not just this one. I also
>> doubt we need to support lots and lots of different compression codecs, at
>> first at least.
>>
>> Another thing I'd like to understand is whether we truly need
>> "terminating" (or anything like it). I'm still confused about how the
>> backend could use this. Keep in mind that we may receive it on multiple
>> brokers (or not receive it at all). We may receive more stuff about client
>> XYZ from broker 1 after we have already received a "terminated" for client
>> XYZ from broker 2.
>>
>> > If the broker connection goes down or the connection is to be used for
>> > other purposes (e.g., blocking FetchRequests), the client will send
>> > PushTelemetryRequests to any other broker in the cluster, using the same
>> > ClientInstanceId and SubscriptionId as received in the latest
>> > GetTelemetrySubscriptionsResponse.
>> >
>> > While the subscriptionId may change during the lifetime of the client
>> > instance (when metric subscriptions are updated), the ClientInstanceId is
>> > only acquired once and must not change (as it is used to identify the
>> > unique client instance).
>> > ...
>> > What we do want though is ability to single out a specific client
>> instance
>> > to give it a more fine-grained 

Re: [DISCUSS] KIP-714: Client metrics and observability

2021-09-20 Thread Colin McCabe
On Mon, Sep 20, 2021, at 12:30, Feng Min wrote:
> Some comments about subscriptionId.
>
> ApiVersion is not a good example. The API version here is actually acting
> like an identifier, as the client will carry this information. Forcing a
> disconnect from the server side is quite heavy. IMHO, the behavior is kind
> of part of the protocol. Adding a subscriptionId is relatively simple and
> straightforward.
>

Hi Feng,

Sorry, I'm not sure what you mean by "API Version here is actually acting like 
an identifier." ApiVersions is not an identifier. Each client gets the same 
ApiVersionsResponse from the broker. In most clusters, each broker will return 
the same ApiVersionsResponse as well. So you cannot use ApiVersionsResponse as 
an identifier of anything, as far as I can see.

Dropping a connection is not that "heavy" considering that it only has to 
happen when we change the metrics subscription, which should be a very rare 
event, if I understand the proposal correctly.
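For concreteness, the SubscriptionId mechanism under debate, an opaque id derived from the configured metrics subscription that the client echoes back with each push, could look like the following stdlib-only sketch. CRC32 and the serialized string form are illustrative choices, not specified by the KIP.

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

// Sketch of the debated SubscriptionId: a checksum of the serialized metrics
// subscription. The client echoes it with each PushTelemetry; a mismatch
// tells the broker the client's subscription is stale.
class SubscriptionIdDemo {
    static long subscriptionId(String serializedSubscription) {
        CRC32 crc = new CRC32();
        crc.update(serializedSubscription.getBytes(StandardCharsets.UTF_8));
        return crc.getValue();
    }

    public static void main(String[] args) {
        long before = subscriptionId("metrics=client.producer.*;interval=60s");
        long after  = subscriptionId("metrics=client.producer.*;interval=30s");
        // A changed subscription yields a different id, prompting the client
        // to re-fetch the subscription from the broker.
        System.out.println(before != after); // prints "true"
    }
}
```

The broker-side alternative argued for above (just closing connections on a subscription change) needs none of this machinery, which is the crux of the disagreement.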

best,
Colin


>
>
>> Hmm, SubscriptionId seems rather complex. We don't have this kind of
>> complicated machinery for changing ApiVersions, and that is something that
>> can also change over time, and which affects the clients.
>>
>> Changing the configured metrics should be extremely rare. In this case,
>> why don't we just close all connections on the broker side? Then the
>> clients can re-connect and re-fetch the information about the metrics
>> they're supposed to send.
>>
>> >
>> > Something like this:
>> >
>> > // Get the configured metrics subscription.
>> > GetTelemetrySubscriptionsRequest {
>> >StrNull  ClientInstanceId  // Null on first invocation to retrieve a
>> > newly generated instance id from the broker.
>> > }
>>
>> It seems like the goal here is to have the client register itself, so that
>> we can tell if this is an old client reconnecting. If that is the case,
>> then I suggest to rename the RPC to RegisterClient.
>>
>> I think we need a better name than "clientInstanceId" since that name is
>> very similar to "clientId." Perhaps something like originId? Or clientUuid?
>> Let's also use UUID here rather than a string.
>>
>> > 6. > PushTelemetryRequest{
>> >ClientInstanceId = f00d-feed-deff-ceff--….,
>> >SubscriptionId = 0x234adf34,
>> >ContentType = OTLPv8|ZSTD,
>> >Terminating = False,
>> >Metrics = …// zstd-compressed OTLPv08-protobuf-serialized metrics
>> >   }
>>
>> It's not necessary for the client to re-send its client instance ID here,
>> since it already registered with RegisterClient. If the TCP connection
>> dropped, it will have to re-send RegisterClient anyway. SubscriptionID we
>> should get rid of, as I said above.
>>
>> I don't see the need for protobufs. Why not just use Kafka's own
>> serialization mechanism? As much as possible, we should try to avoid
>> creating "turduckens" of protocol X containing a buffer serialized with
>> protocol Y, containing a protocol serialized with protocol Z. These aren't
>> conducive to a good implementation, and make it harder for people to write
>> clients. Just use Kafka's RPC protocol (with optional fields if you wish).
>>
>> If we do compression on Kafka RPC, I would prefer that we do it a more
>> generic way that applies to all control messages, not just this one. I also
>> doubt we need to support lots and lots of different compression codecs, at
>> first at least.
>>
>> Another thing I'd like to understand is whether we truly need
>> "terminating" (or anything like it). I'm still confused about how the
>> backend could use this. Keep in mind that we may receive it on multiple
>> brokers (or not receive it at all). We may receive more stuff about client
>> XYZ from broker 1 after we have already received a "terminated" for client
>> XYZ from broker 2.
>>
>> > If the broker connection goes down or the connection is to be used for
>> > other purposes (e.g., blocking FetchRequests), the client will send
>> > PushTelemetryRequests to any other broker in the cluster, using the same
>> > ClientInstanceId and SubscriptionId as received in the latest
>> > GetTelemetrySubscriptionsResponse.
>> >
>> > While the subscriptionId may change during the lifetime of the client
>> > instance (when metric subscriptions are updated), the ClientInstanceId is
>> > only acquired once and must not change (as it is used to identify the
>> > unique client instance).
>> > ...
>> > What we do want though is ability to single out a specific client
>> instance
>> > to give it a more fine-grained subscription for troubleshooting, and
>> > we can do that with the current proposal with matching solely on the
>> > CLIENT_INSTANCE_ID.
>> > In other words; all clients will have the same standard metrics
>> > subscription, but specific client instances can have alternate
>> > subscriptions.
>>
>> That makes sense, and gives a good reason why we might want to couple
>> finding the metrics info to passing the client UUID.
>>
>> > The 

Re: [DISCUSS] KIP-775: Custom partitioners in foreign key joins

2021-09-20 Thread Victoria Xia
Hi Matthias,

Thanks for having a look at the KIP! I've updated it with your suggestion
to introduce a new `TableJoined` object with partitioners of type
`StreamPartitioner<K, Void>` and `StreamPartitioner<KO, Void>`, and to
deprecate the existing FK join methods which accept a `Named` object
accordingly. I agree it makes sense to keep the number of join interfaces
smaller.
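As a rough stdlib-only illustration of the config-object pattern described above: a `TableJoined`-style holder bundles both key-only partitioners plus an operation name, so a separate `Named` parameter is no longer needed. The real KIP-775 types (`StreamPartitioner<K, Void>`, `NamedOperation`) are stood in for here, and all signatures are hypothetical.

```java
import java.util.function.BiFunction;

// Sketch only: (topic, key) -> partition stands in for
// StreamPartitioner<K, Void>, i.e. partitioning on the key alone.
class TableJoinedSketch<K, KO> {
    final BiFunction<String, K, Integer> partitioner;
    final BiFunction<String, KO, Integer> otherPartitioner;
    final String name;

    private TableJoinedSketch(BiFunction<String, K, Integer> p,
                              BiFunction<String, KO, Integer> op,
                              String name) {
        this.partitioner = p;
        this.otherPartitioner = op;
        this.name = name;
    }

    static <K, KO> TableJoinedSketch<K, KO> with(
            BiFunction<String, K, Integer> p,
            BiFunction<String, KO, Integer> op) {
        return new TableJoinedSketch<>(p, op, null);
    }

    // Naming folds into the config object, replacing the Named parameter.
    TableJoinedSketch<K, KO> withName(String name) {
        return new TableJoinedSketch<>(partitioner, otherPartitioner, name);
    }
}

class TableJoinedDemo {
    public static void main(String[] args) {
        TableJoinedSketch<String, Integer> tj =
            TableJoinedSketch.<String, Integer>with(
                    (topic, k) -> Math.abs(k.hashCode()) % 4,
                    (topic, ko) -> ko % 4)
                .withName("fk-join");
        System.out.println(tj.name); // prints "fk-join"
    }
}
```

The benefit noted below is exactly this: one config object absorbs future options without multiplying join overloads.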

Thanks,
Victoria

On Sat, Sep 18, 2021 at 11:07 AM Matthias J. Sax  wrote:

> Thanks for the KIP Victoria.
>
> As pointed out on the Jira ticket by you, using `<K, V>` and `<KO, VO>` as
> partitioner types does not really work, because we don't have access to
> the right value on the left side nor have we access to the left value on
> the right hand side. -- I like your idea to use `Void` as value types to
> make it clear to the users that partitioning must be done on the key only.
>
> For the proposed public API change, I would propose not to pass the
> partitioners directly, but to introduce a config object (similar to
> `Joined` for stream-table joins, and `StreamJoined` for stream-stream
> joins). This new object could also implement `NamedOperation` and thus
> replace `Named`. To this end, we would deprecate the existing methods
> using `Named` and replace them with the new methods. Net benefit is,
> that we don't get more overloads (after we removed the deprecated ones).
>
> Not sure how we want to call the new object. Maybe `TableJoined` in
> alignment to `StreamJoined`?
>
>
> -Matthias
>
> On 9/15/21 3:36 PM, Victoria Xia wrote:
> > Hi,
> >
> > I've opened a small KIP for adding Kafka Streams support for foreign key
> > joins on tables with custom partitioners:
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-775%3A+Custom+partitioners+in+foreign+key+joins
> >
> > Feedback appreciated. Thanks!
> >
> > - Victoria
> >
>


Re: [DISCUSS] KIP-714: Client metrics and observability

2021-09-20 Thread Feng Min
Thanks Magnus & Colin for the discussion.

Based on KIP-714's stateless design, a client can pretty much use any
connection to any broker to send metrics. We are not associating the
connection with client metric state. Is my understanding correct? If yes,
how about the following two scenarios:

1) One client (client.id) registers two different client instance ids via
separate registrations. Is that permitted? If so, how do we distinguish
this from case 2 below?

2) What about a client restarting? What's the expectation? Should the
server expect the client to carry a persisted client instance id, or should
the client be treated as a new instance?

also some comments inline.

On Mon, Sep 20, 2021 at 11:41 AM Colin McCabe  wrote:

> On Tue, Sep 14, 2021, at 00:47, Magnus Edenhill wrote:
> > Thanks for your feedback Colin, see my updated proposal below.
> > ...
>
> Hi Magnus,
>
> Thanks for the update.
>
> >
> > Splitting up the API into separate data and control requests makes sense.
> > With a split we would have one API for querying the broker for configured
> > metrics subscriptions,
> > and one API for pushing the collected metrics to the broker.
> >
> > A mechanism is still needed to notify the client when the subscription is
> > changed;
> > I’ve added a SubscriptionId for this purpose (which could be a checksum
> of
> > the configured metrics subscription), this id is sent to the client along
> > with the metrics subscription, and the client sends it back to the broker
> > when pushing metrics. If the broker finds the pushed subscription id to
> > differ from what is expected it will return an error to the client, which
> > triggers the client to retrieve the new subscribed metrics and an updated
> > subscription id. The generation of the subscriptionId is opaque to the
> > client.
> >
>
> Hmm, SubscriptionId seems rather complex. We don't have this kind of
> complicated machinery for changing ApiVersions, and that is something that
> can also change over time, and which affects the clients.
>
> Changing the configured metrics should be extremely rare. In this case,
> why don't we just close all connections on the broker side? Then the
> clients can re-connect and re-fetch the information about the metrics
> they're supposed to send.
>
> >
> > Something like this:
> >
> > // Get the configured metrics subscription.
> > GetTelemetrySubscriptionsRequest {
> >StrNull  ClientInstanceId  // Null on first invocation to retrieve a
> > newly generated instance id from the broker.
> > }
>
> +1 on RegisterClient or RegisterMetricClient


> It seems like the goal here is to have the client register itself, so that
> we can tell if this is an old client reconnecting. If that is the case,
> then I suggest to rename the RPC to RegisterClient.
>
> I think we need a better name than "clientInstanceId" since that name is
> very similar to "clientId." Perhaps something like originId? Or clientUuid?
> Let's also use UUID here rather than a string.
>
> > 6. > PushTelemetryRequest{
> >ClientInstanceId = f00d-feed-deff-ceff--….,
> >SubscriptionId = 0x234adf34,
> >ContentType = OTLPv8|ZSTD,
> >Terminating = False,
> >Metrics = …// zstd-compressed OTLPv08-protobuf-serialized metrics
> >   }
>
>
If we assume the connection is not bound to the ClientInstanceId, and the
RPC can be sent to any broker (not necessarily the broker that did the
registration), then the client-instance-id is required for every metric
reporting. It's just part of the labelling.


> It's not necessary for the client to re-send its client instance ID here,
> since it already registered with RegisterClient. If the TCP connection
> dropped, it will have to re-send RegisterClient anyway. SubscriptionID we
> should get rid of, as I said above.
>
> I don't see the need for protobufs. Why not just use Kafka's own
> serialization mechanism? As much as possible, we should try to avoid
> creating "turduckens" of protocol X containing a buffer serialized with
> protocol Y, containing a protocol serialized with protocol Z. These aren't
> conducive to a good implementation, and make it harder for people to write
> clients. Just use Kafka's RPC protocol (with optional fields if you wish).
>
> If we do compression on Kafka RPC, I would prefer that we do it a more
> generic way that applies to all control messages, not just this one. I also
> doubt we need to support lots and lots of different compression codecs, at
> first at least.
>
> Another thing I'd like to understand is whether we truly need
> "terminating" (or anything like it). I'm still confused about how the
> backend could use this. Keep in mind that we may receive it on multiple
> brokers (or not receive it at all). We may receive more stuff about client
> XYZ from broker 1 after we have already received a "terminated" for client
> XYZ from broker 2.
>
> > If the broker connection goes down or the connection is to be used for
> > other purposes (e.g., blocking FetchRequests), the 

Re: [DISCUSS] KIP-770: Replace "buffered.records.per.partition" with "input.buffer.max.bytes"

2021-09-20 Thread Guozhang Wang
Hi Sagar,

Thanks for the added metric. About its name: if it is proposed as a
task-level metric, then we do not need to prefix its name with `task-`. On
the other hand, it's better to give the full description of the metric,
like its type name, tag map, and recording level; an example is here:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-607%3A+Add+Metrics+to+Kafka+Streams+to+Report+Properties+of+RocksDB

Guozhang

On Mon, Sep 20, 2021 at 10:04 AM Sagar  wrote:

> Hi All,
>
> Bumping this thread again.
>
> Thanks!
> Sagar.
>
> On Sat, Sep 11, 2021 at 2:04 PM Sagar  wrote:
>
> > Hi Mathias,
> >
> > I missed out on the metrics part.
> >
> > I have added the new metric in the proposed changes section along with
> > the small re-wording that you talked about.
> >
> > Let me know if that makes sense.
> >
> > Thanks!
> > Sagar.
> >
> > On Fri, Sep 10, 2021 at 3:45 AM Matthias J. Sax 
> wrote:
> >
> >> Thanks for the KIP.
> >>
> >> There was some discussion about adding a metric on the thread, but the
> >> KIP does not contain anything about it. Did we drop this suggestion or
> >> was the KIP not updated accordingly?
> >>
> >>
> >> Nit:
> >>
> >> > This would be a global config applicable per processing topology
> >>
> >> Can we change this to `per Kafka Streams instance.`
> >>
> >> Atm, a Stream instance executes a single topology, so it does not make
> >> any effective difference right now. However, it seems better (more
> >> logical) to bind the config to the instance (not the topology the
> >> instance executes).
> >>
> >>
> >> -Matthias
> >>
> >> On 9/2/21 6:08 AM, Sagar wrote:
> >> > Thanks Guozhang and Luke.
> >> >
> >> > I have updated the KIP with all the suggested changes.
> >> >
> >> > Do you think we could start voting for this?
> >> >
> >> > Thanks!
> >> > Sagar.
> >> >
> >> > On Thu, Sep 2, 2021 at 8:26 AM Luke Chen  wrote:
> >> >
> >> >> Thanks for the KIP. Overall LGTM.
> >> >>
> >> >> Just one thought, if we "rename" the config directly as mentioned in
> >> >> the KIP, would that break existing applications?
> >> >> Should we deprecate the old one first, and make the old/new names
> >> >> co-exist for some period of time?
> >> >>
> >> >> Public Interfaces
> >> >>
> >> >>- Adding a new config *input.buffer.max.bytes* applicable at a
> >> >>topology level. The importance of this config would be *Medium*.
> >> >>- Renaming *cache.max.bytes.buffering* to *statestore.cache.max.bytes*.
> >> >>
> >> >>
> >> >>
> >> >> Thank you.
> >> >> Luke
> >> >>
> >> >> On Thu, Sep 2, 2021 at 1:50 AM Guozhang Wang 
> >> wrote:
> >> >>
> >> >>> Currently the state store cache size default value is 10MB today,
> >> >>> which arguably is rather small. So I'm thinking maybe for this
> >> >>> config default to 512MB.
> >> >>>
> >> >>> Other than that, LGTM.
> >> >>>
> >> >>> On Sat, Aug 28, 2021 at 11:34 AM Sagar 
> >> >> wrote:
> >> >>>
> >>  Thanks Guozhang and Sophie.
> >> 
> >>  Yeah a small default value would lower the throughput. I didn't quite
> >>  realise it earlier. It's slightly hard to predict this value so I
> >>  would guess around 1/2 GB to 1 GB? WDYT?
> >> 
> >>  Regarding the renaming of the config and the new metric, sure, I would
> >>  include it in the KIP.
> >> 
> >>  Lastly, importance would also be added. I guess Medium should be ok.
> >> 
> >>  Thanks!
> >>  Sagar.
> >> 
> >> 
> >>  On Sat, Aug 28, 2021 at 10:42 AM Sophie Blee-Goldman
> >>   wrote:
> >> 
> >> > 1) I agree that we should just distribute the bytes evenly, at least
> >> > for now. It's simpler to understand and we can always change it
> >> > later, plus it makes sense to keep this aligned with how the cache
> >> > works today
> >> >
> >> > 2) +1 to being conservative in the generous sense, it's just not
> >> > something we can predict with any degree of accuracy and even if we
> >> > could, the appropriate value is going to differ wildly across
> >> > applications and use cases. We might want to just pick some multiple
> >> > of the default cache size, and maybe do some research on other
> >> > relevant defaults or sizes (default JVM heap, size of available
> >> > memory in common hosts eg EC2 instances, etc). We don't need to worry
> >> > as much about erring on the side of too big, since other configs like
> >> > the max.poll.records will help somewhat to keep it from exploding.
> >> >
> >> > 4) 100%, I always found the *cache.max.bytes.buffering* config name
> >> > to be incredibly confusing. Deprecating this in favor of
> >> > "*statestore.cache.max.bytes*" and aligning it to the new input
> >> > buffer config sounds good to me to include here.
> >> >
> >> > 5) 
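Taken together, the configuration changes discussed in this thread could be sketched as a properties fragment. Names and defaults are as proposed above and remain subject to the open deprecation question, so treat this as illustrative only:

```properties
# Proposed in KIP-770 (names/defaults not yet final):
input.buffer.max.bytes=536870912       # new input buffer bound; 512 MB was floated as a default
statestore.cache.max.bytes=10485760    # proposed rename of cache.max.bytes.buffering
```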

Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #484

2021-09-20 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 492733 lines...]
[2021-09-20T21:53:01.176Z] > Task :connect:json:testJar
[2021-09-20T21:53:01.176Z] > Task :connect:json:testSrcJar
[2021-09-20T21:53:01.176Z] > Task :metadata:compileTestJava UP-TO-DATE
[2021-09-20T21:53:01.176Z] > Task :metadata:testClasses UP-TO-DATE
[2021-09-20T21:53:02.125Z] > Task 
:clients:generateMetadataFileForMavenJavaPublication
[2021-09-20T21:53:02.125Z] > Task 
:clients:generatePomFileForMavenJavaPublication
[2021-09-20T21:53:02.125Z] > Task :streams:copyDependantLibs
[2021-09-20T21:53:02.125Z] > Task :streams:processTestResources UP-TO-DATE
[2021-09-20T21:53:02.125Z] 
[2021-09-20T21:53:02.125Z] > Task :streams:processMessages
[2021-09-20T21:53:02.125Z] Execution optimizations have been disabled for task 
':streams:processMessages' to ensure correctness due to the following reasons:
[2021-09-20T21:53:02.125Z]   - Gradle detected a problem with the following 
location: 
'/home/jenkins/workspace/Kafka_kafka_trunk/streams/src/generated/java/org/apache/kafka/streams/internals/generated'.
 Reason: Task ':streams:srcJar' uses this output of task 
':streams:processMessages' without declaring an explicit or implicit 
dependency. This can lead to incorrect results being produced, depending on 
what order the tasks are executed. Please refer to 
https://docs.gradle.org/7.2/userguide/validation_problems.html#implicit_dependency
 for more details about this problem.
[2021-09-20T21:53:03.072Z] MessageGenerator: processed 1 Kafka message JSON 
files(s).
[2021-09-20T21:53:03.072Z] 
[2021-09-20T21:53:03.072Z] > Task :streams:compileJava UP-TO-DATE
[2021-09-20T21:53:03.072Z] > Task :streams:classes UP-TO-DATE
[2021-09-20T21:53:03.072Z] > Task :streams:jar UP-TO-DATE
[2021-09-20T21:53:03.072Z] > Task :streams:test-utils:compileJava UP-TO-DATE
[2021-09-20T21:53:03.072Z] > Task 
:streams:generateMetadataFileForMavenJavaPublication
[2021-09-20T21:53:06.686Z] > Task :connect:api:javadoc
[2021-09-20T21:53:06.686Z] > Task :connect:api:copyDependantLibs UP-TO-DATE
[2021-09-20T21:53:06.686Z] > Task :connect:api:jar UP-TO-DATE
[2021-09-20T21:53:06.686Z] > Task 
:connect:api:generateMetadataFileForMavenJavaPublication
[2021-09-20T21:53:06.686Z] > Task :connect:json:copyDependantLibs UP-TO-DATE
[2021-09-20T21:53:06.686Z] > Task :connect:json:jar UP-TO-DATE
[2021-09-20T21:53:06.686Z] > Task 
:connect:json:generateMetadataFileForMavenJavaPublication
[2021-09-20T21:53:06.686Z] > Task :connect:api:javadocJar
[2021-09-20T21:53:06.686Z] > Task :connect:api:compileTestJava UP-TO-DATE
[2021-09-20T21:53:06.686Z] > Task :connect:api:testClasses UP-TO-DATE
[2021-09-20T21:53:06.686Z] > Task 
:connect:json:publishMavenJavaPublicationToMavenLocal
[2021-09-20T21:53:06.686Z] > Task :connect:json:publishToMavenLocal
[2021-09-20T21:53:06.686Z] > Task :connect:api:testJar
[2021-09-20T21:53:06.686Z] > Task :connect:api:testSrcJar
[2021-09-20T21:53:06.686Z] > Task 
:connect:api:publishMavenJavaPublicationToMavenLocal
[2021-09-20T21:53:06.686Z] > Task :connect:api:publishToMavenLocal
[2021-09-20T21:53:11.350Z] > Task :streams:javadoc
[2021-09-20T21:53:12.299Z] > Task :streams:javadocJar
[2021-09-20T21:53:13.248Z] > Task :clients:javadoc
[2021-09-20T21:53:14.197Z] > Task :clients:javadocJar
[2021-09-20T21:53:15.146Z] 
[2021-09-20T21:53:15.146Z] > Task :clients:srcJar
[2021-09-20T21:53:15.146Z] Execution optimizations have been disabled for task 
':clients:srcJar' to ensure correctness due to the following reasons:
[2021-09-20T21:53:15.146Z]   - Gradle detected a problem with the following 
location: 
'/home/jenkins/workspace/Kafka_kafka_trunk/clients/src/generated/java'. Reason: 
Task ':clients:srcJar' uses this output of task ':clients:processMessages' 
without declaring an explicit or implicit dependency. This can lead to 
incorrect results being produced, depending on what order the tasks are 
executed. Please refer to 
https://docs.gradle.org/7.2/userguide/validation_problems.html#implicit_dependency
 for more details about this problem.
[2021-09-20T21:53:16.096Z] 
[2021-09-20T21:53:16.096Z] > Task :clients:testJar
[2021-09-20T21:53:17.047Z] > Task :clients:testSrcJar
[2021-09-20T21:53:17.047Z] > Task 
:clients:publishMavenJavaPublicationToMavenLocal
[2021-09-20T21:53:17.047Z] > Task :clients:publishToMavenLocal
[2021-09-20T21:53:31.144Z] > Task :core:compileScala
[2021-09-20T21:55:35.648Z] > Task :core:classes
[2021-09-20T21:55:35.648Z] > Task :core:compileTestJava NO-SOURCE
[2021-09-20T21:55:54.845Z] > Task :core:compileTestScala
[2021-09-20T21:57:43.802Z] > Task :core:testClasses
[2021-09-20T21:57:58.101Z] > Task :streams:compileTestJava
[2021-09-20T21:57:58.101Z] > Task :streams:testClasses
[2021-09-20T21:57:58.101Z] > Task :streams:testJar
[2021-09-20T21:57:58.101Z] > Task :streams:testSrcJar
[2021-09-20T21:57:58.101Z] > Task 

Jenkins build is unstable: Kafka » Kafka Branch Builder » 3.0 #136

2021-09-20 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-714: Client metrics and observability

2021-09-20 Thread Feng Min
Some comments about subscriptionId.

On Mon, Sep 20, 2021 at 11:41 AM Colin McCabe  wrote:

> On Tue, Sep 14, 2021, at 00:47, Magnus Edenhill wrote:
> > Thanks for your feedback Colin, see my updated proposal below.
> > ...
>
> Hi Magnus,
>
> Thanks for the update.
>
> >
> > Splitting up the API into separate data and control requests makes sense.
> > With a split we would have one API for querying the broker for configured
> > metrics subscriptions,
> > and one API for pushing the collected metrics to the broker.
> >
> > A mechanism is still needed to notify the client when the subscription is
> > changed;
> > I’ve added a SubscriptionId for this purpose (which could be a checksum
> of
> > the configured metrics subscription), this id is sent to the client along
> > with the metrics subscription, and the client sends it back to the broker
> > when pushing metrics. If the broker finds the pushed subscription id to
> > differ from what is expected it will return an error to the client, which
> > triggers the client to retrieve the new subscribed metrics and an updated
> > subscription id. The generation of the subscriptionId is opaque to the
> > client.
> >
>
>
ApiVersion is not a good example. The API version here is actually acting like
an identifier, as the client will carry this information. Forcing a disconnect
from the server side is quite heavy. IMHO, the behavior is kind of part of the
protocol. Adding a subscriptionId is relatively simple and straightforward.



> Hmm, SubscriptionId seems rather complex. We don't have this kind of
> complicated machinery for changing ApiVersions, and that is something that
> can also change over time, and which affects the clients.
>
> Changing the configured metrics should be extremely rare. In this case,
> why don't we just close all connections on the broker side? Then the
> clients can re-connect and re-fetch the information about the metrics
> they're supposed to send.
>
> >
> > Something like this:
> >
> > // Get the configured metrics subscription.
> > GetTelemetrySubscriptionsRequest {
> >StrNull  ClientInstanceId  // Null on first invocation to retrieve a
> > newly generated instance id from the broker.
> > }
>
> It seems like the goal here is to have the client register itself, so that
> we can tell if this is an old client reconnecting. If that is the case,
> then I suggest to rename the RPC to RegisterClient.
>
> I think we need a better name than "clientInstanceId" since that name is
> very similar to "clientId." Perhaps something like originId? Or clientUuid?
> Let's also use UUID here rather than a string.
>
> > 6. > PushTelemetryRequest{
> >ClientInstanceId = f00d-feed-deff-ceff--….,
> >SubscriptionId = 0x234adf34,
> >ContentType = OTLPv8|ZSTD,
> >Terminating = False,
> >Metrics = …// zstd-compressed OTLPv08-protobuf-serialized metrics
> >   }
>
> It's not necessary for the client to re-send its client instance ID here,
> since it already registered with RegisterClient. If the TCP connection
> dropped, it will have to re-send RegisterClient anyway. SubscriptionID we
> should get rid of, as I said above.
>
> I don't see the need for protobufs. Why not just use Kafka's own
> serialization mechanism? As much as possible, we should try to avoid
> creating "turduckens" of protocol X containing a buffer serialized with
> protocol Y, containing a protocol serialized with protocol Z. These aren't
> conducive to a good implementation, and make it harder for people to write
> clients. Just use Kafka's RPC protocol (with optional fields if you wish).
>
> If we do compression on Kafka RPC, I would prefer that we do it a more
> generic way that applies to all control messages, not just this one. I also
> doubt we need to support lots and lots of different compression codecs, at
> first at least.
>
> Another thing I'd like to understand is whether we truly need
> "terminating" (or anything like it). I'm still confused about how the
> backend could use this. Keep in mind that we may receive it on multiple
> brokers (or not receive it at all). We may receive more stuff about client
> XYZ from broker 1 after we have already received a "terminated" for client
> XYZ from broker 2.
>
> > If the broker connection goes down or the connection is to be used for
> > other purposes (e.g., blocking FetchRequests), the client will send
> > PushTelemetryRequests to any other broker in the cluster, using the same
> > ClientInstanceId and SubscriptionId as received in the latest
> > GetTelemetrySubscriptionsResponse.
> >
> > While the subscriptionId may change during the lifetime of the client
> > instance (when metric subscriptions are updated), the ClientInstanceId is
> > only acquired once and must not change (as it is used to identify the
> > unique client instance).
> > ...
> > What we do want though is ability to single out a specific client
> instance
> > to give it a more fine-grained subscription for 

Re: [DISCUSS] KIP-714: Client metrics and observability

2021-09-20 Thread Colin McCabe
On Tue, Sep 14, 2021, at 00:47, Magnus Edenhill wrote:
> Thanks for your feedback Colin, see my updated proposal below.
> ...

Hi Magnus,

Thanks for the update.

> 
> Splitting up the API into separate data and control requests makes sense.
> With a split we would have one API for querying the broker for configured
> metrics subscriptions,
> and one API for pushing the collected metrics to the broker.
> 
> A mechanism is still needed to notify the client when the subscription is
> changed;
> I’ve added a SubscriptionId for this purpose (which could be a checksum of
> the configured metrics subscription), this id is sent to the client along
> with the metrics subscription, and the client sends it back to the broker
> when pushing metrics. If the broker finds the pushed subscription id to
> differ from what is expected it will return an error to the client, which
> triggers the client to retrieve the new subscribed metrics and an updated
> subscription id. The generation of the subscriptionId is opaque to the
> client.
>

Hmm, SubscriptionId seems rather complex. We don't have this kind of 
complicated machinery for changing ApiVersions, and that is something that can 
also change over time, and which affects the clients.

Changing the configured metrics should be extremely rare. In this case, why 
don't we just close all connections on the broker side? Then the clients can 
re-connect and re-fetch the information about the metrics they're supposed to 
send.

> 
> Something like this:
> 
> // Get the configured metrics subscription.
> GetTelemetrySubscriptionsRequest {
>StrNull  ClientInstanceId  // Null on first invocation to retrieve a
> newly generated instance id from the broker.
> }

It seems like the goal here is to have the client register itself, so that we 
can tell if this is an old client reconnecting. If that is the case, then I 
suggest renaming the RPC to RegisterClient.

I think we need a better name than "clientInstanceId" since that name is very 
similar to "clientId." Perhaps something like originId? Or clientUuid? Let's 
also use UUID here rather than a string.
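A quick sketch of the wire-format difference behind that suggestion (plain JDK, no Kafka types; the variable names are made up for the example):

```java
import java.util.UUID;

// Sketch: a broker-generated UUID is a fixed 16-byte value (two longs) on
// the wire, unlike a free-form string id of arbitrary length.
public class ClientUuidSketch {
    public static void main(String[] args) {
        UUID clientUuid = UUID.randomUUID();
        String wire = clientUuid.toString();
        assert UUID.fromString(wire).equals(clientUuid); // round-trips losslessly
        assert wire.length() == 36;                      // canonical text form
    }
}
```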

> 6. > PushTelemetryRequest{
>ClientInstanceId = f00d-feed-deff-ceff--….,
>SubscriptionId = 0x234adf34,
>ContentType = OTLPv08|ZSTD,
>Terminating = False,
>Metrics = …// zstd-compressed OTLPv08-protobuf-serialized metrics
>   }

It's not necessary for the client to re-send its client instance ID here, since 
it already registered with RegisterClient. If the TCP connection dropped, it 
will have to re-send RegisterClient anyway. SubscriptionID we should get rid 
of, as I said above.

I don't see the need for protobufs. Why not just use Kafka's own serialization 
mechanism? As much as possible, we should try to avoid creating "turduckens" of 
protocol X containing a buffer serialized with protocol Y, containing a 
protocol serialized with protocol Z. These aren't conducive to a good 
implementation, and make it harder for people to write clients. Just use 
Kafka's RPC protocol (with optional fields if you wish).

If we do compression on Kafka RPC, I would prefer that we do it in a more generic 
way that applies to all control messages, not just this one. I also doubt we 
need to support lots and lots of different compression codecs, at first at 
least.

Another thing I'd like to understand is whether we truly need "terminating" (or 
anything like it). I'm still confused about how the backend could use this. 
Keep in mind that we may receive it on multiple brokers (or not receive it at 
all). We may receive more stuff about client XYZ from broker 1 after we have 
already received a "terminated" for client XYZ from broker 2.

> If the broker connection goes down or the connection is to be used for
> other purposes (e.g., blocking FetchRequests), the client will send
> PushTelemetryRequests to any other broker in the cluster, using the same
> ClientInstanceId and SubscriptionId as received in the latest
> GetTelemetrySubscriptionsResponse.
> 
> While the subscriptionId may change during the lifetime of the client
> instance (when metric subscriptions are updated), the ClientInstanceId is
> only acquired once and must not change (as it is used to identify the
> unique client instance).
> ...
> What we do want though is ability to single out a specific client instance
> to give it a more fine-grained subscription for troubleshooting, and
> we can do that with the current proposal with matching solely on the
> CLIENT_INSTANCE_ID.
> In other words; all clients will have the same standard metrics
> subscription, but specific client instances can have alternate
> subscriptions.

That makes sense, and gives a good reason why we might want to couple finding 
the metrics info to passing the client UUID.

> The metrics collector/tsdb/whatever will need to identify a single client
> instance, regardless of which broker received the metrics.
> The chapter on CLIENT_INSTANCE_ID motivates why we need a 

Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #483

2021-09-20 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 4430 lines...]
[2021-09-20T17:54:38.822Z] You can use '--warning-mode all' to show the 
individual deprecation warnings and determine if they come from your own 
scripts or plugins.
[2021-09-20T17:54:38.822Z] 
[2021-09-20T17:54:38.822Z] See 
https://docs.gradle.org/7.2/userguide/command_line_interface.html#sec:command_line_warnings
[2021-09-20T17:54:38.822Z] 
[2021-09-20T17:54:38.822Z] Execution optimizations have been disabled for 2 
invalid unit(s) of work during this build to ensure correctness.
[2021-09-20T17:54:38.822Z] Please consult deprecation warnings for more details.
[2021-09-20T17:54:38.822Z] 
[2021-09-20T17:54:38.822Z] BUILD FAILED in 4m 56s
[2021-09-20T17:54:38.822Z] 215 actionable tasks: 173 executed, 42 up-to-date
[2021-09-20T17:54:38.822Z] 
[2021-09-20T17:54:38.822Z] See the profiling report at: 
file:///home/jenkins/workspace/Kafka_kafka_trunk/build/reports/profile/profile-2021-09-20-17-49-47.html
[2021-09-20T17:54:38.822Z] A fine-grained performance profile is available: use 
the --scan option.
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // timestamps
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
Failed in branch JDK 8 and Scala 2.12
[2021-09-20T17:54:39.977Z] > Task :core:spotbugsMain
[2021-09-20T17:54:49.641Z] > Task :core:spotbugsMain
[2021-09-20T17:54:49.765Z] 
[2021-09-20T17:54:49.765Z] FAILURE: Build failed with an exception.
[2021-09-20T17:54:49.765Z] 
[2021-09-20T17:54:49.765Z] * What went wrong:
[2021-09-20T17:54:49.765Z] Execution failed for task ':core:compileTestScala'.
[2021-09-20T17:54:49.765Z] > Compilation failed
[2021-09-20T17:54:49.765Z] 
[2021-09-20T17:54:49.765Z] * Try:
[2021-09-20T17:54:49.765Z] Run with --stacktrace option to get the stack trace. 
Run with --info or --debug option to get more log output. Run with --scan to 
get full insights.
[2021-09-20T17:54:49.765Z] 
[2021-09-20T17:54:49.765Z] * Get more help at https://help.gradle.org
[2021-09-20T17:54:49.765Z] 
[2021-09-20T17:54:49.765Z] Deprecated Gradle features were used in this build, 
making it incompatible with Gradle 8.0.
[2021-09-20T17:54:49.765Z] 
[2021-09-20T17:54:49.765Z] You can use '--warning-mode all' to show the 
individual deprecation warnings and determine if they come from your own 
scripts or plugins.
[2021-09-20T17:54:49.765Z] 
[2021-09-20T17:54:49.765Z] See 
https://docs.gradle.org/7.2/userguide/command_line_interface.html#sec:command_line_warnings
[2021-09-20T17:54:49.765Z] 
[2021-09-20T17:54:49.765Z] Execution optimizations have been disabled for 2 
invalid unit(s) of work during this build to ensure correctness.
[2021-09-20T17:54:49.765Z] Please consult deprecation warnings for more details.
[2021-09-20T17:54:49.765Z] 
[2021-09-20T17:54:49.765Z] BUILD FAILED in 5m 7s
[2021-09-20T17:54:49.765Z] 215 actionable tasks: 173 executed, 42 up-to-date
[2021-09-20T17:54:49.765Z] 
[2021-09-20T17:54:49.765Z] See the profiling report at: 
file:///home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk/build/reports/profile/profile-2021-09-20-17-49-44.html
[2021-09-20T17:54:49.765Z] A fine-grained performance profile is available: use 
the --scan option.
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // timestamps
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
Failed in branch JDK 11 and Scala 2.13
[2021-09-20T17:54:50.878Z] 
[2021-09-20T17:54:50.878Z] FAILURE: Build failed with an exception.
[2021-09-20T17:54:50.878Z] 
[2021-09-20T17:54:50.878Z] * What went wrong:
[2021-09-20T17:54:50.878Z] Execution failed for task ':core:compileTestScala'.
[2021-09-20T17:54:50.878Z] > Compilation failed
[2021-09-20T17:54:50.878Z] 
[2021-09-20T17:54:50.878Z] * Try:
[2021-09-20T17:54:50.878Z] Run with --stacktrace option to get the stack trace. 
Run with --info or --debug option to get more log output. Run with --scan to 
get full insights.
[2021-09-20T17:54:50.878Z] 
[2021-09-20T17:54:50.878Z] * Get more help at https://help.gradle.org
[2021-09-20T17:54:50.878Z] 
[2021-09-20T17:54:50.879Z] Deprecated Gradle features were used in this build, 
making it incompatible with Gradle 8.0.
[2021-09-20T17:54:50.879Z] 
[2021-09-20T17:54:50.879Z] You can use '--warning-mode all' to show the 
individual deprecation warnings and determine if they come from your own 
scripts or plugins.
[2021-09-20T17:54:50.879Z] 
[2021-09-20T17:54:50.879Z] See 
https://docs.gradle.org/7.2/userguide/command_line_interface.html#sec:command_line_warnings
[2021-09-20T17:54:50.879Z] 
[2021-09-20T17:54:50.879Z] Execution optimizations have been disabled for 2 
invalid unit(s) of work during this build to 

[jira] [Created] (KAFKA-13313) In KRaft mode, CreateTopic should return the topic configs in the response

2021-09-20 Thread Jun Rao (Jira)
Jun Rao created KAFKA-13313:
---

 Summary: In KRaft mode, CreateTopic should return the topic 
configs in the response
 Key: KAFKA-13313
 URL: https://issues.apache.org/jira/browse/KAFKA-13313
 Project: Kafka
  Issue Type: Bug
  Components: kraft
Affects Versions: 3.0.0
Reporter: Jun Rao


ReplicationControlManager.createTopic() doesn't seem to populate the configs in 
CreatableTopicResult. ZkAdminManager.createTopics() does that.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] KIP-770: Replace "buffered.records.per.partition" with "input.buffer.max.bytes"

2021-09-20 Thread Sagar
Hi All,

Bumping this thread again.

Thanks!
Sagar.

On Sat, Sep 11, 2021 at 2:04 PM Sagar  wrote:

> Hi Mathias,
>
> I missed out on the metrics part.
>
> I have added the new metric in the proposed changes section along with the
> small re-wording that you talked about.
>
> Let me know if that makes sense.
>
> Thanks!
> Sagar.
>
> On Fri, Sep 10, 2021 at 3:45 AM Matthias J. Sax  wrote:
>
>> Thanks for the KIP.
>>
>> There was some discussion about adding a metric on the thread, but the
>> KIP does not contain anything about it. Did we drop this suggestion or
>> was the KIP not updated accordingly?
>>
>>
>> Nit:
>>
>> > This would be a global config applicable per processing topology
>>
>> Can we change this to `per Kafka Streams instance.`
>>
>> Atm, a Streams instance executes a single topology, so it does not make
>> any effective difference right now. However, it seems better (more
>> logical) to bind the config to the instance (not the topology the
>> instance executes).
>>
>>
>> -Matthias
>>
>> On 9/2/21 6:08 AM, Sagar wrote:
>> > Thanks Guozhang and Luke.
>> >
>> > I have updated the KIP with all the suggested changes.
>> >
>> > Do you think we could start voting for this?
>> >
>> > Thanks!
>> > Sagar.
>> >
>> > On Thu, Sep 2, 2021 at 8:26 AM Luke Chen  wrote:
>> >
>> >> Thanks for the KIP. Overall LGTM.
>> >>
>> >> Just one thought, if we "rename" the config directly as mentioned in the
>> >> KIP, would that break existing applications?
>> >> Should we deprecate the old one first, and make the old/new names co-exist
>> >> for some period of time?
>> >>
>> >> Public Interfaces
>> >>
>> >>    - Adding a new config *input.buffer.max.bytes* applicable at a
>> >>      topology level. The importance of this config would be *Medium*.
>> >>    - Renaming *cache.max.bytes.buffering* to *statestore.cache.max.bytes*.
>> >>
>> >>
>> >>
>> >> Thank you.
>> >> Luke
>> >>
>> >> On Thu, Sep 2, 2021 at 1:50 AM Guozhang Wang 
>> wrote:
>> >>
>> >>> Currently the state store cache size default value is 10MB today, which
>> >>> arguably is rather small. So I'm thinking maybe for this config default
>> >>> to 512MB.
>> >>>
>> >>> Other than that, LGTM.
>> >>>
>> >>> On Sat, Aug 28, 2021 at 11:34 AM Sagar 
>> >> wrote:
>> >>>
>>  Thanks Guozhang and Sophie.
>> 
>>  Yeah a small default value would lower the throughput. I didn't quite
>>  realise it earlier. It's slightly hard to predict this value so I would
>>  guess around 1/2 GB to 1 GB? WDYT?
>> 
>>  Regarding the renaming of the config and the new metric, sure would
>>  include it in the KIP.
>> 
>>  Lastly, the importance would also be added. I guess Medium should be ok.
>> 
>>  Thanks!
>>  Sagar.
>> 
>> 
>>  On Sat, Aug 28, 2021 at 10:42 AM Sophie Blee-Goldman
>>   wrote:
>> 
>> > 1) I agree that we should just distribute the bytes evenly, at least for
>> > now. It's simpler to understand and we can always change it later, plus
>> > it makes sense to keep this aligned with how the cache works today
>> >
>> > 2) +1 to being conservative in the generous sense, it's just not something
>> > we can predict with any degree of accuracy and even if we could, the
>> > appropriate value is going to differ wildly across applications and use
>> > cases. We might want to just pick some multiple of the default cache size,
>> > and maybe do some research on other relevant defaults or sizes (default
>> > JVM heap, size of available memory in common hosts eg EC2 instances, etc).
>> > We don't need to worry as much about erring on the side of too big, since
>> > other configs like the max.poll.records will help somewhat to keep it from
>> > exploding.
>> >
>> > 4) 100%, I always found the *cache.max.bytes.buffering* config name to be
>> > incredibly confusing. Deprecating this in favor of
>> > "*statestore.cache.max.bytes*" and aligning it to the new input buffer
>> > config sounds good to me to include here.
>> >
>> > 5) The KIP should list all relevant public-facing changes, including
>> > metadata like the config's "Importance". Personally I would recommend
>> > Medium, or even High if we're really worried about the default being
>> > wrong for a lot of users
>> >
>> > Thanks for the KIP, besides those few things that Guozhang brought up and
>> > the config importance, everything SGTM
>> >
>> > -Sophie
>> >
>> > On Thu, Aug 26, 2021 at 2:41 PM Guozhang Wang 
>>  wrote:
>> >
>> >> 1) I meant for your proposed solution. I.e. to distribute the configured
>> >> bytes among threads evenly.
>> >>
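A minimal sketch of the even split described in point 1 (the method name and the per-thread integer division are assumptions for illustration, not the final KIP-770 semantics):

```java
// Illustrative only: divide input.buffer.max.bytes evenly across the
// configured stream threads, as discussed in the thread above.
public class BufferSplitSketch {
    static long bytesPerThread(long inputBufferMaxBytes, int numStreamThreads) {
        if (numStreamThreads <= 0)
            throw new IllegalArgumentException("numStreamThreads must be positive");
        return inputBufferMaxBytes / numStreamThreads;
    }

    public static void main(String[] args) {
        long total = 512L * 1024 * 1024; // the 512 MB default floated in the thread
        assert bytesPerThread(total, 4) == 128L * 1024 * 1024;
    }
}
```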
>> >> 2) I was actually thinking about making the default a large enough value
>> >> so that we would 

Re: [DISCUSS] KIP-776: Add Consumer#peek for debugging/tuning

2021-09-20 Thread Sagar
Thanks Luke for the KIP. I think it makes sense.

@Boyang,

While it is possible to get the functionality using manual offset commit +
offset position rewind as you stated, IMHO it could still be a very handy
addition to the APIs.  The way I see it, manual offset commit + offset
position rewind is for slightly more advanced users and the addition of
peek() API would make it trivial to get the mentioned functionality.
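The rewind-based emulation mentioned above can be illustrated with a toy in-memory consumer. All names here are invented for the sketch (this is not the real Consumer API): the point is only the snapshot-position / poll / seek-back pattern that peek() would package up for users.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Toy single-partition consumer demonstrating how peek() can be emulated
// with a position snapshot plus a seek back (illustrative names only).
public class PeekSketch {
    static final class ToyConsumer {
        private final List<String> log;
        private long position = 0;
        ToyConsumer(List<String> log) { this.log = log; }
        long position() { return position; }
        void seek(long offset) { position = offset; }
        List<String> poll(int max) {
            List<String> out = new ArrayList<>(log.subList((int) position,
                    Math.min(log.size(), (int) position + max)));
            position += out.size();
            return out;
        }
        // peek = snapshot position, poll, seek back to the snapshot
        List<String> peek(int max) {
            long saved = position();
            List<String> records = poll(max);
            seek(saved);
            return records;
        }
    }

    public static void main(String[] args) {
        ToyConsumer c = new ToyConsumer(Arrays.asList("a", "b", "c"));
        List<String> peeked = c.peek(2);
        assert peeked.equals(Arrays.asList("a", "b"));
        assert c.position() == 0;                          // position unchanged
        assert c.poll(2).equals(Arrays.asList("a", "b"));  // poll still sees them
    }
}
```

With the real consumer the same pattern would also have to cope with rebalances and multiple partitions, which is part of why a built-in peek() is attractive for less advanced users.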

I agree to the point of adding a mechanism to fetch a more fine-grained set
of records. Maybe, add another API which takes a Set<TopicPartition>? In
this case, we would probably need to add a behaviour to throw some
exception when a user tries to peek from a TopicPartition that he/she isn't
subscribed to.

nit: In the javadoc, this line =>

This method returns immediately if there are records available or
exception thrown.


should probably be =>


This method returns immediately if there are no records available or
exception thrown.


Thanks!

Sagar.




On Mon, Sep 20, 2021 at 4:22 AM Boyang Chen 
wrote:

> Thanks Luke for the KIP.
>
> I think I understand the motivation is to avoid affecting offset positions
> of the records, but the feature could be easily realized on the user side
> by using manual offset commit + offset position rewind. So the new peek()
> function doesn't provide any new functionality IMHO, weakening the
> motivation a bit.
>
> Additionally, for the peek() case, I believe that users may want to have
> more fine-grained exposure of records, such as from specific partitions
> instead of getting random records. It's probably useful to define an option
> handle class in the parameters to help clarify what specific records to be
> returned.
>
> Boyang
>
> On Sun, Sep 19, 2021 at 1:51 AM Luke Chen  wrote:
>
> > Hi everyone,
> >
> > I'd like to discuss the following proposal to add Consumer#peek for
> > debugging/tuning.
> >
> > The main purpose for Consumer#peek is to allow users:
> >
> >1. peek what records existed at broker side and not increasing the
> >position offsets.
> >2. throw exceptions when there is connection error existed between
> >consumer and broker (or other exceptions will be thrown by "poll")
> >
> >
> > detailed description can be found here:
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=188746244
> >
> >
> > Any comments and feedback are welcomed.
> >
> > Thank you.
> > Luke
> >
>


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #482

2021-09-20 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 4438 lines...]
[2021-09-20T16:53:18.402Z] [Warn] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk@2/core/src/test/scala/unit/kafka/log/LogCleanerParameterizedIntegrationTest.scala:134:
 @nowarn annotation does not suppress any warnings
[2021-09-20T16:53:18.402Z] [Warn] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk@2/core/src/test/scala/unit/kafka/log/LogCleanerParameterizedIntegrationTest.scala:188:
 @nowarn annotation does not suppress any warnings
[2021-09-20T16:53:18.402Z] [Warn] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk@2/core/src/test/scala/unit/kafka/log/LogConfigTest.scala:52:
 @nowarn annotation does not suppress any warnings
[2021-09-20T16:53:18.402Z] [Warn] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk@2/core/src/test/scala/unit/kafka/log/LogConfigTest.scala:78:
 @nowarn annotation does not suppress any warnings
[2021-09-20T16:53:18.402Z] [Warn] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk@2/core/src/test/scala/unit/kafka/log/LogLoaderTest.scala:229:
 @nowarn annotation does not suppress any warnings
[2021-09-20T16:53:18.402Z] [Warn] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk@2/core/src/test/scala/unit/kafka/log/LogLoaderTest.scala:470:
 @nowarn annotation does not suppress any warnings
[2021-09-20T16:53:18.402Z] [Warn] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk@2/core/src/test/scala/unit/kafka/log/LogLoaderTest.scala:527:
 @nowarn annotation does not suppress any warnings
[2021-09-20T16:53:18.402Z] [Warn] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk@2/core/src/test/scala/unit/kafka/log/LogLoaderTest.scala:584:
 @nowarn annotation does not suppress any warnings
[2021-09-20T16:53:18.402Z] [Warn] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk@2/core/src/test/scala/unit/kafka/log/LogLoaderTest.scala:831:
 @nowarn annotation does not suppress any warnings
[2021-09-20T16:53:18.402Z] [Warn] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk@2/core/src/test/scala/unit/kafka/log/LogLoaderTest.scala:1054:
 @nowarn annotation does not suppress any warnings
[2021-09-20T16:53:18.402Z] [Warn] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk@2/core/src/test/scala/unit/kafka/log/OffsetIndexTest.scala:52:
 @nowarn annotation does not suppress any warnings
[2021-09-20T16:53:18.402Z] [Warn] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk@2/core/src/test/scala/unit/kafka/log/UnifiedLogTest.scala:2306:
 @nowarn annotation does not suppress any warnings
[2021-09-20T16:53:18.402Z] [Warn] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk@2/core/src/test/scala/unit/kafka/log/UnifiedLogTest.scala:2328:
 @nowarn annotation does not suppress any warnings
[2021-09-20T16:53:18.402Z] [Warn] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk@2/core/src/test/scala/unit/kafka/server/DynamicBrokerConfigTest.scala:83:
 @nowarn annotation does not suppress any warnings
[2021-09-20T16:53:18.402Z] [Warn] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk@2/core/src/test/scala/unit/kafka/server/DynamicConfigChangeTest.scala:98:
 @nowarn annotation does not suppress any warnings
[2021-09-20T16:53:18.402Z] [Warn] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk@2/core/src/test/scala/unit/kafka/server/FetchRequestWithLegacyMessageFormatTest.scala:44:
 @nowarn annotation does not suppress any warnings
[2021-09-20T16:53:18.402Z] [Warn] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk@2/core/src/test/scala/unit/kafka/server/KafkaConfigTest.scala:402:
 @nowarn annotation does not suppress any warnings
[2021-09-20T16:53:18.402Z] [Warn] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk@2/core/src/test/scala/unit/kafka/server/KafkaConfigTest.scala:551:
 @nowarn annotation does not suppress any warnings
[2021-09-20T16:53:18.402Z] [Warn] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk@2/core/src/test/scala/unit/kafka/server/KafkaConfigTest.scala:829:
 @nowarn annotation does not suppress any warnings
[2021-09-20T16:53:18.402Z] [Warn] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk@2/core/src/test/scala/unit/kafka/server/LogDirFailureTest.scala:69:
 @nowarn annotation does not suppress any warnings
[2021-09-20T16:53:18.402Z] [Warn] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk@2/core/src/test/scala/unit/kafka/tools/MirrorMakerTest.scala:29:
 @nowarn annotation does not suppress any warnings
[2021-09-20T16:53:18.402Z] [Warn] 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk@2/core/src/test/scala/unit/kafka/utils/TestUtils.scala:341:
 @nowarn annotation does not suppress any warnings
[2021-09-20T16:53:18.402Z] 42 warnings found
[2021-09-20T16:53:18.402Z] 9 errors found
[2021-09-20T16:53:18.402Z] 
[2021-09-20T16:53:18.402Z] > Task :core:compileTestScala FAILED
[2021-09-20T16:53:19.334Z] > 

Re: [DISCUSS] KIP-714: Client metrics and observability

2021-09-20 Thread Feng Min
LGTM in terms of RPC separation and the new SubscriptionId to detect target
metric change on the server side.

On Tue, Sep 14, 2021 at 12:48 AM Magnus Edenhill  wrote:

> Thanks for your feedback Colin, see my updated proposal below.
>
>
> Den tors 22 juli 2021 kl 03:17 skrev Colin McCabe :
>
> > On Tue, Jun 29, 2021, at 07:22, Magnus Edenhill wrote:
> > > Den tors 17 juni 2021 kl 00:52 skrev Colin McCabe  >:
> > > > A few critiques:
> > > >
> > > > - As I wrote above, I think this could benefit a lot by being split into
> > > > several RPCs. A registration RPC, a report RPC, and an unregister RPC
> > > > seem like logical choices.
> > > >
> > >
> > > Responded to this in your previous mail, but in short I think a single
> > > request is sufficient and keeps the implementation complexity / state
> > > down.
> > >
> >
> > Hi Magnus,
> >
> > I still suspect that trying to do everything with a single RPC is more
> > complex than using multiple RPCs.
> >
> > Can you go into more detail about how the client learns what metrics it
> > should send? This was the purpose of the "registration" step in my scheme
> > above.
> >
> > It seems quite awkward to combine an RPC for reporting metrics with an
> > RPC for finding out what metrics are configured to be reported. For
> > example, how would you build a tool to check what metrics are configured
> > to be reported? Does the tool have to report fake metrics, just because
> > there's no other way to get back that information? Seems wrong. (It would
> > be a bit like combining createTopics and listTopics for "simplicity")
> >
>
>
>
> Splitting up the API into separate data and control requests makes sense.
> With a split we would have one API for querying the broker for configured
> metrics subscriptions,
> and one API for pushing the collected metrics to the broker.
>
> A mechanism is still needed to notify the client when the subscription is
> changed;
> I’ve added a SubscriptionId for this purpose (which could be a checksum of
> the configured metrics subscription), this id is sent to the client along
> with the metrics subscription, and the client sends it back to the broker
> when pushing metrics. If the broker finds the pushed subscription id to
> differ from what is expected it will return an error to the client, which
> triggers the client to retrieve the new subscribed metrics and an updated
> subscription id. The generation of the subscriptionId is opaque to the
> client.
>
>
> Something like this:
>
> // Get the configured metrics subscription.
> GetTelemetrySubscriptionsRequest {
>StrNull  ClientInstanceId  // Null on first invocation to retrieve a
> newly generated instance id from the broker.
> }
>
> GetTelemetrySubscriptionsResponse {
>   Int16  ErrorCode
>   Int32  SubscriptionId   // This is used for comparison in
> PushTelemetryRequest. Could be a crc32 of the subscription.
>   StrClientInstanceId
>   Int8   AcceptedContentTypes
>   Array  SubscribedMetrics[] {
>   String MetricsPrefix
>   Int32  IntervalMs
>   }
> }
>
>
> The ContentType is a bitmask in this new proposal, high bits indicate
> compression:
>   0x01   OTLPv08
>   0x10   GZIP
>   0x40   ZSTD
>   0x80   LZ4
>
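That bitmask can be sketched directly in code; the constant names below are invented for the example, only the bit values come from the proposal above:

```java
// Illustrative encoding of the proposed ContentType bitmask: the low bits
// select the serialization format, the high bits the compression codec.
public class ContentTypeSketch {
    static final int OTLP_V08 = 0x01;
    static final int GZIP     = 0x10;
    static final int ZSTD     = 0x40;
    static final int LZ4      = 0x80;

    public static void main(String[] args) {
        int contentType = OTLP_V08 | ZSTD;        // e.g. OTLPv08|ZSTD
        assert (contentType & ZSTD) != 0;         // compression codec is set
        assert (contentType & GZIP) == 0;         // no other codec bits
        assert (contentType & 0x0F) == OTLP_V08;  // format bits
        System.out.println(Integer.toHexString(contentType)); // prints "41"
    }
}
```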
>
> // Push metrics
> PushTelemetryRequest {
>StrClientInstanceId
>Int32  SubscriptionId// The collected metrics in this request are
> based on the subscription with this Id.
>Int8   ContentType   // E.g., OTLPv08|ZSTD
>Bool   Terminating
>Binary Metrics
> }
>
>
> PushTelemetryResponse {
>Int32 ThrottleTime
>Int16 ErrorCode
> }
>
>
> An example run:
>
> 1. Client instance starts, connects to broker.
> 2. > GetTelemetrySubscriptionsRequest{ ClientInstanceId=Null } // Requests
> an instance id and the subscribed metrics.
> 3. < GetTelemetrySubscriptionsResponse{
>   ErrorCode = 0,
>   SubscriptionId = 0x234adf34,
>   ClientInstanceId = f00d-feed-deff-ceff--…,
>   AcceptedContentTypes = OTLPv08|ZSTD|LZ4,
>   SubscribedMetrics[] = {
>  { “client.producer.tx.”, 6 },
>  { “client.memory.rss”, 90 },
>   }
>}
> 4. Client updates its metrics subscription, next push to fire in 60
> seconds.
> 5. 60 seconds passes
> 6. > PushTelemetryRequest{
>ClientInstanceId = f00d-feed-deff-ceff--….,
>SubscriptionId = 0x234adf34,
>ContentType = OTLPv08|ZSTD,
>Terminating = False,
>Metrics = …// zstd-compressed OTLPv08-protobuf-serialized metrics
>   }
> 7. < PushTelemetryResponse{ 0, NO_ERROR }
> 8. 60 seconds passes
> 9. > PushTelemetryRequest…
> …
> 56. The operator changes the configured metrics subscriptions (through
> Admin API).
> 57. > PushTelemetryRequest{ .. SubscriptionId = 0x234adf34 .. }
> 58. The subscriptionId no longer matches since the subscription has been
> updated, broker responds with an error:
> 59. < PushTelemetryResponse{ 0,   ERR_INVALID_SUBSCRIPTION_ID }
> 60. The error triggers the client to request the subscriptions 

[jira] [Resolved] (KAFKA-13254) Deadlock when expanding ISR

2021-09-20 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-13254.
-
Resolution: Fixed

> Deadlock when expanding ISR
> ---
>
> Key: KAFKA-13254
> URL: https://issues.apache.org/jira/browse/KAFKA-13254
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Major
>
> Found this when debugging downgrade system test failures. The patch for 
> https://issues.apache.org/jira/browse/KAFKA-13091 introduced a deadlock. Here 
> are the jstack details:
> {code}
> "data-plane-kafka-request-handler-4":
>   waiting for ownable synchronizer 0xfcc00020, (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync),
>   which is held by "data-plane-kafka-request-handler-5"
> "data-plane-kafka-request-handler-5":
>   waiting for ownable synchronizer 0xc9161b20, (a 
> java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync),
>   which is held by "data-plane-kafka-request-handler-4"
> "data-plane-kafka-request-handler-4":
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0xfcc00020> (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync)
> at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
> at 
> java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:209)
> at 
> java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
> at 
> kafka.server.DelayedOperation.safeTryComplete(DelayedOperation.scala:121)
> at 
> kafka.server.DelayedOperationPurgatory$Watchers.tryCompleteWatched(DelayedOperation.scala:362)
> at 
> kafka.server.DelayedOperationPurgatory.checkAndComplete(DelayedOperation.scala:264)
> at 
> kafka.cluster.DelayedOperations.checkAndCompleteAll(Partition.scala:59)
> at 
> kafka.cluster.Partition.tryCompleteDelayedRequests(Partition.scala:907)
> at 
> kafka.cluster.Partition.handleAlterIsrResponse(Partition.scala:1421)
> at 
> kafka.cluster.Partition.$anonfun$sendAlterIsrRequest$1(Partition.scala:1340)
> at 
> kafka.cluster.Partition.$anonfun$sendAlterIsrRequest$1$adapted(Partition.scala:1340)
> at kafka.cluster.Partition$$Lambda$1496/2055478409.apply(Unknown 
> Source)
> at kafka.server.ZkIsrManager.submit(ZkIsrManager.scala:74)
> at kafka.cluster.Partition.sendAlterIsrRequest(Partition.scala:1345)
> at kafka.cluster.Partition.expandIsr(Partition.scala:1312)
> at 
> kafka.cluster.Partition.$anonfun$maybeExpandIsr$2(Partition.scala:755)
> at kafka.cluster.Partition.maybeExpandIsr(Partition.scala:754)
> at 
> kafka.cluster.Partition.updateFollowerFetchState(Partition.scala:672)
> at 
> kafka.server.ReplicaManager.$anonfun$updateFollowerFetchState$1(ReplicaManager.scala:1806)
> at kafka.server.ReplicaManager$$Lambda$1075/1996432270.apply(Unknown 
> Source)
> at 
> scala.collection.StrictOptimizedIterableOps.map(StrictOptimizedIterableOps.scala:99)
> at 
> scala.collection.StrictOptimizedIterableOps.map$(StrictOptimizedIterableOps.scala:86)
> at scala.collection.mutable.ArrayBuffer.map(ArrayBuffer.scala:42)
> at 
> kafka.server.ReplicaManager.updateFollowerFetchState(ReplicaManager.scala:1790)
> at 
> kafka.server.ReplicaManager.readFromLog$1(ReplicaManager.scala:1025)
> at 
> kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:1029)
> at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:970)
> at kafka.server.KafkaApis.handle(KafkaApis.scala:173)
> at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:75)
> at java.lang.Thread.run(Thread.java:748)
> "data-plane-kafka-request-handler-5":
> at sun.misc.Unsafe.park(Native Method)
> - parking to wait for  <0xc9161b20> (a 
> 

Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.0 #135

2021-09-20 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 482813 lines...]
[Pipeline] // node
[Pipeline] }
[Pipeline] // timestamps
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[2021-09-20T10:11:56.758Z] 
[2021-09-20T10:11:56.758Z] SslEndToEndAuthorizationTest > 
testProduceConsumeWithWildcardAcls() PASSED
[2021-09-20T10:11:56.758Z] 
[2021-09-20T10:11:56.758Z] SslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe() STARTED
[2021-09-20T10:12:07.130Z] 
[2021-09-20T10:12:07.130Z] SslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe() PASSED
[2021-09-20T10:12:07.130Z] 
[2021-09-20T10:12:07.130Z] SslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign() STARTED
[2021-09-20T10:12:07.130Z] 
[2021-09-20T10:12:07.130Z] GroupEndToEndAuthorizationTest > 
testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl() PASSED
[2021-09-20T10:12:07.130Z] 
[2021-09-20T10:12:07.130Z] GroupEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe() STARTED
[2021-09-20T10:12:15.262Z] 
[2021-09-20T10:12:15.262Z] SslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign() PASSED
[2021-09-20T10:12:15.262Z] 
[2021-09-20T10:12:15.262Z] SslEndToEndAuthorizationTest > testNoGroupAcl() 
STARTED
[2021-09-20T10:12:23.990Z] 
[2021-09-20T10:12:23.990Z] SslEndToEndAuthorizationTest > testNoGroupAcl() 
PASSED
[2021-09-20T10:12:23.990Z] 
[2021-09-20T10:12:23.990Z] SslEndToEndAuthorizationTest > 
testNoProduceWithDescribeAcl() STARTED
[2021-09-20T10:12:23.990Z] 
[2021-09-20T10:12:23.990Z] GroupEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe() PASSED
[2021-09-20T10:12:23.990Z] 
[2021-09-20T10:12:23.990Z] GroupEndToEndAuthorizationTest > 
testTwoConsumersWithDifferentSaslCredentials() STARTED
[2021-09-20T10:12:31.708Z] 
[2021-09-20T10:12:31.708Z] SslEndToEndAuthorizationTest > 
testNoProduceWithDescribeAcl() PASSED
[2021-09-20T10:12:31.708Z] 
[2021-09-20T10:12:31.708Z] SslEndToEndAuthorizationTest > 
testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl() STARTED
[2021-09-20T10:12:38.671Z] 
[2021-09-20T10:12:38.671Z] GroupEndToEndAuthorizationTest > 
testTwoConsumersWithDifferentSaslCredentials() PASSED
[2021-09-20T10:12:38.671Z] 
[2021-09-20T10:12:38.671Z] SslConsumerTest > testCoordinatorFailover() STARTED
[2021-09-20T10:12:42.597Z] 
[2021-09-20T10:12:42.597Z] SslEndToEndAuthorizationTest > 
testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl() PASSED
[2021-09-20T10:12:42.597Z] 
[2021-09-20T10:12:42.597Z] SslEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe() STARTED
[2021-09-20T10:12:47.129Z] 
[2021-09-20T10:12:47.129Z] SslConsumerTest > testCoordinatorFailover() PASSED
[2021-09-20T10:12:47.129Z] 
[2021-09-20T10:12:47.129Z] SslConsumerTest > testSimpleConsumption() STARTED
[2021-09-20T10:12:54.435Z] 
[2021-09-20T10:12:54.435Z] SslEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe() PASSED
[2021-09-20T10:12:54.435Z] 
[2021-09-20T10:12:54.435Z] UserQuotaTest > 
testProducerConsumerOverrideLowerQuota() STARTED
[2021-09-20T10:12:55.554Z] 
[2021-09-20T10:12:55.554Z] SslConsumerTest > testSimpleConsumption() PASSED
[2021-09-20T10:12:55.554Z] 
[2021-09-20T10:12:55.554Z] SaslOAuthBearerSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe() STARTED
[2021-09-20T10:13:10.345Z] 
[2021-09-20T10:13:10.345Z] SaslOAuthBearerSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe() PASSED
[2021-09-20T10:13:10.345Z] 
[2021-09-20T10:13:10.345Z] SaslOAuthBearerSslEndToEndAuthorizationTest > 
testProduceConsumeWithPrefixedAcls() STARTED
[2021-09-20T10:13:20.292Z] 
[2021-09-20T10:13:20.292Z] SaslOAuthBearerSslEndToEndAuthorizationTest > 
testProduceConsumeWithPrefixedAcls() PASSED
[2021-09-20T10:13:20.292Z] 
[2021-09-20T10:13:20.292Z] SaslOAuthBearerSslEndToEndAuthorizationTest > 
testProduceConsumeViaAssign() STARTED
[2021-09-20T10:13:28.904Z] 
[2021-09-20T10:13:28.904Z] SaslOAuthBearerSslEndToEndAuthorizationTest > 
testProduceConsumeViaAssign() PASSED
[2021-09-20T10:13:28.904Z] 
[2021-09-20T10:13:28.904Z] SaslOAuthBearerSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaAssign() STARTED
[2021-09-20T10:13:35.658Z] 
[2021-09-20T10:13:35.658Z] SaslOAuthBearerSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaAssign() PASSED
[2021-09-20T10:13:35.658Z] 
[2021-09-20T10:13:35.658Z] SaslOAuthBearerSslEndToEndAuthorizationTest > 
testProduceConsumeTopicAutoCreateTopicCreateAcl() STARTED
[2021-09-20T10:13:37.739Z] 
[2021-09-20T10:13:37.739Z] UserQuotaTest > 
testProducerConsumerOverrideLowerQuota() PASSED
[2021-09-20T10:13:37.739Z] 
[2021-09-20T10:13:37.740Z] UserQuotaTest > 
testProducerConsumerOverrideUnthrottled() STARTED
[2021-09-20T10:13:45.701Z] 
[2021-09-20T10:13:45.701Z] SaslOAuthBearerSslEndToEndAuthorizationTest > 
testProduceConsumeTopicAutoCreateTopicCreateAcl() 

[GitHub] [kafka-site] rhauch merged pull request #374: Change 2.8.0 downloads to use archives

2021-09-20 Thread GitBox


rhauch merged pull request #374:
URL: https://github.com/apache/kafka-site/pull/374


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




Re: [ANNOUNCE] Apache Kafka 2.8.1

2021-09-20 Thread Randall Hauch
Thank you, David, for serving as the release manager for 2.8.1. Great work!

And congratulations to the Apache Kafka community!

Best regards,

Randall

On Mon, Sep 20, 2021 at 2:48 AM David Jacot  wrote:

> The Apache Kafka community is pleased to announce the release for
> Apache Kafka 2.8.1
>
> Apache Kafka 2.8.1 is a bugfix release and fixes 49 issues since the 2.8.0
> release. Please see the release notes for more information.
>
> All of the changes in this release can be found in the release notes:
> https://www.apache.org/dist/kafka/2.8.1/RELEASE_NOTES.html
>
>
> You can download the source and binary release (Scala 2.12 and 2.13) from:
> https://kafka.apache.org/downloads#2.8.1
>
>
> ---
>
>
> Apache Kafka is a distributed streaming platform with four core APIs:
>
>
> ** The Producer API allows an application to publish a stream of records to
> one or more Kafka topics.
>
> ** The Consumer API allows an application to subscribe to one or more
> topics and process the stream of records produced to them.
>
> ** The Streams API allows an application to act as a stream processor,
> consuming an input stream from one or more topics and producing an
> output stream to one or more output topics, effectively transforming the
> input streams to output streams.
>
> ** The Connector API allows building and running reusable producers or
> consumers that connect Kafka topics to existing applications or data
> systems. For example, a connector to a relational database might
> capture every change to a table.
>
>
> With these APIs, Kafka can be used for two broad classes of application:
>
> ** Building real-time streaming data pipelines that reliably get data
> between systems or applications.
>
> ** Building real-time streaming applications that transform or react
> to the streams of data.
>
>
> Apache Kafka is in use at large and small companies worldwide, including
> Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest, Rabobank,
> Target, The New York Times, Uber, Yelp, and Zalando, among others.
>
> A big thank you to the following 35 contributors to this release!
>
> A. Sophie Blee-Goldman, Alexander Iskuskov, Andras Katona, Bill Bejeck,
> Bruno Cadonna, Chris Egerton, Colin Patrick McCabe, David Arthur, David
> Jacot,
> Davor Poldrugo, Dejan Stojadinović, Geordie, Guozhang Wang, Ismael Juma,
> Jason Gustafson, Jeff Kim, John Gray, John Roesler, Justine Olshan,
> Konstantine Karantasis, Lee Dongjin, Luke Chen, Matthias J. Sax, Michael
> Carter,
> Mickael Maison, Phil Hardwick, Rajini Sivaram, Randall Hauch, Shay Elkin,
> Stanislav Vodetskyi, Tom Bentley, vamossagar12, wenbingshen, YiDing-Duke
>
> We welcome your help and feedback. For more information on how to
> report problems, and to get involved, visit the project website at
> https://kafka.apache.org/
>
> Thank you!
>
>
> Regards,
> David
>


[GitHub] [kafka-site] rhauch opened a new pull request #374: Change 2.8.0 downloads to use archives

2021-09-20 Thread GitBox


rhauch opened a new pull request #374:
URL: https://github.com/apache/kafka-site/pull/374


   Now that 2.8.1 has been released, we can change the 2.8.0 download links to 
use the archive site.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




Re: [VOTE] KIP-774: Deprecate public access to Admin client's *Result constructors

2021-09-20 Thread Josep Prat
Hi Tom,

Thanks for the KIP. It's a +1 (non binding) from my side.

Best,
———
Josep Prat

Aiven Deutschland GmbH

Immanuelkirchstraße 26, 10405 Berlin

Amtsgericht Charlottenburg, HRB 209739 B

Geschäftsführer: Oskari Saarenmaa & Hannu Valtonen

m: +491715557497

w: aiven.io

e: josep.p...@aiven.io

On Mon, Sep 20, 2021, 14:45 Tom Bentley  wrote:

> Hi,
>
> I'd like to start a vote for KIP-774 which proposes to deprecate public
> access to the Admin client's *Result constructors:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-774%3A+Deprecate+public+access+to+Admin+client%27s+*Result+constructors
>
> Thanks for your time,
>
> Tom
>



[jira] [Resolved] (KAFKA-10643) Static membership - repetitive PreparingRebalance with updating metadata for member reason

2021-09-20 Thread Eran Levy (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eran Levy resolved KAFKA-10643.
---
Resolution: Cannot Reproduce

> Static membership - repetitive PreparingRebalance with updating metadata for 
> member reason
> --
>
> Key: KAFKA-10643
> URL: https://issues.apache.org/jira/browse/KAFKA-10643
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.6.0
>Reporter: Eran Levy
>Priority: Major
> Attachments: broker-4-11.csv, client-4-11.csv, 
> client-d-9-11-11-2020.csv
>
>
> Kafka streams 2.6.0, brokers version 2.6.0. Kafka nodes are healthy, kafka 
> streams app is healthy. 
> Configured with static membership. 
> Every 10 minutes (I assume because of topic.metadata.refresh.interval.ms), I 
> see the following group coordinator log for different stream consumers: 
> INFO [GroupCoordinator 2]: Preparing to rebalance group **--**-stream in 
> state PreparingRebalance with old generation 12244 (__consumer_offsets-45) 
> (reason: Updating metadata for member 
> -stream-11-1-013edd56-ed93-4370-b07c-1c29fbe72c9a) 
> (kafka.coordinator.group.GroupCoordinator)
> and right after that the following log: 
> INFO [GroupCoordinator 2]: Assignment received from leader for group 
> **-**-stream for generation 12246 (kafka.coordinator.group.GroupCoordinator)
>  
> I looked a bit at the Kafka code and I'm not sure I understand why this is 
> happening - does this line describe the situation here re the 
> "reason:"? [https://github.com/apache/kafka/blob/7ca299b8c0f2f3256c40b694078e422350c20d19/core/src/main/scala/kafka/coordinator/group/GroupCoordinator.scala#L311]
> I also don't see it happening often in the other Kafka Streams applications 
> that we have. 
> The only suspicious thing I see is that around every hour different pods of 
> this Kafka Streams application throw this exception: 
> {"timestamp":"2020-10-25T06:44:20.414Z","level":"INFO","thread":"**-**-stream-94561945-4191-4a07-ac1b-07b27e044402-StreamThread-1","logger":"org.apache.kafka.clients.FetchSessionHandler","message":"[Consumer
>  
> clientId=**-**-stream-94561945-4191-4a07-ac1b-07b27e044402-StreamThread-1-restore-consumer,
>  groupId=null] Error sending fetch request (sessionId=34683236, epoch=2872) 
> to node 
> 3:","context":"default","exception":"org.apache.kafka.common.errors.DisconnectException:
>  null\n"}
> I came across this strange behaviour after starting to investigate a 
> rebalance that got stuck after one of the members left the group - the only 
> thing I found is that, perhaps because of these frequent PreparingRebalance 
> states, the app might be affected by this bug - KAFKA-9752?
> I don't understand why it happens; it didn't happen before I applied static 
> membership to this Kafka Streams application (around 2 weeks ago). 
> I will be happy if you can help.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[VOTE] KIP-774: Deprecate public access to Admin client's *Result constructors

2021-09-20 Thread Tom Bentley
Hi,

I'd like to start a vote for KIP-774 which proposes to deprecate public
access to the Admin client's *Result constructors:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-774%3A+Deprecate+public+access+to+Admin+client%27s+*Result+constructors

Thanks for your time,

Tom


Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #481

2021-09-20 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: Kafka » Kafka Branch Builder » 2.8 #84

2021-09-20 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #480

2021-09-20 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 494098 lines...]
[2021-09-20T09:21:13.524Z] ResetConsumerGroupOffsetTest > 
testResetOffsetsNotExistingGroup() STARTED
[2021-09-20T09:21:15.779Z] 
[2021-09-20T09:21:15.779Z] ResetConsumerGroupOffsetTest > 
testResetOffsetsNotExistingGroup() PASSED
[2021-09-20T09:21:15.779Z] 
[2021-09-20T09:21:15.779Z] ResetConsumerGroupOffsetTest > 
testResetOffsetsExistingTopicSelectedGroups() STARTED
[2021-09-20T09:21:28.519Z] 
[2021-09-20T09:21:28.519Z] DescribeAuthorizedOperationsTest > 
testTopicAuthorizedOperations() PASSED
[2021-09-20T09:21:28.519Z] 
[2021-09-20T09:21:28.519Z] DescribeAuthorizedOperationsTest > 
testConsumerGroupAuthorizedOperations() STARTED
[2021-09-20T09:21:35.032Z] 
[2021-09-20T09:21:35.032Z] ResetConsumerGroupOffsetTest > 
testResetOffsetsExistingTopicSelectedGroups() PASSED
[2021-09-20T09:21:35.032Z] 
[2021-09-20T09:21:35.032Z] ResetConsumerGroupOffsetTest > 
testResetOffsetsToZonedDateTime() STARTED
[2021-09-20T09:21:45.657Z] 
[2021-09-20T09:21:45.657Z] ResetConsumerGroupOffsetTest > 
testResetOffsetsToZonedDateTime() PASSED
[2021-09-20T09:21:45.657Z] 
[2021-09-20T09:21:45.657Z] ResetConsumerGroupOffsetTest > 
testResetOffsetsToEarliest() STARTED
[2021-09-20T09:21:48.049Z] 
[2021-09-20T09:21:48.049Z] DescribeAuthorizedOperationsTest > 
testConsumerGroupAuthorizedOperations() PASSED
[2021-09-20T09:21:48.049Z] 
[2021-09-20T09:21:48.049Z] ListOffsetsIntegrationTest > 
testMaxTimestampOffset() STARTED
[2021-09-20T09:21:48.049Z] 
[2021-09-20T09:21:48.049Z] ListOffsetsIntegrationTest > 
testMaxTimestampOffset() PASSED
[2021-09-20T09:21:48.049Z] 
[2021-09-20T09:21:48.049Z] ListOffsetsIntegrationTest > testLatestOffset() 
STARTED
[2021-09-20T09:21:50.486Z] 
[2021-09-20T09:21:50.486Z] ListOffsetsIntegrationTest > testLatestOffset() 
PASSED
[2021-09-20T09:21:50.486Z] 
[2021-09-20T09:21:50.486Z] ListOffsetsIntegrationTest > testEarliestOffset() 
STARTED
[2021-09-20T09:21:52.924Z] 
[2021-09-20T09:21:52.924Z] ListOffsetsIntegrationTest > testEarliestOffset() 
PASSED
[2021-09-20T09:21:53.462Z] 
[2021-09-20T09:21:53.462Z] ResetConsumerGroupOffsetTest > 
testResetOffsetsToEarliest() PASSED
[2021-09-20T09:21:53.462Z] 
[2021-09-20T09:21:53.462Z] ResetConsumerGroupOffsetTest > 
testResetOffsetsExportImportPlan() STARTED
[2021-09-20T09:21:54.053Z] 
[2021-09-20T09:21:54.053Z] 1348 tests completed, 1 failed, 7 skipped
[2021-09-20T09:21:54.053Z] There were failing tests. See the report at: 
file:///home/jenkins/workspace/Kafka_kafka_trunk/core/build/reports/tests/integrationTest/index.html
[2021-09-20T09:21:54.053Z] 
[2021-09-20T09:21:54.053Z] Deprecated Gradle features were used in this build, 
making it incompatible with Gradle 8.0.
[2021-09-20T09:21:54.053Z] 
[2021-09-20T09:21:54.053Z] You can use '--warning-mode all' to show the 
individual deprecation warnings and determine if they come from your own 
scripts or plugins.
[2021-09-20T09:21:54.053Z] 
[2021-09-20T09:21:54.053Z] See 
https://docs.gradle.org/7.2/userguide/command_line_interface.html#sec:command_line_warnings
[2021-09-20T09:21:54.053Z] 
[2021-09-20T09:21:54.053Z] BUILD SUCCESSFUL in 1h 52m 2s
[2021-09-20T09:21:54.053Z] 202 actionable tasks: 109 executed, 93 up-to-date
[2021-09-20T09:21:55.079Z] 
[2021-09-20T09:21:55.079Z] See the profiling report at: 
file:///home/jenkins/workspace/Kafka_kafka_trunk/build/reports/profile/profile-2021-09-20-07-29-56.html
[2021-09-20T09:21:55.079Z] A fine-grained performance profile is available: use 
the --scan option.
[Pipeline] junit
[2021-09-20T09:21:55.860Z] Recording test results
[2021-09-20T09:22:10.452Z] [Checks API] No suitable checks publisher found.
[Pipeline] echo
[2021-09-20T09:22:10.453Z] Skipping Kafka Streams archetype test for Java 17
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // timestamps
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[2021-09-20T09:22:12.321Z] 
[2021-09-20T09:22:12.322Z] ResetConsumerGroupOffsetTest > 
testResetOffsetsExportImportPlan() PASSED
[2021-09-20T09:22:12.322Z] 
[2021-09-20T09:22:12.322Z] ResetConsumerGroupOffsetTest > 
testResetOffsetsToSpecificOffset() STARTED
[2021-09-20T09:22:19.197Z] 
[2021-09-20T09:22:19.197Z] ResetConsumerGroupOffsetTest > 
testResetOffsetsToSpecificOffset() PASSED
[2021-09-20T09:22:19.197Z] 
[2021-09-20T09:22:19.197Z] ResetConsumerGroupOffsetTest > 
testResetOffsetsShiftPlus() STARTED
[2021-09-20T09:22:28.163Z] 
[2021-09-20T09:22:28.163Z] ResetConsumerGroupOffsetTest > 
testResetOffsetsShiftPlus() PASSED
[2021-09-20T09:22:28.164Z] 
[2021-09-20T09:22:28.164Z] ResetConsumerGroupOffsetTest > 
testResetOffsetsToLatest() STARTED
[2021-09-20T09:22:36.688Z] 
[2021-09-20T09:22:36.688Z] ResetConsumerGroupOffsetTest > 
testResetOffsetsToLatest() PASSED

[jira] [Created] (KAFKA-13312) 'NetworkDegradeTest#test_rate' should wait until iperf server is listening

2021-09-20 Thread David Jacot (Jira)
David Jacot created KAFKA-13312:
---

 Summary: 'NetworkDegradeTest#test_rate' should wait until iperf 
server is listening
 Key: KAFKA-13312
 URL: https://issues.apache.org/jira/browse/KAFKA-13312
 Project: Kafka
  Issue Type: Bug
Reporter: David Jacot
Assignee: David Jacot


We have seen multiple failures with the following logs:
{noformat}
[DEBUG - 2021-09-17 09:57:58,603 - remoteaccount - _log - lineno:160]: 
ubuntu@worker26: Running ssh command: iperf -i 1 -t 20 -f k -c worker25 [INFO - 
2021-09-17 09:57:58,611 - network_degrade_test - test_rate - lineno:114]: iperf 
output connect failed: Connection refused
{noformat}
The iperf client cannot connect to the iperf server. The test launches the 
server and then immediately launches the client, but there is no guarantee that 
the server is listening by the time the client starts.

It seems that we should add a condition to wait until the server prints `Server 
listening` before starting the client.
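A minimal sketch of the kind of polling guard this suggests (the names below are hypothetical; the real fix would use the system-test framework's own wait helper against the captured iperf output):

```python
import time

def wait_until(condition, timeout_sec, backoff_sec=0.5):
    """Poll `condition` until it returns True, or give up after `timeout_sec`."""
    deadline = time.time() + timeout_sec
    while time.time() < deadline:
        if condition():
            return True
        time.sleep(backoff_sec)
    return False

# Stand-in for the iperf server's captured stdout (hypothetical).
server_log = ["iperf: Server listening on TCP port 5001"]

def server_is_listening():
    return any("Server listening" in line for line in server_log)

# Gate the client start on the server actually listening.
assert wait_until(server_is_listening, timeout_sec=5)
# ... only now launch the iperf client ...
```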



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[ANNOUNCE] Apache Kafka 2.8.1

2021-09-20 Thread David Jacot
The Apache Kafka community is pleased to announce the release for
Apache Kafka 2.8.1

Apache Kafka 2.8.1 is a bugfix release and fixes 49 issues since the 2.8.0
release. Please see the release notes for more information.

All of the changes in this release can be found in the release notes:
https://www.apache.org/dist/kafka/2.8.1/RELEASE_NOTES.html


You can download the source and binary release (Scala 2.12 and 2.13) from:
https://kafka.apache.org/downloads#2.8.1

---


Apache Kafka is a distributed streaming platform with four core APIs:


** The Producer API allows an application to publish a stream of records to
one or more Kafka topics.

** The Consumer API allows an application to subscribe to one or more
topics and process the stream of records produced to them.

** The Streams API allows an application to act as a stream processor,
consuming an input stream from one or more topics and producing an
output stream to one or more output topics, effectively transforming the
input streams to output streams.

** The Connector API allows building and running reusable producers or
consumers that connect Kafka topics to existing applications or data
systems. For example, a connector to a relational database might
capture every change to a table.


With these APIs, Kafka can be used for two broad classes of application:

** Building real-time streaming data pipelines that reliably get data
between systems or applications.

** Building real-time streaming applications that transform or react
to the streams of data.


Apache Kafka is in use at large and small companies worldwide, including
Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest, Rabobank,
Target, The New York Times, Uber, Yelp, and Zalando, among others.

A big thank you to the following 35 contributors to this release!

A. Sophie Blee-Goldman, Alexander Iskuskov, Andras Katona, Bill Bejeck,
Bruno Cadonna, Chris Egerton, Colin Patrick McCabe, David Arthur, David Jacot,
Davor Poldrugo, Dejan Stojadinović, Geordie, Guozhang Wang, Ismael Juma,
Jason Gustafson, Jeff Kim, John Gray, John Roesler, Justine Olshan,
Konstantine Karantasis, Lee Dongjin, Luke Chen, Matthias J. Sax, Michael Carter,
Mickael Maison, Phil Hardwick, Rajini Sivaram, Randall Hauch, Shay Elkin,
Stanislav Vodetskyi, Tom Bentley, vamossagar12, wenbingshen, YiDing-Duke

We welcome your help and feedback. For more information on how to
report problems, and to get involved, visit the project website at
https://kafka.apache.org/

Thank you!


Regards,
David


Re: MM2 - Overriding MirrorSourceConnector Consumer & Producer values

2021-09-20 Thread Jamie
Hi All, 
Has anyone been able to override the producer and consumer values of the MM2 
MirrorSourceConnector as described below?
Many Thanks, 
Jamie


-Original Message-
From: Jamie 
To: us...@kafka.apache.org 
Sent: Thu, 16 Sep 2021 15:52
Subject: MM2 - Overriding MirrorSourceConnector Consumer & Producer values

Hi All, 
I've been trying to override the properties of the consumer and producer in MM2 
to tune them for high throughput. For example, I'm trying to set the consumer's 
fetch.min.bytes to 10. 
I'm running MM2 in a dedicated mirror maker cluster 
(https://cwiki.apache.org/confluence/display/KAFKA/KIP-382%3A+MirrorMaker+2.0#KIP382:MirrorMaker2.0-RunningadedicatedMirrorMakercluster)
 and using version 2.7.1 of Kafka. 
I have the equivalent of the following in my mirror maker properties file:
    clusters = A, B
    A.bootstrap.servers = A_host1:9092, A_host2:9092, A_host3:9092
    B.bootstrap.servers = B_host1:9092, B_host2:9092, B_host3:9092
    # enable and configure individual replication flows
    A->B.enabled = true
    # regex which defines which topics gets replicated. For eg "foo-.*"
    A->B.topics = .test-topic

I'm trying to override the properties of the consumer which fetches records 
from cluster "A" and the producer that sends records to cluster "B". 
I've tried the following in the config file:
    A.consumer.fetch.min.bytes = 10
    A->B.consumer.fetch.min.bytes = 10
    A.fetch.min.bytes = 10
    B.consumer.fetch.min.bytes = 10
    B.fetch.min.bytes = 10
None of these seem to work: when I start MM2 and check the value in the 
MirrorSourceConnector task's consumer and producer configs in the logs, I still 
see the default value for fetch.min.bytes (1) being used.
Am I trying to override the values of the consumer incorrectly or do I need to 
set these in a different place?
Many Thanks, 
Jamie 





Re: Contribution help please

2021-09-20 Thread Luke Chen
Hi Jon,
I checked your PR. I think you didn't set the "indent" correctly.
It looks like your indent setting is set to 2, but we use 4.
Please update the setting in IntelliJ.
REF: https://www.jetbrains.com/help/idea/reformat-and-rearrange-code.html

Thank you.
Luke

On Mon, Sep 20, 2021 at 2:47 PM Jon McEwen  wrote:

> Hello Kafka folks,
>
> Could someone please tell me how to correctly format Kafka Java code?
> Either on command line or in IntelliJ.
>
> I'm working on KAFKA-13303.
>
> Many thanks
>
> Jon McEwen
>
> https://github.com/jonmcewen
>


[jira] [Created] (KAFKA-13311) MM2 should allow propagating arbitrary global configurations to the Connector config

2021-09-20 Thread Daniel Urban (Jira)
Daniel Urban created KAFKA-13311:


 Summary: MM2 should allow propagating arbitrary global 
configurations to the Connector config
 Key: KAFKA-13311
 URL: https://issues.apache.org/jira/browse/KAFKA-13311
 Project: Kafka
  Issue Type: Improvement
  Components: mirrormaker
Affects Versions: 2.8.1
Reporter: Daniel Urban
Assignee: Daniel Urban


Currently, the configuration propagation logic in MM2 only allows a handful of 
configurations to be applied to all Connector configs managed by MM2.

In some cases (e.g. custom topic or group filters, metric reporters, etc.) it 
would be useful to be able to define configurations "globally", without 
repeating the config with a prefix for each replication flow.

E.g. the "connectors." prefix could be used to declare global Connector configs.
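A sketch of what this might look like in the MM2 properties file (the `connectors.` prefix and the class names below are purely illustrative - this is the proposal, not an existing feature):

```properties
# Illustrative only: proposed "connectors." prefix, applied once to
# every Connector config that MM2 manages, instead of per replication flow.
connectors.topic.filter.class = com.example.CustomTopicFilter
connectors.metric.reporters = com.example.MyMetricsReporter
```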



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] Apache Kafka 3.1.0 release

2021-09-20 Thread Josep Prat
+1 (non-binding). Thanks for volunteering David!

Best,

On Sun, Sep 19, 2021 at 12:14 AM Israel Ekpo  wrote:

> Forgot to confirm in my last message, though it was implied. I am a +1 as
> well. Thank you, David.
>
> On Sat, Sep 18, 2021 at 8:54 AM Israel Ekpo  wrote:
>
> > Thanks for volunteering David. It’s great that we are already planning
> the
> > 3.1.0 release.
> >
> >
> > On Thu, Sep 16, 2021 at 3:38 PM Bill Bejeck  wrote:
> >
> >> Thanks for volunteering for the 3.1.0 release David!
> >>
> >> It's a +1 from me.
> >>
> >> -Bill
> >>
> >> On Thu, Sep 16, 2021 at 3:08 PM Konstantine Karantasis <
> >> kkaranta...@apache.org> wrote:
> >>
> >> > Thanks for volunteering to run 3.1.0 David!
> >> >
> >> > +1
> >> >
> >> > Konstantine
> >> >
> >> >
> >> > On Thu, Sep 16, 2021 at 6:42 PM Ismael Juma 
> wrote:
> >> >
> >> > > +1, thanks for volunteering David!
> >> > >
> >> > > Ismael
> >> > >
> >> > > On Thu, Sep 16, 2021, 6:47 AM David Jacot
>  >> >
> >> > > wrote:
> >> > >
> >> > > > Hello All,
> >> > > >
> >> > > > I'd like to volunteer to be the release manager for our next
> >> > > > feature release, 3.1.0. If there are no objections, I'll send
> >> > > > out the release plan soon.
> >> > > >
> >> > > > Regards,
> >> > > > David
> >> > > >
> >> > >
> >> >
> >>
> >
>


-- 

Josep Prat

*Aiven Deutschland GmbH*

Immanuelkirchstraße 26, 10405 Berlin

Amtsgericht Charlottenburg, HRB 209739 B

Geschäftsführer: Oskari Saarenmaa & Hannu Valtonen

*m:* +491715557497

*w:* aiven.io

*e:* josep.p...@aiven.io


Contribution help please

2021-09-20 Thread Jon McEwen
Hello Kafka folks,

Could someone please tell me how to correctly format Kafka Java code? Either on 
command line or in IntelliJ.

I'm working on KAFKA-13303.

Many thanks

Jon McEwen

https://github.com/jonmcewen