[jira] [Created] (KAFKA-14000) Kafka-connect standby server shows empty tasks list

2022-06-15 Thread Xinyu Zou (Jira)
Xinyu Zou created KAFKA-14000:
-

 Summary: Kafka-connect standby server shows empty tasks list
 Key: KAFKA-14000
 URL: https://issues.apache.org/jira/browse/KAFKA-14000
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 2.6.0
Reporter: Xinyu Zou


I'm using Kafka Connect in distributed mode. There are two servers: one active and 
one standby. The standby server sometimes shows an empty tasks list in the 
status REST API response.

curl host:8443/connectors/name1/status
{code:java}
{
    "connector": {
        "state": "RUNNING",
        "worker_id": "1.2.3.4:10443"
    },
    "name": "name1",
    "tasks": [],
    "type": "source"
} {code}
I enabled TRACE logging and checked. As required, the connect-status topic is set 
to cleanup.policy=compact, but messages in the topic are not compacted 
immediately; they are only compacted at a certain interval. So there is usually 
more than one message with the same key. E.g., when Kafka Connect is launched 
there is no connector running. Then we start a new connector, and there will be 
two messages in the connect-status topic:

status-task-name1 : state=RUNNING, workerId='10.251.170.166:10443', 
generation=100

status-task-name1 : __

 

When reading status from the connect-status topic, the code does not sort 
messages by generation.

[https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/ConsumerRecords.java]

So I think this could be improved: we can either sort the messages after poll, 
or compare the generation value before choosing the correct status message.
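
For illustration, here is a minimal sketch of the generation-comparison option (the record shape and helper below are assumptions for this example, not Kafka Connect's actual classes):
{code:java}
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class StatusSelector {
    // Hypothetical shape for this example; the real status record differs.
    record StatusRecord(String key, String state, String workerId, int generation) {}

    // Keep only the record with the highest generation per key, so a stale
    // not-yet-compacted message can never shadow the newest status.
    static Map<String, StatusRecord> latestByGeneration(List<StatusRecord> polled) {
        Map<String, StatusRecord> latest = new HashMap<>();
        for (StatusRecord r : polled) {
            latest.merge(r.key(), r,
                (old, fresh) -> fresh.generation() >= old.generation() ? fresh : old);
        }
        return latest;
    }
}
{code}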



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


Re: [VOTE] KIP-842: Add richer group offset reset mechanisms

2022-06-15 Thread deng ziming
Thank you for this KIP,
+1 (non-binding)

--
Ziming

> On Jun 15, 2022, at 8:54 PM, hudeqi <16120...@bjtu.edu.cn> wrote:
> 
> Hi all,
> 
> I'd like to start a vote on KIP-842 to add some group offset reset mechanisms. 
> Details can be found here: https://cwiki.apache.org/confluence/x/xhyhD
> 
> Any feedback is appreciated.
> 
> Thank you.
> 
> hudeqi
> 
> 



[DISCUSS] KIP-847: Add ProducerCount metrics

2022-06-15 Thread Artem Livshits
Hello,

I'd like to start a discussion on KIP-847:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-847%3A+Add+ProducerCount+metrics
.

-Artem


Re: [DISCUSS] KIP-821: Connect Transforms support for nested structures

2022-06-15 Thread Jorge Esteban Quilcate Otoya
Thanks, Chris. Your feedback is much appreciated!

I see how the current proposal might be underestimating some edge cases.
I'm happy to move the design for deep-scan and multi-values to future
developments related to this KIP and reduce its scope, though I'm open to
more feedback.

Also, just to be sure, are you also proposing not to include array access
at this stage?
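
To make the escaping question concrete, here is a rough sketch (a hypothetical splitter, not the KIP's implementation) of rules where a plain '.' separates segments and '..' escapes a literal dot; it shows why "a...b" is ambiguous:

```
import java.util.ArrayList;
import java.util.List;

// Hypothetical splitter, not the KIP's implementation. With the greedy
// rule below, "a...b" parses as ["a.", "b"]; a non-greedy rule would
// instead yield ["a", ".b"] -- exactly the ambiguity under discussion.
public class NestedPathSketch {
    static List<String> split(String path) {
        List<String> segments = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        for (int i = 0; i < path.length(); i++) {
            char c = path.charAt(i);
            if (c == '.' && i + 1 < path.length() && path.charAt(i + 1) == '.') {
                current.append('.'); // '..' is an escaped literal dot
                i++;                 // greedily consume both dots
            } else if (c == '.') {
                segments.add(current.toString()); // plain '.' separates segments
                current.setLength(0);
            } else {
                current.append(c);
            }
        }
        segments.add(current.toString());
        return segments;
    }

    public static void main(String[] args) {
        System.out.println(split("a.b"));   // [a, b]
        System.out.println(split("a..b"));  // [a.b]
        System.out.println(split("a...b")); // [a., b] under the greedy rule
    }
}
```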

Thanks,
Jorge.

On Tue, 14 Jun 2022 at 03:20, Chris Egerton  wrote:

> Hi Jorge,
>
> I've done some more thinking and I hate to say it, but I think the syntax
> does need to be expanded. Right now it's clear what "a.b" refers to and
> what "a..b" refers to, but what about "a...b"? Is that referring to
> subfield ".b" of field "a", or subfield "b" of field "a."? This gets even
> more complicated when thinking about fields whose names are exclusively
> made up of dots.
>
> I'm also a little hesitant to mix the cases of multi-value paths and deep
> scans. What if you only want to access one subfield deep for an SMT,
> instead of recursing through all the children of a given field? It's akin
> to the distinction between * and ** with file globbing patterns, and there
> could be a substantial performance difference if you have heavily-nested
> fields.
>
> Ultimately, I think that if the proposed "field.syntax.version" property
> sits well with people, it might be better to reduce the scope of the KIP
> back to the original proposal and just focus on adding support for
> explicitly-specified nested values, with no multi-value paths whatsoever,
> knowing that we have an easy way to introduce new syntax and features in
> the future. (We could probably leave the "a...b" case for that next version
> too.)
>
> I was a huge fan of this KIP before we started trying to address more
> complex use cases, and although I don't want to write those off, I think we
> may have bitten off more than we can chew in time for the 3.3.0 release and
> would hate to see this KIP get delayed as a result.
>
> I'd be really curious to hear from Joshua and Tom on this front, though. Is
> it acceptable to move more incrementally here and settle on the syntax
> version property as our means of introducing new features, or is it
> preferable to implement things monolithically and try to get everything (or
> at least, as much as possible) right the first time?
>
> Thanks again for your continued effort on this KIP!
>
> Cheers,
>
> Chris
>
> On Wed, Jun 8, 2022 at 5:41 PM Jorge Esteban Quilcate Otoya <
> quilcate.jo...@gmail.com> wrote:
>
> > Thanks, Chris!
> >
> > Please, find my comments below:
> >
> > On Tue, 7 Jun 2022 at 04:39, Chris Egerton 
> > wrote:
> >
> > > Hi Jorge,
> > >
> > > Thanks! Sorry for the delay; here are my thoughts:
> > >
> > > 1. Under the "Accessing multiple values by deep-scan" header it's stated
> > > that "If deep-scan is used, it must have only one field after the
> > > asterisk level.". However, in example 3 for the Cast SMT and other
> > > examples for other SMTs, the spec contains a field of "*.child.k2",
> > > which appears to have two fields after the asterisk level. I may be
> > > misunderstanding the proposal, but it seems like the two contradict
> > > each other.
> > >
> >
> > Thanks for catching this. I have clarified it by removing this
> > restriction. Also, I have extended the deep-scan scenarios.
> >
> >
> > >
> > > 2. I'm a little unclear on why we need the special handling for arrays
> > > where, for an array field "a", the field name "a" can be treated as
> > > either the array itself, or every element in the array. Is there a
> > > reason we can't use the field name "a.*" to handle the latter case,
> > > and "a" to handle the former?
> > >
> >
> > Agree, this is confusing. I like the `a.*` approach to access array
> > items. I have added this to the proposal.
> >
> >
> > >
> > > 3. How would a user specify that they'd like to access a field with the
> > > literal name "*"?
> > >
> >
> > Good one. I'm proposing an approach similar to how it's proposed to
> > escape dots, with a double asterisk. Curious about your thoughts on this.
> >
> >
> > >
> > > 4. For the Cast SMT, do you think it might bite some people if fields
> > > that can't be cast correctly are silently ignored? I'm imagining the
> > > case where none of the fields in a multi-path expression can be cast
> > > correctly and it ends up eating half of someone's day to track down
> > > why their SMT isn't doing anything.
> > >
> >
> > If I understand correctly, this challenge could be relevant across SMTs.
> > At the moment, most (all?) SMTs just ignore such fields silently.
> > I was thinking about adding a flag `field.on.path.not.found` to either
> > ignore or fail when no paths are found. What do you think?
> >
> >
> > >
> > > 5. For the ExtractField and ValueToKey SMTs, what happens if a
> > > deep-scan field name is used, but only one field is found? Is the
> > > resulting field still an array, or is it just the single field that
> > > was found? (FWIW I'm

[jira] [Created] (KAFKA-13999) Add ProducerCount metrics (KIP-847)

2022-06-15 Thread Artem Livshits (Jira)
Artem Livshits created KAFKA-13999:
--

 Summary: Add ProducerCount metrics (KIP-847)
 Key: KAFKA-13999
 URL: https://issues.apache.org/jira/browse/KAFKA-13999
 Project: Kafka
  Issue Type: Improvement
Reporter: Artem Livshits


See 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-847%3A+Add+ProducerCount+metrics



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (KAFKA-13998) JoinGroupRequestData 'reason' can be too large

2022-06-15 Thread Jim Hughes (Jira)
Jim Hughes created KAFKA-13998:
--

 Summary: JoinGroupRequestData 'reason' can be too large
 Key: KAFKA-13998
 URL: https://issues.apache.org/jira/browse/KAFKA-13998
 Project: Kafka
  Issue Type: Bug
Affects Versions: 3.2.0
Reporter: Jim Hughes
Assignee: Jim Hughes


We saw an exception like this: 

```
org.apache.kafka.streams.errors.StreamsException: java.lang.RuntimeException: 'reason' field is too long to be serialized
	at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:627)
	at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:551)
Caused by: java.lang.RuntimeException: 'reason' field is too long to be serialized
	at org.apache.kafka.common.message.JoinGroupRequestData.addSize(JoinGroupRequestData.java:465)
	at org.apache.kafka.common.protocol.SendBuilder.buildSend(SendBuilder.java:218)
	at org.apache.kafka.common.protocol.SendBuilder.buildRequestSend(SendBuilder.java:187)
	at org.apache.kafka.common.requests.AbstractRequest.toSend(AbstractRequest.java:101)
	at org.apache.kafka.clients.NetworkClient.doSend(NetworkClient.java:524)
	at org.apache.kafka.clients.NetworkClient.doSend(NetworkClient.java:500)
	at org.apache.kafka.clients.NetworkClient.send(NetworkClient.java:460)
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.trySend(ConsumerNetworkClient.java:499)
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:255)
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)
	at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:215)
	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:437)
	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:371)
	at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:542)
	at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1271)
	at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1235)
	at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1215)
	at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:969)
	at org.apache.kafka.streams.processor.internals.StreamThread.pollPhase(StreamThread.java:917)
	at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:736)
	at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:589)
	... 1 more
```

This appears to be caused by the code passing an entire stack trace in the 
`rejoinReason`.  See 
https://github.com/apache/kafka/blob/3.2.0/clients/src/main/java/org/apache/kafka/clients/consumer/internals/AbstractCoordinator.java#L481
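
One plausible mitigation, as a sketch only (the cap and helper name are assumptions, not the actual fix), is to truncate the reason before it is put into the request:

```
class ReasonTruncation {
    // Sketch: cap a human-readable rejoin reason so it cannot overflow the
    // JoinGroupRequest 'reason' field. The 255-character limit is an assumption.
    static String truncateReason(String reason) {
        final int maxReasonLength = 255;
        if (reason == null || reason.length() <= maxReasonLength)
            return reason;
        return reason.substring(0, maxReasonLength);
    }
}
```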



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


Re: [DISCUSS] Apache Kafka 3.3.0 Release

2022-06-15 Thread José Armando García Sancio
Hi all,

This is a friendly reminder that the KIP freeze date is today, June 15th, 2022.

The feature freeze date is July 6th, 2022.

Thanks,
-José


Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #1009

2022-06-15 Thread Apache Jenkins Server
See 




[jira] [Reopened] (KAFKA-13888) KIP-836: Addition of Information in DescribeQuorumResponse about Voter Lag

2022-06-15 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson reopened KAFKA-13888:
-

> KIP-836: Addition of Information in DescribeQuorumResponse about Voter Lag
> --
>
> Key: KAFKA-13888
> URL: https://issues.apache.org/jira/browse/KAFKA-13888
> Project: Kafka
>  Issue Type: Improvement
>  Components: kraft
>Reporter: Niket Goel
>Assignee: lqjacklee
>Priority: Major
> Fix For: 3.3.0
>
>
> Tracking issue for the implementation of KIP:836



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Resolved] (KAFKA-13888) KIP-836: Addition of Information in DescribeQuorumResponse about Voter Lag

2022-06-15 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-13888.
-
Fix Version/s: 3.3.0
   Resolution: Fixed

> KIP-836: Addition of Information in DescribeQuorumResponse about Voter Lag
> --
>
> Key: KAFKA-13888
> URL: https://issues.apache.org/jira/browse/KAFKA-13888
> Project: Kafka
>  Issue Type: Improvement
>  Components: kraft
>Reporter: Niket Goel
>Assignee: lqjacklee
>Priority: Major
> Fix For: 3.3.0
>
>
> Tracking issue for the implementation of KIP:836



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (KAFKA-13997) one partition logs are not getting purged

2022-06-15 Thread Naveen P (Jira)
Naveen P created KAFKA-13997:


 Summary: one partition logs are not getting purged 
 Key: KAFKA-13997
 URL: https://issues.apache.org/jira/browse/KAFKA-13997
 Project: Kafka
  Issue Type: Bug
  Components: log cleaner
Affects Versions: 2.0.0
Reporter: Naveen P


We have an issue with one of our topics in the Kafka cluster, which is taking 
huge space. While checking, we found that one of the partitions is not getting 
its old logs purged, and because of this the disk is filling up.

 

Since we have replication factor 3 for this topic, the same behaviour is 
observed on two Kafka broker nodes. We need a workaround to clean up the old 
log messages, and we also need to find the root cause of this issue.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (KAFKA-13996) log.cleaner.io.max.bytes.per.second cannot be changed dynamically

2022-06-15 Thread Tomonari Yamashita (Jira)
Tomonari Yamashita created KAFKA-13996:
--

 Summary: log.cleaner.io.max.bytes.per.second cannot be changed 
dynamically 
 Key: KAFKA-13996
 URL: https://issues.apache.org/jira/browse/KAFKA-13996
 Project: Kafka
  Issue Type: Bug
  Components: config, core, log cleaner
Affects Versions: 3.2.0
Reporter: Tomonari Yamashita
Assignee: Tomonari Yamashita


- log.cleaner.io.max.bytes.per.second cannot be changed dynamically using 
bin/kafka-configs.sh
- Reproduction procedure:
-# Create a topic with cleanup.policy=compact
{code:java}
bin/kafka-topics.sh --bootstrap-server localhost:9092 --create 
--replication-factor 1 --partitions 1 --topic my-topic --config 
cleanup.policy=compact --config segment.bytes=104857600 --config 
compression.type=producer
{code}
-# Change log.cleaner.io.max.bytes.per.second=10485760 using 
bin/kafka-configs.sh
{code:java}
bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers 
--entity-default --alter --add-config 
log.cleaner.io.max.bytes.per.second=10485760
{code}
-# Send enough messages (> segment.bytes=104857600) to activate the Log Cleaner
-# In logs/log-cleaner.log, the configured 
log.cleaner.io.max.bytes.per.second=10485760 is not reflected, and the Log 
Cleaner does not slow down (the cleaning rate stays >= 
log.cleaner.io.max.bytes.per.second=10485760).
{code:java}
[2022-06-15 14:52:14,988] INFO [kafka-log-cleaner-thread-0]:
Log cleaner thread 0 cleaned log my-topic-0 (dirty section = [39786, 
81666])
3,999.0 MB of log processed in 2.7 seconds (1,494.4 MB/sec).
Indexed 3,998.9 MB in 0.9 seconds (4,218.2 Mb/sec, 35.4% of total time)
Buffer utilization: 0.0%
Cleaned 3,999.0 MB in 1.7 seconds (2,314.2 Mb/sec, 64.6% of total time)
Start size: 3,999.0 MB (41,881 messages)
End size: 0.1 MB (1 messages)
100.0% size reduction (100.0% fewer messages)
 (kafka.log.LogCleaner)
{code}
- Problem cause:
-- log.cleaner.io.max.bytes.per.second is used by the Throttler in LogCleaner; 
however, it is only passed to the Throttler at initialization time.
--- 
https://github.com/apache/kafka/blob/4380eae7ceb840dd93fee8ec90cd89a72bad7a3f/core/src/main/scala/kafka/log/LogCleaner.scala#L107-L112
-- The Throttler configuration value needs to be updated in reconfigure() of 
LogCleaner (see the sketch after this list).
--- 
https://github.com/apache/kafka/blob/4380eae7ceb840dd93fee8ec90cd89a72bad7a3f/core/src/main/scala/kafka/log/LogCleaner.scala#L192-L196
- A workaround is to restart every broker after adding 
log.cleaner.io.max.bytes.per.second to config/server.properties.
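
A minimal sketch of the general fix pattern, in Java for illustration (the actual code is Scala in kafka.log.LogCleaner, and these names are assumptions):
{code:java}
// Sketch of a throttler whose desired rate can be updated at runtime;
// illustrates the shape of the fix, not the actual Kafka classes.
class AdjustableThrottler {
    private volatile double desiredRatePerSec;

    AdjustableThrottler(double initialRatePerSec) {
        this.desiredRatePerSec = initialRatePerSec;
    }

    // Invoked from the cleaner's reconfigure() so that a changed
    // log.cleaner.io.max.bytes.per.second takes effect without a restart.
    void updateDesiredRatePerSec(double newRatePerSec) {
        this.desiredRatePerSec = newRatePerSec;
    }

    double desiredRatePerSec() {
        return desiredRatePerSec;
    }
}
{code}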



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (KAFKA-13995) Does Kafka support Network File System (NFS)? Is it recommended in Production?

2022-06-15 Thread Devarshi Shah (Jira)
Devarshi Shah created KAFKA-13995:
-

 Summary: Does Kafka support Network File System (NFS)? Is it 
recommended in Production?
 Key: KAFKA-13995
 URL: https://issues.apache.org/jira/browse/KAFKA-13995
 Project: Kafka
  Issue Type: Test
Affects Versions: 3.0.0
 Environment: Kubernetes Cluster
Reporter: Devarshi Shah


I've gone through the Apache Kafka documentation. It does not contain 
information about the supported underlying storage types, i.e. whether Kafka 
supports block storage or Network File System (NFS). On the internet I could 
find that it supports NFS; however, most sources conclude that NFS should not 
be used in production. May we get definitive information on whether Kafka 
recommends NFS, or whether it doesn't support NFS to begin with?



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[VOTE] KIP-842: Add richer group offset reset mechanisms

2022-06-15 Thread hudeqi
Hi all,

I'd like to start a vote on KIP-842 to add some group offset reset mechanisms. 
Details can be found here: https://cwiki.apache.org/confluence/x/xhyhD

Any feedback is appreciated.

Thank you.

hudeqi




[jira] [Created] (KAFKA-13994) Incorrect quota calculation due to a bug

2022-06-15 Thread Divij Vaidya (Jira)
Divij Vaidya created KAFKA-13994:


 Summary: Incorrect quota calculation due to a bug
 Key: KAFKA-13994
 URL: https://issues.apache.org/jira/browse/KAFKA-13994
 Project: Kafka
  Issue Type: Bug
  Components: core
Reporter: Divij Vaidya


*Problem*
This was noted by [~tombentley] at 
[https://github.com/apache/kafka/pull/12045#discussion_r895592286] 

The completion of a sample window in `SampledStat.java` is based on a comparison 
of `recordingTimeMs` with startTimeOfPreviousWindow [1]. `recordingTimeMs` is 
calculated from System.currentTimeMillis(), which:
1. is not guaranteed to be monotonically increasing due to clock drift, and
2. is not necessarily the current time when it arrives at [1], because the 
thread may be blocked at the `synchronized` block in `Sensor.recordInternal` 
[2], and synchronized provides no fairness guarantee for blocked threads.

Hence, it is possible that when the isComplete comparison is made at [1], 
recordingTimeMs < endTimeOfCurrentWindow even though the wall clock time at 
that moment is > startTimeOfCurrentWindow + window length.

The implications of this would be:
1. The current sample window will not be considered complete even though it has 
completed as per wall clock time.
2. The value will be recorded in a sample window that has elapsed, instead of 
the new window where it belongs.

Due to these two implications, the metrics captured by the sensor may not be 
correct, which could lead to incorrect quota calculations.

 [1] 
[https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/metrics/stats/SampledStat.java#L138]
 

 [2] 
https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/metrics/Sensor.java#L232
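
For illustration, a sketch of the hazard (the names and structure are assumptions, not the actual SampledStat/Sensor code):
{code:java}
public class StaleTimestampSketch {
    private final Object sensorLock = new Object();
    private long windowStartMs = System.currentTimeMillis();
    private final long windowLengthMs = 30_000;

    void record(double value) {
        // The timestamp is captured BEFORE the lock is acquired...
        long recordingTimeMs = System.currentTimeMillis();
        synchronized (sensorLock) {
            // ...so if this thread was parked waiting for the lock, the
            // timestamp can be far behind the wall clock by the time the
            // window-completion check runs, and the value is recorded into
            // a window that has already elapsed.
            boolean windowComplete = recordingTimeMs - windowStartMs >= windowLengthMs;
            if (windowComplete)
                windowStartMs = recordingTimeMs; // roll to a new window
            // ... record value into the current window here ...
        }
    }
}
{code}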



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (KAFKA-13993) Large log.cleaner.buffer.size config breaks Kafka Broker

2022-06-15 Thread Tomohiro Hashidate (Jira)
Tomohiro Hashidate created KAFKA-13993:
--

 Summary: Large log.cleaner.buffer.size config breaks Kafka Broker
 Key: KAFKA-13993
 URL: https://issues.apache.org/jira/browse/KAFKA-13993
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 3.1.1, 3.2.0, 3.0.1, 2.8.1, 2.7.2
Reporter: Tomohiro Hashidate


LogCleaner builds a Cleaner instance in the following way:

 

```
val cleaner = new Cleaner(id = threadId,
  offsetMap = new SkimpyOffsetMap(memory = math.min(config.dedupeBufferSize / config.numThreads, Int.MaxValue).toInt,
    hashAlgorithm = config.hashAlgorithm),
  ioBufferSize = config.ioBufferSize / config.numThreads / 2,
  maxIoBufferSize = config.maxMessageSize,
  dupBufferLoadFactor = config.dedupeBufferLoadFactor,
  throttler = throttler,
  time = time,
  checkDone = checkDone)
```

If `log.cleaner.buffer.size` / `log.cleaner.threads` is larger than 
Int.MaxValue, SkimpyOffsetMap uses Int.MaxValue.

SkimpyOffsetMap then tries to allocate a ByteBuffer with Int.MaxValue capacity.

But in the HotSpot VM implementation, the maximum array size is Int.MaxValue 
- 5.

According to ArraysSupport in OpenJDK, SOFT_MAX_ARRAY_LENGTH is Int.MaxValue - 8 
(which is safer).

 

If the ByteBuffer capacity exceeds the maximum array length, the Kafka broker 
fails to start.

 

```

[2022-06-14 18:08:09,609] ERROR [KafkaServer id=1] Fatal error during 
KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.lang.OutOfMemoryError: Requested array size exceeds VM limit
        at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
        at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
        at kafka.log.SkimpyOffsetMap.<init>(OffsetMap.scala:45)
        at kafka.log.LogCleaner$CleanerThread.<init>(LogCleaner.scala:300)
        at kafka.log.LogCleaner.$anonfun$startup$2(LogCleaner.scala:155)
        at kafka.log.LogCleaner.startup(LogCleaner.scala:154)
        at kafka.log.LogManager.startup(LogManager.scala:435)
        at kafka.server.KafkaServer.startup(KafkaServer.scala:291)
        at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44)
        at kafka.Kafka$.main(Kafka.scala:82)
        at kafka.Kafka.main(Kafka.scala)

```

 

I suggest using `Int.MaxValue - 8` instead of `Int.MaxValue`.
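
A minimal sketch of the suggested clamp, in Java for illustration (the actual sizing code is in Scala; the names here are assumptions):

```
class BufferSizing {
    // Mirror of OpenJDK's SOFT_MAX_ARRAY_LENGTH (Int.MaxValue - 8).
    static final int MAX_SAFE_ARRAY_LENGTH = Integer.MAX_VALUE - 8;

    // Sketch: clamp a requested buffer size to a JVM-safe array length.
    static int safeCapacity(long requestedBytes) {
        return (int) Math.min(requestedBytes, MAX_SAFE_ARRAY_LENGTH);
    }
}
```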



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


Re: [VOTE] KIP-831: Add metric for log recovery progress

2022-06-15 Thread Luke Chen
Hi all,

The vote passes with:
6 +1 (non-binding) from Ziming, Divij, James, Raman, Federico Valeri, Yu
Kvicii
3 +1 (binding) from Tom Bentley, Mickael Maison, and Jun Rao

Thanks for the vote!

Luke

On Tue, Jun 14, 2022 at 12:11 AM Jun Rao  wrote:

> Hi, Luke,
>
> Thanks for the KIP. +1 from me.
>
> Jun
>
> On Mon, Jun 13, 2022 at 8:04 AM Mickael Maison 
> wrote:
>
> > +1 (binding)
> >
> > Thanks for the KIP!
> >
> > Mickael
> >
> > On Sun, Jun 12, 2022 at 5:26 PM Yu Kvicii  wrote:
> > >
> > > +1 non binding. Thanks
> > >
> > >
> > > > On May 16, 2022, at 15:11, Luke Chen  wrote:
> > > >
> > > > Hi all,
> > > >
> > > > I'd like to start a vote on KIP to expose metrics for log recovery
> > > > progress. These metrics would let the admins have a way to monitor
> the
> > log
> > > > recovery progress.
> > > >
> > > > Details can be found here:
> > > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-831%3A+Add+metric+for+log+recovery+progress
> > > >
> > > > Any feedback is appreciated.
> > > >
> > > > Thank you.
> > > > Luke
> > >
> >
>