[jira] [Resolved] (KAFKA-16180) Full metadata request sometimes fails during zk migration
[ https://issues.apache.org/jira/browse/KAFKA-16180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

David Arthur resolved KAFKA-16180.
----------------------------------
    Resolution: Fixed

> Full metadata request sometimes fails during zk migration
> ----------------------------------------------------------
>
>                 Key: KAFKA-16180
>                 URL: https://issues.apache.org/jira/browse/KAFKA-16180
>             Project: Kafka
>          Issue Type: Bug
>    Affects Versions: 3.7.0
>            Reporter: Colin McCabe
>            Priority: Blocker
>             Fix For: 3.6.2, 3.7.0
>
> Example:
> {code:java}
> java.util.NoSuchElementException: topic_name
>     at scala.collection.mutable.AnyRefMap$ExceptionDefault.apply(AnyRefMap.scala:508)
>     at scala.collection.mutable.AnyRefMap$ExceptionDefault.apply(AnyRefMap.scala:507)
>     at scala.collection.mutable.AnyRefMap.apply(AnyRefMap.scala:207)
>     at kafka.server.metadata.ZkMetadataCache$.$anonfun$maybeInjectDeletedPartitionsFromFullMetadataRequest$2(ZkMetadataCache.scala:112)
>     at kafka.server.metadata.ZkMetadataCache$.$anonfun$maybeInjectDeletedPartitionsFromFullMetadataRequest$2$adapted(ZkMetadataCache.scala:105)
>     at scala.collection.immutable.HashSet.foreach(HashSet.scala:958)
>     at kafka.server.metadata.ZkMetadataCache$.maybeInjectDeletedPartitionsFromFullMetadataRequest(ZkMetadataCache.scala:105)
>     at kafka.server.metadata.ZkMetadataCache.$anonfun$updateMetadata$1(ZkMetadataCache.scala:506)
>     at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:183)
>     at kafka.server.metadata.ZkMetadataCache.updateMetadata(ZkMetadataCache.scala:496)
>     at kafka.server.ReplicaManager.maybeUpdateMetadataCache(ReplicaManager.scala:2482)
>     at kafka.server.KafkaApis.handleUpdateMetadataRequest(KafkaApis.scala:733)
>     at kafka.server.KafkaApis.handle(KafkaApis.scala:349)
>     at kafka.server.KafkaRequestHandler.$anonfun$poll$8(KafkaRequestHandler.scala:210)
>     at kafka.server.KafkaRequestHandler.$anonfun$poll$8$adapted(KafkaRequestHandler.scala:210)
>     at io.confluent.kafka.availability.ThreadCountersManager.wrapEngine(ThreadCountersManager.java:146)
>     at kafka.server.KafkaRequestHandler.poll(KafkaRequestHandler.scala:210)
>     at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:151)
>     at java.base/java.lang.Thread.run(Thread.java:1583)
>     at org.apache.kafka.common.utils.KafkaThread.run(KafkaThread.java:66)
> {code}

-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (KAFKA-16180) Full metadata request sometimes fails during zk migration
[ https://issues.apache.org/jira/browse/KAFKA-16180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

David Arthur updated KAFKA-16180:
---------------------------------
    Fix Version/s: 3.6.2
                   3.7.0

> Full metadata request sometimes fails during zk migration
> ----------------------------------------------------------
>
>                 Key: KAFKA-16180
>                 URL: https://issues.apache.org/jira/browse/KAFKA-16180
>             Project: Kafka
>          Issue Type: Bug
>    Affects Versions: 3.7.0
>            Reporter: Colin McCabe
>            Priority: Blocker
>             Fix For: 3.7.0, 3.6.2
>
> Example:
> {code:java}
> java.util.NoSuchElementException: topic_name
>     at scala.collection.mutable.AnyRefMap$ExceptionDefault.apply(AnyRefMap.scala:508)
>     at scala.collection.mutable.AnyRefMap$ExceptionDefault.apply(AnyRefMap.scala:507)
>     at scala.collection.mutable.AnyRefMap.apply(AnyRefMap.scala:207)
>     at kafka.server.metadata.ZkMetadataCache$.$anonfun$maybeInjectDeletedPartitionsFromFullMetadataRequest$2(ZkMetadataCache.scala:112)
>     at kafka.server.metadata.ZkMetadataCache$.$anonfun$maybeInjectDeletedPartitionsFromFullMetadataRequest$2$adapted(ZkMetadataCache.scala:105)
>     at scala.collection.immutable.HashSet.foreach(HashSet.scala:958)
>     at kafka.server.metadata.ZkMetadataCache$.maybeInjectDeletedPartitionsFromFullMetadataRequest(ZkMetadataCache.scala:105)
>     at kafka.server.metadata.ZkMetadataCache.$anonfun$updateMetadata$1(ZkMetadataCache.scala:506)
>     at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:183)
>     at kafka.server.metadata.ZkMetadataCache.updateMetadata(ZkMetadataCache.scala:496)
>     at kafka.server.ReplicaManager.maybeUpdateMetadataCache(ReplicaManager.scala:2482)
>     at kafka.server.KafkaApis.handleUpdateMetadataRequest(KafkaApis.scala:733)
>     at kafka.server.KafkaApis.handle(KafkaApis.scala:349)
>     at kafka.server.KafkaRequestHandler.$anonfun$poll$8(KafkaRequestHandler.scala:210)
>     at kafka.server.KafkaRequestHandler.$anonfun$poll$8$adapted(KafkaRequestHandler.scala:210)
>     at io.confluent.kafka.availability.ThreadCountersManager.wrapEngine(ThreadCountersManager.java:146)
>     at kafka.server.KafkaRequestHandler.poll(KafkaRequestHandler.scala:210)
>     at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:151)
>     at java.base/java.lang.Thread.run(Thread.java:1583)
>     at org.apache.kafka.common.utils.KafkaThread.run(KafkaThread.java:66)
> {code}

-- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Updated] (KAFKA-16372) max.block.ms behavior inconsistency with javadoc and the config description
[ https://issues.apache.org/jira/browse/KAFKA-16372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Haruki Okada updated KAFKA-16372:
---------------------------------
    Description: 
As of Kafka 3.7.0, the javadoc of [KafkaProducer.send|https://github.com/apache/kafka/blob/3.7.0/clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java#L956] states that it throws TimeoutException when max.block.ms is exceeded on buffer allocation or initial metadata fetch. This is also stated in the [buffer.memory config description|https://kafka.apache.org/37/documentation.html#producerconfigs_buffer.memory].

However, I found that this is not true, because TimeoutException extends ApiException, and KafkaProducer.doSend catches ApiException and [wraps it as FutureFailure|https://github.com/apache/kafka/blob/3.7.0/clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java#L1075-L1086] instead of throwing it.

I wonder if this is a bug or a documentation error. This discrepancy seems to have existed since 0.9.0.0, when max.block.ms was introduced.

  was:
As of Kafka 3.7.0, the javadoc of [KafkaProducer.send|https://github.com/apache/kafka/blob/3.7.0/clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java#L956] states that it throws TimeoutException when max.block.ms is exceeded on buffer allocation or initial metadata fetch. This is also stated in the [max.block.ms config description|https://kafka.apache.org/37/documentation.html#producerconfigs_buffer.memory].

However, I found that this is not true, because TimeoutException extends ApiException, and KafkaProducer.doSend catches ApiException and [wraps it as FutureFailure|https://github.com/apache/kafka/blob/3.7.0/clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java#L1075-L1086] instead of throwing it.

I wonder if this is a bug or a documentation error. This discrepancy seems to have existed since 0.9.0.0, when max.block.ms was introduced.

> max.block.ms behavior inconsistency with javadoc and the config description
> ----------------------------------------------------------------------------
>
>                 Key: KAFKA-16372
>                 URL: https://issues.apache.org/jira/browse/KAFKA-16372
>             Project: Kafka
>          Issue Type: Bug
>          Components: producer
>            Reporter: Haruki Okada
>            Priority: Minor
>
> As of Kafka 3.7.0, the javadoc of [KafkaProducer.send|https://github.com/apache/kafka/blob/3.7.0/clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java#L956] states that it throws TimeoutException when max.block.ms is exceeded on buffer allocation or initial metadata fetch.
> This is also stated in the [buffer.memory config description|https://kafka.apache.org/37/documentation.html#producerconfigs_buffer.memory].
> However, I found that this is not true, because TimeoutException extends ApiException, and KafkaProducer.doSend catches ApiException and [wraps it as FutureFailure|https://github.com/apache/kafka/blob/3.7.0/clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java#L1075-L1086] instead of throwing it.
> I wonder if this is a bug or a documentation error.
> This discrepancy seems to have existed since 0.9.0.0, when max.block.ms was introduced.

-- This message was sent by Atlassian Jira (v8.20.10#820010)
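For illustration only (not part of the ticket): a minimal sketch of how the reported behavior surfaces to a caller. The broker address, topic name, serializer settings and max.block.ms value are made up. Per the javadoc, send() itself should throw the TimeoutException; per the report, it instead arrives as the failure cause of the returned future.

{code:java}
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.errors.TimeoutException;

public class MaxBlockMsDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // hypothetical broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("max.block.ms", "1000");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The javadoc suggests send() throws TimeoutException once max.block.ms
            // is exceeded; per the report it returns a failed future instead.
            Future<RecordMetadata> future =
                producer.send(new ProducerRecord<>("some-topic", "key", "value"));
            try {
                future.get();
            } catch (ExecutionException e) {
                // The TimeoutException shows up here, wrapped as the future's failure cause.
                if (e.getCause() instanceof TimeoutException) {
                    System.out.println("max.block.ms exceeded: " + e.getCause().getMessage());
                }
            }
        }
    }
}
{code}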
[jira] [Commented] (KAFKA-16369) Broker may not shut down when SocketServer fails to bind as Address already in use
[ https://issues.apache.org/jira/browse/KAFKA-16369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17827107#comment-17827107 ]

Edoardo Comar commented on KAFKA-16369:
----------------------------------------
PR is ready for review - can anyone take a look, please?

> Broker may not shut down when SocketServer fails to bind as Address already in use
> -----------------------------------------------------------------------------------
>
>                 Key: KAFKA-16369
>                 URL: https://issues.apache.org/jira/browse/KAFKA-16369
>             Project: Kafka
>          Issue Type: Bug
>    Affects Versions: 3.7.0, 3.6.1, 3.8.0
>            Reporter: Edoardo Comar
>            Assignee: Edoardo Comar
>            Priority: Major
>         Attachments: kraft-server.log, server.log
>
> When in ZooKeeper mode, if a port the broker should listen on is already bound, the
> KafkaException "Socket server failed to bind to localhost:9092: Address already in use"
> is thrown, but the broker continues to start up.
> It correctly shuts down when in KRaft mode.
> Easy to reproduce in ZooKeeper mode with the server config set to listen on localhost only:
> listeners=PLAINTEXT://localhost:9092
>

-- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] KAFKA-16369: Broker may not shut down when SocketServer fails to bind as Address already in use [kafka]
edoardocomar commented on PR #15530: URL: https://github.com/apache/kafka/pull/15530#issuecomment-1997572827 PR is ready for review, can anyone please take a look? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Closed] (KAFKA-16227) Console consumer fails with `IllegalStateException`
[ https://issues.apache.org/jira/browse/KAFKA-16227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Bruno Cadonna closed KAFKA-16227.
---------------------------------

> Console consumer fails with `IllegalStateException`
> ----------------------------------------------------
>
>                 Key: KAFKA-16227
>                 URL: https://issues.apache.org/jira/browse/KAFKA-16227
>             Project: Kafka
>          Issue Type: Sub-task
>          Components: clients, consumer
>    Affects Versions: 3.7.0
>            Reporter: David Jacot
>            Assignee: Bruno Cadonna
>            Priority: Blocker
>              Labels: kip-848-client-support
>             Fix For: 3.8.0
>
> I have seen a few occurrences like the following one. There is a race between the background thread and the foreground thread. I imagine the following steps:
> * quickstart-events-2 is assigned by the background thread;
> * the foreground thread starts the initialization of the partition (e.g. reset offset);
> * quickstart-events-2 is removed by the background thread;
> * the initialization completes and quickstart-events-2 does not exist anymore.
>
> {code:java}
> [2024-02-06 16:21:57,375] ERROR Error processing message, terminating consumer process: (kafka.tools.ConsoleConsumer$)
> java.lang.IllegalStateException: No current assignment for partition quickstart-events-2
>     at org.apache.kafka.clients.consumer.internals.SubscriptionState.assignedState(SubscriptionState.java:367)
>     at org.apache.kafka.clients.consumer.internals.SubscriptionState.updateHighWatermark(SubscriptionState.java:579)
>     at org.apache.kafka.clients.consumer.internals.FetchCollector.handleInitializeSuccess(FetchCollector.java:283)
>     at org.apache.kafka.clients.consumer.internals.FetchCollector.initialize(FetchCollector.java:226)
>     at org.apache.kafka.clients.consumer.internals.FetchCollector.collectFetch(FetchCollector.java:110)
>     at org.apache.kafka.clients.consumer.internals.AsyncKafkaConsumer.collectFetch(AsyncKafkaConsumer.java:1540)
>     at org.apache.kafka.clients.consumer.internals.AsyncKafkaConsumer.pollForFetches(AsyncKafkaConsumer.java:1525)
>     at org.apache.kafka.clients.consumer.internals.AsyncKafkaConsumer.poll(AsyncKafkaConsumer.java:711)
>     at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:874)
>     at kafka.tools.ConsoleConsumer$ConsumerWrapper.receive(ConsoleConsumer.scala:473)
>     at kafka.tools.ConsoleConsumer$.process(ConsoleConsumer.scala:103)
>     at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:77)
>     at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:54)
>     at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
> {code}
>

-- This message was sent by Atlassian Jira (v8.20.10#820010)
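As a purely illustrative aside (not Kafka internals, and not the actual fix): the four steps above are a classic check-then-act race. The sketch below uses a made-up assignment set and partition name to replay that interleaving and produce the same "No current assignment for partition" error.

{code:java}
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class AssignmentRaceSketch {
    private static final Set<String> assignment = ConcurrentHashMap.newKeySet();

    // Stand-in for the foreground thread finishing partition initialization.
    static void completeInitialization(String partition) {
        if (!assignment.contains(partition)) {
            throw new IllegalStateException("No current assignment for partition " + partition);
        }
        // update fetch position / high watermark here
    }

    public static void main(String[] args) {
        String tp = "quickstart-events-2";
        assignment.add(tp);              // 1. background thread assigns the partition
        // 2. foreground thread starts initializing it (e.g. reset offset) ...
        assignment.remove(tp);           // 3. background thread removes it in the meantime
        try {
            completeInitialization(tp);  // 4. initialization completes, partition is gone
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage()); // same error the console consumer reports
        }
    }
}
{code}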
Re: [PR] Metadata schema checker [kafka]
mannoopj commented on code in PR #14389:
URL: https://github.com/apache/kafka/pull/14389#discussion_r1524953366

## tools/src/main/java/org/apache/kafka/tools/SchemaChecker/MetadataSchemaChecker.java:
## @@ -0,0 +1,359 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.kafka.tools.SchemaChecker;
+
+
+import com.fasterxml.jackson.databind.JsonNode;
+import com.fasterxml.jackson.databind.ObjectMapper;
+import com.fasterxml.jackson.databind.node.ArrayNode;
+import org.eclipse.jgit.api.Git;
+import org.eclipse.jgit.api.errors.GitAPIException;
+
+import org.eclipse.jgit.lib.ObjectId;
+import org.eclipse.jgit.lib.ObjectLoader;
+import org.eclipse.jgit.lib.Ref;
+import org.eclipse.jgit.lib.Repository;
+import org.eclipse.jgit.revwalk.RevCommit;
+import org.eclipse.jgit.revwalk.RevTree;
+import org.eclipse.jgit.revwalk.RevWalk;
+import org.eclipse.jgit.treewalk.TreeWalk;
+import org.eclipse.jgit.treewalk.filter.PathFilter;
+
+import java.io.BufferedReader;
+import java.io.File;
+import java.io.FileNotFoundException;
+import java.io.FileReader;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Objects;
+
+public class MetadataSchemaChecker {
+
+    static int latestTag = -1;
+    static int latestTagVersion = -1;
+    static int oldLatestVersion = -1;
+    static int oldFirstVersion = -1;
+    static int newLatestVersion = -1;
+    static int newFirstVersion = -1;
+    static String[] filesCheckMetadata = new File(System.getProperty("user.dir") + "/metadata/src/main/resources/common/metadata/").list();
+
+    public static void main(String[] args) throws Exception {
+
+        try {
+            List<String> localContent = new ArrayList<>();
+            for (String fileName : filesCheckMetadata) {
+                final String dir = System.getProperty("user.dir");
+                String path = dir + "/metadata/src/main/resources/common/metadata/" + fileName;
+                BufferedReader reader = new BufferedReader(new FileReader(path));
+                for (int i = 0; i < 15; i++) {
+                    reader.readLine();
+                }
+                StringBuilder content = new StringBuilder();
+                boolean print = false;
+                for (String line = reader.readLine(); line != null; line = reader.readLine()) {
+                    if (line.charAt(0) == '{') {
+                        print = true;
+                    }
+                    if (print && !line.contains("//")) {
+                        content.append(line);
+                    }
+                }
+                localContent.add(content.toString());
+            }
+
+            List<String> gitContent = GetDataFromGit();
+            if (localContent.size() != gitContent.size()) {
+                throw new IllegalStateException("missing schemas");
+            }
+            for (int i = 0; i < localContent.size(); i++) {
+                if (Objects.equals(localContent.get(i), gitContent.get(i))) {
+                    continue;
+                }
+
+                ObjectMapper objectMapper = new ObjectMapper();
+                JsonNode jsonNode1 = objectMapper.readTree(gitContent.get(i));
+                JsonNode jsonNode2 = objectMapper.readTree(localContent.get(i));
+
+                checkApiTypeVersions(jsonNode1, jsonNode2);
+                parser((ArrayNode) jsonNode1.get("fields"), (ArrayNode) jsonNode2.get("fields"));
+            }
+
+        } catch (IOException e) {
+            throw new RuntimeException(e);
+        }
+    }
+
+    private static List<String> GetDataFromGit() throws IOException, GitAPIException {
+        List<String> gitSchemas = new ArrayList<>();
+
+        Git git = Git.open(new File(System.getProperty("user.dir") + "/.git"));
+        Repository repository = git.getRepository();
+        Ref head = git.getRepository().getRefDatabase().firstExactRef("refs/heads/trunk");
+
+        try (RevWalk revWalk = new RevWalk(repository)) {
+            RevCommit commit = revWalk.parseCommit(head.getObjectId());
+            RevTree tree = commit.getTree();
+            for (String fileName : filesCheckMetadata
[jira] [Resolved] (KAFKA-16226) Java client: Performance regression in Trogdor benchmark with high partition counts
[ https://issues.apache.org/jira/browse/KAFKA-16226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ismael Juma resolved KAFKA-16226.
---------------------------------
    Resolution: Fixed

> Java client: Performance regression in Trogdor benchmark with high partition counts
> ------------------------------------------------------------------------------------
>
>                 Key: KAFKA-16226
>                 URL: https://issues.apache.org/jira/browse/KAFKA-16226
>             Project: Kafka
>          Issue Type: Bug
>          Components: clients
>    Affects Versions: 3.7.0, 3.6.1
>            Reporter: Mayank Shekhar Narula
>            Assignee: Mayank Shekhar Narula
>            Priority: Major
>              Labels: kip-951
>             Fix For: 3.6.2, 3.8.0, 3.7.1
>
>         Attachments: baseline_lock_profile.png, kafka_15415_lock_profile.png
>
> h1. Background
> https://issues.apache.org/jira/browse/KAFKA-15415 implemented an optimisation in the java-client to skip the backoff period if the client knows of a newer leader, for a produce-batch being retried.
> h1. What changed
> The implementation introduced a regression, noticed on a trogdor-benchmark running with high partition counts (36000!).
> With the regression, the following metrics changed on the produce side:
> # record-queue-time-avg: increased from 20ms to 30ms.
> # request-latency-avg: increased from 50ms to 100ms.
> h1. Why it happened
> As can be seen from the original [PR|https://github.com/apache/kafka/pull/14384], RecordAccumulator.partitionReady() & drainBatchesForOneNode() started using the synchronised method Metadata.currentLeader(). This has led to increased synchronization between the KafkaProducer's application thread that calls send(), and the background thread that actively sends producer batches to the leaders.
> Lock profiles clearly show increased synchronisation in the KAFKA-15415 PR (highlighted in {color:#de350b}Red{color}) vs the baseline (see below). Note the synchronisation is much worse for partitionReady() in this benchmark, as it is called for each partition, and there are 36k partitions!
> h3. Lock Profile: Kafka-15415
> !kafka_15415_lock_profile.png!
> h3. Lock Profile: Baseline
> !baseline_lock_profile.png!
> h1. Fix
> Synchronization between the 2 threads has to be reduced in order to address this. [https://github.com/apache/kafka/pull/15323] is a fix for it, as it avoids using Metadata.currentLeader() and instead relies on Cluster.leaderFor().
> With the fix, the lock profile & metrics are similar to the baseline.

-- This message was sent by Atlassian Jira (v8.20.10#820010)
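For readers who want the intuition behind the fix, here is a self-contained sketch. The MetadataLike and ClusterLike classes below are made up (they only stand in for Metadata and Cluster); the point is the difference between taking a monitor once per partition per drain and doing lock-free reads against an immutable snapshot.

{code:java}
import java.util.HashMap;
import java.util.Map;

public class LeaderLookupSketch {

    // Pattern the regression introduced: every partitionReady()/drain call takes this lock,
    // contending with the application thread that also uses the metadata under the same lock.
    static class MetadataLike {
        private final Map<Integer, String> leaders = new HashMap<>();
        synchronized String currentLeader(int partition) {
            return leaders.get(partition);
        }
        synchronized void update(int partition, String leader) {
            leaders.put(partition, leader);
        }
    }

    // Pattern the fix relies on: grab one immutable snapshot, then read it without locking,
    // even for tens of thousands of partitions.
    static class ClusterLike {
        private final Map<Integer, String> leaders;
        ClusterLike(Map<Integer, String> leaders) {
            this.leaders = Map.copyOf(leaders);
        }
        String leaderFor(int partition) {
            return leaders.get(partition);
        }
    }

    public static void main(String[] args) {
        MetadataLike metadata = new MetadataLike();
        metadata.update(0, "broker-1");
        // Contended path: one monitor acquisition per partition per drain.
        for (int p = 0; p < 3; p++) {
            metadata.currentLeader(p);
        }

        // Snapshot path: lock-free reads of an immutable view.
        ClusterLike cluster = new ClusterLike(Map.of(0, "broker-1", 1, "broker-2"));
        for (int p = 0; p < 2; p++) {
            cluster.leaderFor(p);
        }
    }
}
{code}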
Re: [PR] KAFKA-16226; Reduce synchronization between producer threads (#15323) [kafka]
ijuma merged PR #15498: URL: https://github.com/apache/kafka/pull/15498 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-16226; Reduce synchronization between producer threads (#15323) [kafka]
ijuma merged PR #15493: URL: https://github.com/apache/kafka/pull/15493 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-16226; Reduce synchronization between producer threads (#15323) [kafka]
ijuma commented on PR #15493: URL: https://github.com/apache/kafka/pull/15493#issuecomment-1997497460 Test failures are the same as the 3.7 branch. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[PR] KAFKA-16374; High watermark updates should have a higher priority [kafka]
dajac opened a new pull request, #15534:
URL: https://github.com/apache/kafka/pull/15534

   When the group coordinator is under heavy load, the current mechanism to release pending events based on an updated high watermark, which consists in pushing an event to the end of the queue, is bad because pending events pay the cost of the queue twice: once for the handling of the first event and a second time for the handling of the HWM update. This patch changes this logic to push the HWM update event to the front of the queue, in order to release pending events as soon as possible.

   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
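To make the queue-priority idea concrete, here is an illustrative-only sketch; the EventLike record and the deque are made up and do not reflect the coordinator's actual event loop, only the front-vs-back insertion the PR describes.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class HwmPrioritySketch {
    record EventLike(String name) {}

    public static void main(String[] args) {
        Deque<EventLike> queue = new ArrayDeque<>();

        // Regular coordinator events go to the back of the queue.
        queue.addLast(new EventLike("heartbeat"));
        queue.addLast(new EventLike("offset-commit"));

        // Old behavior: addLast(hwmUpdate) makes pending events wait for the whole queue twice.
        // New behavior: push it to the front so pending events are released as soon as possible.
        queue.addFirst(new EventLike("high-watermark-update"));

        while (!queue.isEmpty()) {
            System.out.println("processing " + queue.poll().name());
        }
    }
}
```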
[jira] [Created] (KAFKA-16374) High watermark updates should have a higher priority
David Jacot created KAFKA-16374: --- Summary: High watermark updates should have a higher priority Key: KAFKA-16374 URL: https://issues.apache.org/jira/browse/KAFKA-16374 Project: Kafka Issue Type: Sub-task Reporter: David Jacot Assignee: David Jacot -- This message was sent by Atlassian Jira (v8.20.10#820010)
[jira] [Commented] (KAFKA-14048) The Next Generation of the Consumer Rebalance Protocol
[ https://issues.apache.org/jira/browse/KAFKA-14048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17827070#comment-17827070 ] David Jacot commented on KAFKA-14048: - [~aratz_ws] We are working with the current timeline in mind: Preview in AK 3.8 and GA in AK 4.0 (without the client side assignor). > The Next Generation of the Consumer Rebalance Protocol > -- > > Key: KAFKA-14048 > URL: https://issues.apache.org/jira/browse/KAFKA-14048 > Project: Kafka > Issue Type: Improvement >Reporter: David Jacot >Assignee: David Jacot >Priority: Major > > This Jira tracks the development of KIP-848: > https://cwiki.apache.org/confluence/display/KAFKA/KIP-848%3A+The+Next+Generation+of+the+Consumer+Rebalance+Protocol. -- This message was sent by Atlassian Jira (v8.20.10#820010)
Re: [PR] MINOR: change KStream processor names [kafka]
b-goyal commented on PR #587: URL: https://github.com/apache/kafka/pull/587#issuecomment-1997293925 @guozhangwang -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] MINOR: Fix compiler error in `KafkaLog4jAppender` [kafka]
b-goyal commented on PR #368: URL: https://github.com/apache/kafka/pull/368#issuecomment-1997293944 The branch wasn't rebased after the capitalisation fix for SSL classes. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-2737: Added single- and multi-consumer integration tests for round-robin assignment [kafka]
b-goyal commented on PR #413: URL: https://github.com/apache/kafka/pull/413#issuecomment-1997293898 Two tests: 1. One consumer subscribes to 2 topics, each with 2 partitions; includes adding and removing a topic. 2. Several consumers subscribe to 2 topics, several partitions each; includes adding one more consumer after the initial assignment is done and verified. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-2738: Replica FetcherThread should connect to leader endpoint m… [kafka]
b-goyal commented on PR #428: URL: https://github.com/apache/kafka/pull/428#issuecomment-1997293955 …atching its inter-broker security protocol -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-2796 add support for reassignment partition to specified logdir [kafka]
b-goyal commented on PR #484:
URL: https://github.com/apache/kafka/pull/484#issuecomment-1997293906

   Currently, when creating a log, the directory is chosen by counting the number of partitions in each directory and then choosing the data directory with the fewest partitions. However, the sizes of different TopicPartitions vary greatly, which leads usage to vary greatly between different logDirs. And usually each logDir corresponds to a disk, so disk usage between different disks is very imbalanced. The possible solution is to reassign partitions from high-usage logDirs to low-usage logDirs.

   I changed the format of /admin/reassign_partitions to add a replicaDirs field. When reassigning partitions, when the broker's LogManager.createLog() is invoked, if a replicaDir is specified, the specified logDir will be chosen; otherwise, the logDir with the fewest partitions will be chosen.

   The old /admin/reassign_partitions:
   ```
   {"version":1,
    "partitions": [
      { "topic": "Foo",
        "partition": 1,
        "replicas": [1, 2, 3] }
    ]
   }
   ```
   The new /admin/reassign_partitions:
   ```
   {"version":1,
    "partitions": [
      { "topic": "Foo",
        "partition": 1,
        "replicas": [1, 2, 3],
        "replicaDirs": {"1": "/data1/kafka_data", "3": "/data10/kafka_data"} }
    ]
   }
   ```

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] MINOR: Remove unreachable if check [kafka]
b-goyal commented on PR #396: URL: https://github.com/apache/kafka/pull/396#issuecomment-1997293797 @gwenshap @granders could you guys take a look at this trivial change. This piece makes one think that for SSL, not passing `new_consumer=True` should be fine. It was fine until recently, before https://github.com/apache/kafka/commit/e6b343302f3208f7f6e0099fe2a7132ef9eaaafb. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-2295; Support given for dynamically loaded classes (encoders, e… [kafka]
b-goyal commented on PR #314: URL: https://github.com/apache/kafka/pull/314#issuecomment-1997293825 Rebased code.. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-2698: Add paused() method to o.a.k.c.c.Consumer [kafka]
b-goyal commented on PR #403: URL: https://github.com/apache/kafka/pull/403#issuecomment-1997293838 As per KAFKA-2698, this adds a `paused()` method to the Consumer interface such that client code can query Consumer implementations for paused partitions. I'm somewhat new to the code base, but I understand this may require a KIP given that it changes APIs: is that required even for backward-compatible changes like this? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] Kafka 2902 streaming config use get base consumer configs [kafka]
b-goyal commented on PR #596: URL: https://github.com/apache/kafka/pull/596#issuecomment-1997293747 Changes made for using getBaseConsumerConfigs from StreamingConfig.getConsumerConfigs. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-2361: unit test failure in ProducerFailureHandlingTest.testNotE… [kafka]
b-goyal commented on PR #571: URL: https://github.com/apache/kafka/pull/571#issuecomment-1997293697 Same issue as KAFKA-1999, so I want to fix it here: https://issues.apache.org/jira/browse/KAFKA-1999 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] MINOR: update to correct clock skew [kafka]
b-goyal commented on PR #383: URL: https://github.com/apache/kafka/pull/383#issuecomment-1997293673 @ewencp Updated the provisioning script to install ntp daemon. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] modify config specification of topic level [kafka]
b-goyal commented on PR #612: URL: https://github.com/apache/kafka/pull/612#issuecomment-1997293618 Topic-level config: the delete-config option now uses --delete-config instead of --deleteConfig. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] MINOR: Add new build target for system test libs [kafka]
b-goyal commented on PR #361: URL: https://github.com/apache/kafka/pull/361#issuecomment-1997293781 KAFKA-2644 adds MiniKdc for system tests and hence needs a target to collect all MiniKdc jars. At the moment, system tests run `gradlew jar`. Replacing that with `gradlew systemTestLibs` will enable kafka jars and test dependency jars to be built and copied into appropriate locations. Submitting this as a separate PR so that the new target can be added to the build scripts that run system tests before KAFKA-2644 is committed. A separate target for system test artifacts will allow dependency changes to be made in future without breaking test runs. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-2718: Add logging to investigate intermittent unit test failures [kafka]
b-goyal commented on PR #613: URL: https://github.com/apache/kafka/pull/613#issuecomment-1997293596 Print port and directories used by zookeeper in unit tests to figure out which may be causing conflict. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-2769: Multi-consumer integration tests for consumer assignment incl. session timeouts and corresponding fixes [kafka]
b-goyal commented on PR #472:
URL: https://github.com/apache/kafka/pull/472#issuecomment-1997293531

   -- Refactored multi-consumer integration group assignment validation tests for round-robin assignment
   -- Added multi-consumer integration tests for session timeout expiration:
      1. When a consumer stops polling
      2. When a consumer calls close()
   -- Fixes to issues found with the session timeout expiration tests, with help from Jason Gustafson: try to avoid SendFailedException by cancelling the scheduled tasks and ensuring a metadata update before sending group leave requests, and send the leave group request with retries.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-2718: Avoid reusing temporary directories in core unit tests [kafka]
b-goyal commented on PR #399: URL: https://github.com/apache/kafka/pull/399#issuecomment-1997293731 Retry to find new directory and cleanup on exit. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-2825, KAFKA-2851: Controller failover tests added to ducktape replication tests and fix to temp dir [kafka]
b-goyal commented on PR #570: URL: https://github.com/apache/kafka/pull/570#issuecomment-1997293532 I closed the original pull request that contained previous comments by Geoff (which are already addressed here), because I got into a bad rebase situation. So I created a new branch and cherry-picked my commits, then merged with Ben's changes to fix MiniKDC tests to run on Virtual Box. That change was conflicting with mine, where I was copying MiniKDC files with the new scp method, and the temp file was created inside that method. To merge Ben's changes, I added two optional parameters to scp(): 'pattern' and 'subst', to optionally substitute strings while scp'ing files, which is needed for the krb5.conf file. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] Modified the async producer so it re-queues failed batches. [kafka]
b-goyal commented on PR #7: URL: https://github.com/apache/kafka/pull/7#issuecomment-1997291739 I'm working on an application that needs the throughput offered by an async producer but also needs to handle send failures gracefully like a sync producer. I modified the ProducerSendThread so it will re-queue failed batches. This allowed me to determine the behavior I wanted from my producer with the queue config parameters. A "queue.enqueue.timeout.ms=0" allowed me to get runtime exceptions when sends failed often enough to fill the queue. This also allowed me to use "queue.buffering.max.messages" to control how tolerant the application is to network blips. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
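As a small aside, a minimal sketch of the producer settings this comment is tuning. Only the two queue.* keys come from the comment above; the producer.type entry and all values are assumptions for illustration, and the broker/serializer settings a real legacy producer would also need are omitted.

```java
import java.util.Properties;

public class LegacyAsyncProducerQueueConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Assumed for illustration: the old producer's sync/async switch.
        props.put("producer.type", "async");
        // 0 = fail immediately with a runtime exception instead of blocking when the
        // async queue is full, e.g. because failed batches keep being re-queued.
        props.put("queue.enqueue.timeout.ms", "0");
        // Bounds how many unsent messages may pile up, i.e. how tolerant the
        // application is to network blips before enqueues start failing.
        props.put("queue.buffering.max.messages", "10000");

        props.forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```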
Re: [PR] KAFKA-2847; Remove principal builder class from client configs [kafka]
b-goyal commented on PR #542: URL: https://github.com/apache/kafka/pull/542#issuecomment-1997293684 Also mark `PrincipalBuilder` as `Unstable` and tweak docs. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] HOTFIX: correct sourceNodes for kstream.through() [kafka]
b-goyal commented on PR #374: URL: https://github.com/apache/kafka/pull/374#issuecomment-1997293633 @guozhangwang -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] ConsumerConfig now expects "bootstrap.servers" in config. [kafka]
b-goyal commented on PR #51: URL: https://github.com/apache/kafka/pull/51#issuecomment-1997291755 Small typo to have the documentation in line with `KafkaProducer.java`. Kudos on the awesome documentation work! -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] Unmap before resizing [kafka]
b-goyal commented on PR #6:
URL: https://github.com/apache/kafka/pull/6#issuecomment-1997291728

   While I was studying how MappedByteBuffer works, I saw a sharing-violation runtime exception on Windows. I applied what I learned to generate a patch which uses an internal open JDK API to solve this problem.
   ---
   Caused by: java.io.IOException: The requested operation cannot be performed on a file with a user-mapped section open
       at java.io.RandomAccessFile.setLength(Native Method)
       at kafka.log.OffsetIndex.liftedTree2$1(OffsetIndex.scala:263)
       at kafka.log.OffsetIndex.resize(OffsetIndex.scala:262)
       at kafka.log.OffsetIndex.trimToValidSize(OffsetIndex.scala:247)
       at kafka.log.Log.rollToOffset(Log.scala:518)
       at kafka.log.Log.roll(Log.scala:502)
       at kafka.log.Log.maybeRoll(Log.scala:484)
       at kafka.log.Log.append(Log.scala:297)

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
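For illustration, a hedged sketch of the "unmap before resizing" idea on pre-Java-9 JVMs. It deliberately uses internal JDK classes (sun.nio.ch.DirectBuffer and sun.misc.Cleaner), the kind of internal API the comment refers to; it is not the patch itself, and the file name and sizes are made up.

```java
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class UnmapBeforeResize {
    public static void main(String[] args) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile("example.index", "rw")) {
            raf.setLength(4096);
            MappedByteBuffer mmap =
                raf.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, 4096);

            // On Windows, setLength() fails while a user-mapped section is still open,
            // so explicitly release the mapping first (internal API, pre-Java 9 only).
            sun.misc.Cleaner cleaner = ((sun.nio.ch.DirectBuffer) mmap).cleaner();
            if (cleaner != null) {
                cleaner.clean();
            }

            // Now the resize can proceed, and the file can be re-mapped at the new size.
            raf.setLength(8192);
            mmap = raf.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, 8192);
        }
    }
}
```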
Re: [PR] KAFKA-2719: Use wildcard classpath for dependant-libs [kafka]
b-goyal commented on PR #400: URL: https://github.com/apache/kafka/pull/400#issuecomment-1997293586 PR switches to wildcard classpath for dependant libs to restrict the length of classpath, thereby reducing command line length. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] commitOffsets can be passed the offsets to commit [kafka]
b-goyal commented on PR #10: URL: https://github.com/apache/kafka/pull/10#issuecomment-1997291772 This adds another version of `commitOffsets` that takes the offsets to commit as a parameter. Without this change, getting correct user code is very hard. Despite kafka's at-least-once guarantees, most user code doesn't actually have that guarantee, and is almost certainly wrong if doing batch processing. Getting it right requires some very careful synchronization between all consumer threads, which is both: 1) painful to get right 2) slow b/c of the need to stop all workers during a commit. This small change simplifies a lot of this. This was discussed extensively on the user mailing list, on the thread "are kafka consumer apps guaranteed to see msgs at least once?" You can also see an example implementation of a user api which makes use of this, to get proper at-least-once guarantees by _user_ code, even for batches: https://github.com/quantifind/kafka-utils/pull/1 I'm open to any suggestions on how to add unit tests for this. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
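For comparison with later Kafka versions: the new consumer API exposes the same idea through commitSync(Map), committing exactly the offsets your workers have finished processing rather than "everything consumed so far". A minimal sketch, with made-up broker, group and topic names:

```java
import java.time.Duration;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class ExplicitOffsetCommit {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "batch-workers");
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("events"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                Map<TopicPartition, OffsetAndMetadata> done = new HashMap<>();
                for (ConsumerRecord<String, String> record : records) {
                    process(record); // at-least-once: only processed records get committed
                    done.put(new TopicPartition(record.topic(), record.partition()),
                             new OffsetAndMetadata(record.offset() + 1));
                }
                if (!done.isEmpty()) {
                    consumer.commitSync(done); // commit only what was actually processed
                }
            }
        }
    }

    static void process(ConsumerRecord<String, String> record) {
        // batch/worker processing would go here
    }
}
```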
Re: [PR] Rack-Aware replica assignment option [kafka]
b-goyal commented on PR #14: URL: https://github.com/apache/kafka/pull/14#issuecomment-1997291760 Adding a rack-id to kafka config. This rack-id can be used during replica assignment by using the max-rack-replication argument in the admin scripts (create topic, etc.). By default the original replication assignment algorithm is used because max-rack-replication defaults to -1. max-rack-replication > -1 is not honored if you are doing manual replica assignment (preffered). If this looks good I can add some test cases specific to the rack-aware assignment. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] Patch for KAFKA-2055: ConsumerBounceTest.testSeekAndCommitWithBrokerFail... [kafka]
b-goyal commented on PR #60: URL: https://github.com/apache/kafka/pull/60#issuecomment-1997291729 ...ures transient failure -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-948 : Update ReplicaStateMachine.scala [kafka]
b-goyal commented on PR #5: URL: https://github.com/apache/kafka/pull/5#issuecomment-1997291720 KAFKA-948: When the broker which is the leader for a partition is down, the ISR list in the LeaderAndISR path is updated. But if a broker which is not the leader of the partition is down, the ISR list is not updated. This is an issue because the ISR list then contains a stale entry. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] zkclient and scalatest library updates [kafka]
b-goyal commented on PR #3: URL: https://github.com/apache/kafka/pull/3#issuecomment-1997291703 Following https://issues.apache.org/jira/browse/KAFKA-826 I forked the code and included fixes for 2 bugs I reported, https://issues.apache.org/jira/browse/KAFKA-807 and https://issues.apache.org/jira/browse/KAFKA-809 All tests pass except kafka.log.LogTest which fails on my Mac--I don't think it is related to the zkclient fix, I could be wrong. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] Switch to using scala 2.9.2 [kafka]
b-goyal commented on PR #1: URL: https://github.com/apache/kafka/pull/1#issuecomment-1997291688 Compiled and used fine. I had issues with the tests though. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-2639: Refactoring of ZkUtils [kafka]
b-goyal commented on PR #303: URL: https://github.com/apache/kafka/pull/303#issuecomment-1997291631 I've split the work of KAFKA-1695 because this refactoring touches a large number of files. Most of the changes are trivial, but I feel it will be easier to review this way. This pull request includes the one @Parth-Brahmbhatt started to address KAFKA-1695. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-2015: Enable ConsoleConsumer to use new consumer [kafka]
b-goyal commented on PR #144: URL: https://github.com/apache/kafka/pull/144#issuecomment-1997291590 This extends the original patch done by GZ to provide Console access to both the new and old consumer API's. The code follows a pattern similar to that already used in ConsoleProducer. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] Exclude conflicting zookeeper version from 'com.101tec:zkclient' dependencies [kafka]
b-goyal commented on PR #162: URL: https://github.com/apache/kafka/pull/162#issuecomment-1997291552 The 'com.101tec:zkclient:0.5' package brings in a dependency on an older zookeeper version: `3.4.4`. This causes conflicts if consumers of the kafka jar are trying to use the `maven-enforcer` plugin. This plugin ensures there are no conflicts in your dependency closure. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-294 [kafka]
b-goyal commented on PR #2: URL: https://github.com/apache/kafka/pull/2#issuecomment-1997291692 This issue can be caused by a non-existent path, but also by a misunderstanding of the config file. A short example will help the user. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-1545: KafkaHealthcheck.register failure [kafka]
b-goyal commented on PR #48: URL: https://github.com/apache/kafka/pull/48#issuecomment-1997291682 KAFKA-1545: java.net.InetAddress.getLocalHost in KafkaHealthcheck.register may fail on some irregular hostnames -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-2232: Make MockProducer generic [kafka]
b-goyal commented on PR #68: URL: https://github.com/apache/kafka/pull/68#issuecomment-1997291446 MockConsumer and MockProducer have been moved to test source set. KeySerializer and ValueSerializer have been added to mimic actual KafkaProducer behavior. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] Corrected the Changes in ZkUtils.scala - KAFKA-2167 [kafka]
b-goyal commented on PR #63: URL: https://github.com/apache/kafka/pull/63#issuecomment-1997291423 Corrected Spelling errors. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-1370 Added Gradle startup script for Windows [kafka]
b-goyal commented on PR #22: URL: https://github.com/apache/kafka/pull/22#issuecomment-1997291413 This patch adds Gradle startup script for Windows -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] Reducing code duplication in DefaultEventHandler [kafka]
b-goyal commented on PR #125: URL: https://github.com/apache/kafka/pull/125#issuecomment-1997291643 No functionality was changed -- just removed code duplication & [get advantage of case class copy method](http://stackoverflow.com/questions/7249396/how-to-clone-a-case-class-instance-and-change-just-one-field-in-scala). -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-2442: Fixing transiently failing test [kafka]
b-goyal commented on PR #160: URL: https://github.com/apache/kafka/pull/160#issuecomment-1997291371 Made the following changes: 1. Made the quotas very small. (100 and 10 bytes/sec for producer and consumer respectively) 2. For the producer, I'm asserting the throttle_time with a timed loop using waitUntilTrue 3. For the consumer, I'm simply calling a timed poll in a loop until the server side throttle time metric returns true -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
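For readers unfamiliar with the pattern referenced in point 2, a rough Java analogue of a waitUntilTrue-style timed loop might look like this (a hypothetical helper for illustration; Kafka's actual TestUtils.waitUntilTrue is Scala and its signature may differ):

```java
import java.util.function.BooleanSupplier;

public final class WaitUtil {
    // Poll the condition until it becomes true or the timeout elapses, then fail the test.
    public static void waitUntilTrue(BooleanSupplier condition, String failureMessage, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return;
            }
            Thread.sleep(100); // back off briefly between checks
        }
        throw new AssertionError(failureMessage);
    }
}
```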
Re: [PR] KAFKA-294 [kafka]
b-goyal commented on PR #2: URL: https://github.com/apache/kafka/pull/2#issuecomment-1997291693 This issue can be caused by a non-existent path, but also by a misunderstanding of the config file. A short example will help the user. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-2638: Added default properties file to ConsumerPerformance [kafka]
b-goyal commented on PR #302: URL: https://github.com/apache/kafka/pull/302#issuecomment-1997291381 Blocker for SSL integration -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] MINOR: Make private class FetchManagerMetrics as a static private class with [kafka]
b-goyal commented on PR #214: URL: https://github.com/apache/kafka/pull/214#issuecomment-1997291651 Reduced visibility for members in Fetcher. Use Collections.singletonList instead of Arrays.asList in the unit test. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-2534: Fixes and unit tests for SSLTransportLayer buffer overflow [kafka]
b-goyal commented on PR #205: URL: https://github.com/apache/kafka/pull/205#issuecomment-1997291606 Unit tests which mock buffer overflow and underflow in the SSL transport layer, and fixes for a couple of issues in buffer overflow handling described in the JIRA. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
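As background on the kind of overflow handling being fixed, the JSSE contract is that an SSLEngine unwrap returning BUFFER_OVERFLOW means the destination buffer is too small and must be enlarged before retrying. A generic sketch of that rule (not the actual SSLTransportLayer code, and underflow handling is omitted):

```java
import java.nio.ByteBuffer;
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLEngineResult;
import javax.net.ssl.SSLException;

final class SslBuffers {
    // Unwrap network bytes into appBuffer, growing it if the engine reports BUFFER_OVERFLOW.
    static ByteBuffer unwrapWithGrow(SSLEngine engine, ByteBuffer netBuffer, ByteBuffer appBuffer)
            throws SSLException {
        while (true) {
            SSLEngineResult result = engine.unwrap(netBuffer, appBuffer);
            if (result.getStatus() == SSLEngineResult.Status.BUFFER_OVERFLOW) {
                // Allocate a larger application buffer and preserve whatever was already unwrapped.
                int newSize = Math.max(engine.getSession().getApplicationBufferSize(),
                                       appBuffer.capacity() * 2);
                ByteBuffer bigger = ByteBuffer.allocate(newSize);
                appBuffer.flip();
                bigger.put(appBuffer);
                appBuffer = bigger;
            } else {
                return appBuffer;
            }
        }
    }
}
```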
Re: [PR] KAFKA-2419; Garbage collect unused sensors [kafka]
b-goyal commented on PR #233: URL: https://github.com/apache/kafka/pull/233#issuecomment-1997291613 As discussed in KAFKA-2419, I've added a time-based sensor retention config to Sensor. Sensors that have not been "recorded" for 'n' seconds are eligible for expiration. In addition to the time-based retention, I've also altered several tests to close the Metrics and scheduler objects, since leaking them while running tests causes TestUtils.verifyNonDaemonThreadStatus to fail. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
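To illustrate the retention idea, here is a standalone sketch of the expiration rule described above (this is not the actual Sensor code or its configuration API, just the concept):

```java
final class ExpiringSensor {
    private final long retentionMs;
    private volatile long lastRecordTimeMs;

    ExpiringSensor(long retentionMs) {
        this.retentionMs = retentionMs;
        this.lastRecordTimeMs = System.currentTimeMillis();
    }

    void record(double value) {
        // Every recording refreshes the sensor's "last used" timestamp.
        lastRecordTimeMs = System.currentTimeMillis();
        // ... update the underlying metrics with `value` ...
    }

    // A sensor that has not been recorded for longer than the retention period is eligible for removal.
    boolean hasExpired(long nowMs) {
        return nowMs - lastRecordTimeMs > retentionMs;
    }
}
```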
Re: [PR] support commit offset after consumed [kafka]
b-goyal commented on PR #207: URL: https://github.com/apache/kafka/pull/207#issuecomment-1997291340 The original version of Kafka does not support "commit offset after consuming": when kafka.consumer.ConsumerIterator[K, V].next() returns a MessageAndMetadata, its consumed offset has already been recorded, and the commit thread may commit this offset before the consuming process has finished. If the JVM then restarts or goes down, this message will not be consumed again. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
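The semantics being asked for, committing only after processing completes, can be illustrated with the modern Java consumer, which supports it by disabling auto-commit and committing explicitly (shown purely as an illustration of the desired behavior, not the old high-level consumer touched by this PR):

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class CommitAfterProcessing {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "example-group");
        props.put("enable.auto.commit", "false"); // do not commit before processing is done
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("example-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    process(record); // crash here => offset is not committed, record is redelivered
                }
                consumer.commitSync(); // commit only after the whole batch has been processed
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        System.out.println(record.value());
    }
}
```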
Re: [PR] Fix for KAFKA-2446 [kafka]
b-goyal commented on PR #152: URL: https://github.com/apache/kafka/pull/152#issuecomment-1997291313 This bug was introduced while committing KAFKA-2205. Basically, the path for topic overrides was renamed to "topic" from "topics". However, this causes existing topic config overrides to break because they will not be read from ZK anymore since the path is different. https://reviews.apache.org/r/34554/ -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-2320; Test commit [kafka]
b-goyal commented on PR #76: URL: https://github.com/apache/kafka/pull/76#issuecomment-1997291570 Trying to test the CI build created via KAFKA-2320. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-2562: update kafka scripts to use new tools/code [kafka]
b-goyal commented on PR #242: URL: https://github.com/apache/kafka/pull/242#issuecomment-1997291551 Updated kafka-producer-perf-test.sh to use org.apache.kafka.clients.tools.ProducerPerformance. Updated build.gradle to add kafka-tools-0.9.0.0-SNAPSHOT.jar to kafka/libs folder. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-948 : Update ReplicaStateMachine.scala [kafka]
b-goyal commented on PR #5: URL: https://github.com/apache/kafka/pull/5#issuecomment-1997291530 KAFKA-948: When the broker that is the leader for a partition goes down, the ISR list in the LeaderAndISR path is updated. But if a broker that is not the leader of the partition goes down, the ISR list does not get updated. This is an issue because the ISR list then contains a stale entry. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] Should stop offset backing store in Copycat Worker's stop method [kafka]
b-goyal commented on PR #232: URL: https://github.com/apache/kafka/pull/232#issuecomment-1997291473 @ewencp @gwenshap This is a trivial bug fix -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] Modified the async producer so it re-queues failed batches. [kafka]
b-goyal commented on PR #7: URL: https://github.com/apache/kafka/pull/7#issuecomment-1997289604 I'm working on an application that needs the throughput offered by an async producer but also needs to handle send failures gracefully like a sync producer. I modified the ProducerSendThread so it will re-queue failed batches. This allowed me to determine the behavior I wanted from my producer with the queue config parameters. A "queue.enqueue.timeout.ms=0" allowed me to get runtime exceptions when sends failed often enough to fill the queue. This also allowed me to use "queue.buffering.max.messages" to control how tolerant the application is to network blips. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
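For reference, the two queue settings quoted above belong to the old (Scala) async producer configuration; a hedged sketch of how they might be set (the property names other than the two quoted in the comment, such as producer.type and metadata.broker.list, are the standard 0.8-era settings and may not match this PR's exact vintage):

```java
import java.util.Properties;

public class AsyncProducerConfigSketch {
    public static Properties asyncProducerProps() {
        Properties props = new Properties();
        props.put("metadata.broker.list", "localhost:9092");
        props.put("producer.type", "async");
        // Fail fast instead of blocking when the in-memory queue is full.
        props.put("queue.enqueue.timeout.ms", "0");
        // Bound how many unsent messages may pile up during a network blip.
        props.put("queue.buffering.max.messages", "10000");
        return props;
    }
}
```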
Re: [PR] Upgrade Gradle wrapper to Gradle 2.0 [kafka]
b-goyal commented on PR #29: URL: https://github.com/apache/kafka/pull/29#issuecomment-1997291512 This patch upgrades the Gradle wrapper to Gradle 2.0. As a consequence, the license plugin dependency had to be upgraded as well. Issue: KAFKA-1559 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-1833: OfflinePartitionLeaderSelector may return null leader when ISR and Assgi... [kafka]
b-goyal commented on PR #39: URL: https://github.com/apache/kafka/pull/39#issuecomment-1997291408 PR for [KAFKA-1833](https://issues.apache.org/jira/browse/KAFKA-1833). In OfflinePartitionLeaderSelector::selectLeader, when liveBrokersInIsr is not empty but has no broker in common with liveAssignedReplicas, selectLeader will return no leader. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
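A stripped-down sketch of the selection step being discussed (a hypothetical Java rendering for illustration; the real logic lives in Scala in OfflinePartitionLeaderSelector and differs in detail):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

final class OfflineLeaderSelection {
    // Pick the first live ISR broker that is also in the assigned replica list.
    // If the two sets have no broker in common, no leader can be chosen this way.
    static Optional<Integer> selectLeader(List<Integer> liveBrokersInIsr, List<Integer> liveAssignedReplicas) {
        List<Integer> candidates = new ArrayList<>(liveBrokersInIsr);
        candidates.retainAll(liveAssignedReplicas);
        return candidates.isEmpty() ? Optional.empty() : Optional.of(candidates.get(0));
    }
}
```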
Re: [PR] Unmap before resizing [kafka]
b-goyal commented on PR #6: URL: https://github.com/apache/kafka/pull/6#issuecomment-1997291402 While I was studying how MappedByteBuffer works, I ran into a file-sharing exception at runtime on Windows. I applied what I learned to generate a patch which uses an internal JDK API to solve this problem.
---
Caused by: java.io.IOException: The requested operation cannot be performed on a file with a user-mapped section open
    at java.io.RandomAccessFile.setLength(Native Method)
    at kafka.log.OffsetIndex.liftedTree2$1(OffsetIndex.scala:263)
    at kafka.log.OffsetIndex.resize(OffsetIndex.scala:262)
    at kafka.log.OffsetIndex.trimToValidSize(OffsetIndex.scala:247)
    at kafka.log.Log.rollToOffset(Log.scala:518)
    at kafka.log.Log.roll(Log.scala:502)
    at kafka.log.Log.maybeRoll(Log.scala:484)
    at kafka.log.Log.append(Log.scala:297)
-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
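For background, the usual way to force an unmap through internal APIs on pre-Java-9 JVMs is via sun.nio.ch.DirectBuffer and its Cleaner. A hedged sketch of that general technique (not necessarily the exact code in this patch, and it relies on unsupported JDK internals):

```java
import java.nio.MappedByteBuffer;

final class MappedBuffers {
    // Forcefully unmap a MappedByteBuffer before resizing the underlying file.
    // Relies on JDK-internal classes; works on pre-Java-9 Oracle/OpenJDK only.
    static void unmap(MappedByteBuffer buffer) {
        sun.misc.Cleaner cleaner = ((sun.nio.ch.DirectBuffer) buffer).cleaner();
        if (cleaner != null) {
            cleaner.clean();
        }
    }
}
```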
Re: [PR] KAFKA-2055; Fix transient ConsumerBounceTest.testSeekAndCommitWithBro… [kafka]
b-goyal commented on PR #98: URL: https://github.com/apache/kafka/pull/98#issuecomment-1997291356 Fix transient ConsumerBounceTest.testSeekAndCommitWithBrokerFailures failure. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-1054; Eliminate Scala Compilation Warnings [kafka]
b-goyal commented on PR #57: URL: https://github.com/apache/kafka/pull/57#issuecomment-1997291322 Changes: - Suppressed compiler warnings about type erasure in matching via unboxing by Jon Riehl. - Suppressed warning caused by slight difference in input function type by John Riehl. - Fix compiler warnings: ServerShutdownTest, DelayedJoinGroup function signature by Blake Smith. - Fix Scala 2.11 warnings. `Pair` has been deprecated, `try` without `catch` and `finally` is useless and initialisation order fix by Ismael Juma. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] commitOffsets can be passed the offsets to commit [kafka]
b-goyal commented on PR #10: URL: https://github.com/apache/kafka/pull/10#issuecomment-1997289614 This adds another version of `commitOffsets` that takes the offsets to commit as a parameter. Without this change, getting correct user code is very hard. Despite kafka's at-least-once guarantees, most user code doesn't actually have that guarantee, and is almost certainly wrong if doing batch processing. Getting it right requires some very careful synchronization between all consumer threads, which is both: 1) painful to get right 2) slow b/c of the need to stop all workers during a commit. This small change simplifies a lot of this. This was discussed extensively on the user mailing list, on the thread "are kafka consumer apps guaranteed to see msgs at least once?" You can also see an example implementation of a user api which makes use of this, to get proper at-least-once guarantees by _user_ code, even for batches: https://github.com/quantifind/kafka-utils/pull/1 I'm open to any suggestions on how to add unit tests for this. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
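For comparison, the new Java consumer later exposed this capability directly; committing an explicit offset map looks roughly like the following (modern KafkaConsumer API shown for illustration, not the old high-level consumer's commitOffsets overload that this PR adds):

```java
import java.util.Collections;
import java.util.Map;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

final class ExplicitCommit {
    // Commit a single partition's offset once the caller knows processing up to lastProcessedOffset is done.
    static void commitUpTo(KafkaConsumer<?, ?> consumer, TopicPartition partition, long lastProcessedOffset) {
        Map<TopicPartition, OffsetAndMetadata> offsets =
                Collections.singletonMap(partition, new OffsetAndMetadata(lastProcessedOffset + 1));
        consumer.commitSync(offsets); // the committed offset is the next offset to consume
    }
}
```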
Re: [PR] KAFKA-2071: Replace Producer Request/Response with their org.apache.kafka.common.requests equivalents [kafka]
b-goyal commented on PR #110: URL: https://github.com/apache/kafka/pull/110#issuecomment-1997289591 This PR replaces all occurrences of kafka.api.ProducerRequest/ProducerResponse by their common equivalents. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-2486; fix performance regression in new consumer [kafka]
b-goyal commented on PR #180: URL: https://github.com/apache/kafka/pull/180#issuecomment-1997289535 The sleep() in KafkaConsumer's poll blocked any pending IO from being completed and created a performance bottleneck. It was intended to implement the fetch backoff behavior, but that was a misunderstanding of the setting "retry.backoff.ms" which should only affect failed fetches. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-2527; System Test for Quotas in Ducktape [kafka]
b-goyal commented on PR #275: URL: https://github.com/apache/kafka/pull/275#issuecomment-1997289572 @granders Can you take a look at this quota system test? -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] Replace "it's" with "its" where appropriate [kafka]
b-goyal commented on PR #186: URL: https://github.com/apache/kafka/pull/186#issuecomment-1997289545 No Jira ticket created, as the Contributing Code Changes doc says it's not necessary for javadoc typo fixes. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] Switch to using scala 2.9.2 [kafka]
b-goyal commented on PR #1: URL: https://github.com/apache/kafka/pull/1#issuecomment-1997289540 Compiled and used fine. I had issues with the tests though. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] validate configuration while parsing ConfigDef [kafka]
b-goyal commented on PR #66: URL: https://github.com/apache/kafka/pull/66#issuecomment-1997289518 In the Java client, the ConfigDef class's parse() function should return parsed and validated values (if a validator is present), but it does not validate the configuration. For example, I can create a KafkaProducer instance with a negative value for batch.size, and it then throws a NullPointerException while sending, which is hard to debug. This commit fixes that. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
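To make the fix concrete, this is roughly what a validated definition looks like with ConfigDef (a sketch; the key name, default, and bounds here are illustrative, not the actual producer config definition):

```java
import java.util.Collections;
import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.common.config.ConfigException;

public class ConfigDefValidationExample {
    public static void main(String[] args) {
        ConfigDef def = new ConfigDef()
                .define("batch.size",
                        ConfigDef.Type.INT,
                        16384,
                        ConfigDef.Range.atLeast(0),   // validator: reject negative sizes
                        ConfigDef.Importance.MEDIUM,
                        "The producer batch size in bytes.");

        // With validation applied in parse(), a negative value fails immediately
        // with a ConfigException instead of a NullPointerException later on.
        try {
            def.parse(Collections.singletonMap("batch.size", "-1"));
        } catch (ConfigException e) {
            System.out.println("Rejected: " + e.getMessage());
        }
    }
}
```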
Re: [PR] MINOR: flush record collector after local state flush [kafka]
b-goyal commented on PR #304: URL: https://github.com/apache/kafka/pull/304#issuecomment-1997289489 @guozhangwang Fix the order of flushing. Undoing the change I made some time ago. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] KAFKA-2594 Added InMemoryLRUCacheStore [kafka]
b-goyal commented on PR #256: URL: https://github.com/apache/kafka/pull/256#issuecomment-1997289508 Added a new `KeyValueStore` implementation called `InMemoryLRUCacheStore` that keeps a maximum number of entries in memory, and as the size exceeds the capacity the least-recently used entry is removed from the store and the backing topic. Also added unit tests for this new store and the existing `InMemoryKeyValueStore` and `RocksDBKeyValueStore` implementations. A new `KeyValueStoreTestDriver` class simplifies all of the other tests, and can be used by other libraries to help test their own custom implementations. This PR depends upon [KAFKA-2593](https://issues.apache.org/jira/browse/KAFKA-2593) and its PR at https://github.com/apache/kafka/pull/255. Once that PR is merged, I can rebase this PR if desired. Two issues were uncovered when creating these new unit tests, and both are also addressed as separate (small) commits in this PR:
- The `RocksDBKeyValueStore` initialization was not creating the file system directory if missing.
- `MeteredKeyValueStore` was casting to `ProcessorContextImpl` to access the `RecordCollector`, which prevented using `MeteredKeyValueStore` implementations in tests where something other than `ProcessorContextImpl` was used. The fix was to introduce a `RecordCollector.Supplier` interface to define this `recordCollector()` method, and change `ProcessorContextImpl` and `MockProcessorContext` to both implement this interface.
Now, `MeteredKeyValueStore` can cast to the new interface to access the record collector rather than to a single concrete implementation, making it possible to use any and all current stores inside unit tests. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
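The eviction behavior described, dropping the least-recently-used entry once capacity is exceeded, is the classic LinkedHashMap pattern; a minimal standalone sketch (not the actual InMemoryLRUCacheStore, which also has to propagate deletes to the backing topic):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public LruCache(int maxEntries) {
        // accessOrder=true so iteration order (and eviction) follows recency of access
        super(16, 0.75f, true);
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Returning true evicts the least-recently-used entry; a real store would
        // also delete the evicted key from its changelog topic here.
        return size() > maxEntries;
    }
}
```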
Re: [PR] KAFKA-2548; kafka-merge-pr tool fails to update JIRA with fix version 0.9.0.0 [kafka]
b-goyal commented on PR #238: URL: https://github.com/apache/kafka/pull/238#issuecomment-1997289449 Simplified the logic to choose the default fix version. We just hardcode it for `trunk` and try to compute it based on the branch name for the rest. Removed logic that tries to handle forked release branches as it seems to be specific to how the Spark project handles their JIRA. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] HOTFIX: Persistent store in ProcessorStateManagerTest [kafka]
b-goyal commented on PR #276: URL: https://github.com/apache/kafka/pull/276#issuecomment-1997289406 @ymatsuda @junrao Could you take a quick look? The current unit test is failing on this. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] MINOR: typing ProcessorDef [kafka]
b-goyal commented on PR #289: URL: https://github.com/apache/kafka/pull/289#issuecomment-1997289207 @guozhangwang This code change properly types ProcessorDef. This also makes KStream.process() typesafe. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] MINOR: Fixed ConsumerRecord constructor javadoc [kafka]
b-goyal commented on PR #85: URL: https://github.com/apache/kafka/pull/85#issuecomment-1997289189 Refactoring of ConsumerRecord made in https://github.com/apache/kafka/commit/0699ff2ce60abb466cab5315977a224f1a70a4da#diff-fafe8d3a3942f3c6394927881a9389b2 left the ConsumerRecord constructor javadoc inconsistent with the implementation. This patch fixes the ConsumerRecord constructor javadoc to be in line with the implementation. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
Re: [PR] MINOR: Added to .gitignore Kafka server logs directory [kafka]
b-goyal commented on PR #94: URL: https://github.com/apache/kafka/pull/94#issuecomment-1997289167 When running the Kafka server from sources, a logs directory gets created in the root of the repository, and Kafka server logs end up there. Currently that directory is not ignored by git. This change adds the root logs directory to .gitignore so that Kafka server logs are ignored and do not get tracked by git. -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org