[jira] [Commented] (KAFKA-2607) Review `Time` interface and its usage
[ https://issues.apache.org/jira/browse/KAFKA-2607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16429975#comment-16429975 ]

Guozhang Wang commented on KAFKA-2607:
--------------------------------------

Thanks for sharing, Ismael. This is a great read. I vaguely remember that in Java 7 the performance difference of nanoTime was indeed non-negligible, but I cannot find the source of my memory now...

> Review `Time` interface and its usage
> -------------------------------------
>
>                 Key: KAFKA-2607
>                 URL: https://issues.apache.org/jira/browse/KAFKA-2607
>             Project: Kafka
>          Issue Type: Improvement
>    Affects Versions: 0.11.0.0, 1.0.0
>            Reporter: Ismael Juma
>            Priority: Major
>              Labels: newbie
>
> Two of the `Time` interface's methods are `milliseconds` and `nanoseconds`, which are implemented in `SystemTime` as follows:
> {code}
> @Override
> public long milliseconds() {
>     return System.currentTimeMillis();
> }
>
> @Override
> public long nanoseconds() {
>     return System.nanoTime();
> }
> {code}
> The issue with this interface is that it makes it seem that the difference is only about the unit (`ms` versus `ns`), whereas it's much more than that: https://blogs.oracle.com/dholmes/entry/inside_the_hotspot_vm_clocks
> We should probably change the names of the methods and review our usage to see if we're using the right one in the various places.

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-2607) Review `Time` interface and its usage
[ https://issues.apache.org/jira/browse/KAFKA-2607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16429906#comment-16429906 ]

Ismael Juma commented on KAFKA-2607:
------------------------------------

The following states that nanoTime and currentTimeMillis perform similarly on Linux: http://pzemtsov.github.io/2017/07/23/the-slow-currenttimemillis.html

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
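The distinction the ticket is about can be seen in a short, self-contained sketch. This is not Kafka's actual `Time` interface, just an illustration of why the two methods differ in kind and not merely in unit: `nanoTime` readings are only meaningful as differences, while `currentTimeMillis` is a wall clock subject to adjustment.

```java
// A minimal sketch (not Kafka's code) contrasting the two clocks behind
// `milliseconds()` and `nanoseconds()`.
public class ClockSketch {

    interface Time {
        // Wall-clock time: subject to NTP adjustment; suitable for timestamps.
        long milliseconds();

        // Monotonic time: arbitrary origin; only meaningful for measuring
        // elapsed intervals between two readings.
        long nanoseconds();
    }

    static final Time SYSTEM = new Time() {
        @Override public long milliseconds() { return System.currentTimeMillis(); }
        @Override public long nanoseconds() { return System.nanoTime(); }
    };

    public static void main(String[] args) throws InterruptedException {
        // Correct use of nanoTime: the difference of two readings.
        long start = SYSTEM.nanoseconds();
        Thread.sleep(15);
        long elapsedMs = (SYSTEM.nanoseconds() - start) / 1_000_000;
        System.out.println("elapsed >= 10 ms: " + (elapsedMs >= 10));

        // A single nanoTime reading has no defined origin, so it must never
        // be compared against wall-clock time.
        System.out.println("wall clock ms: " + SYSTEM.milliseconds());
    }
}
```

Renaming the methods along the lines of "wall clock" vs. "monotonic/elapsed" (the exact names would be up to the review) would make misuse harder to write.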
[jira] [Updated] (KAFKA-6764) ConsoleConsumer behavior inconsistent when specifying --partition with --from-beginning
[ https://issues.apache.org/jira/browse/KAFKA-6764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Larry McQueary updated KAFKA-6764:
----------------------------------
    Description: 

Per its usage statement, {{kafka-console-consumer.sh}} ignores {{\-\-from-beginning}} when the specified consumer group has committed offsets, and sets {{auto.offset.reset}} to {{latest}}. However, if {{\-\-partition}} is also specified, {{\-\-from-beginning}} is observed in all cases, whether there are committed offsets or not.

This happens because when {{\-\-from-beginning}} is specified, {{offsetArg}} is set to {{OffsetRequest.EarliestTime}}. However, {{offsetArg}} is [only passed to the constructor|https://github.com/apache/kafka/blob/fedac0cea74fce529ee1c0cefd6af53ecbdd/core/src/main/scala/kafka/tools/ConsoleConsumer.scala#L76-L79] for {{NewShinyConsumer}} when {{\-\-partition}} is also specified. Hence, it is honored in this case and not the other.

This case should either be handled consistently, or the usage statement should be modified to indicate the actual behavior/usage.

    was:

Per its usage statement, {{kafka-console-consumer.sh}} ignores {{\-\-from-beginning}} when the specified consumer group has committed offsets, and sets {{auto.offset.reset}} to {{latest}}. However, if {{\-\-partition}} is also specified, {{\-\-from-beginning}} is observed in all cases, whether there are committed offsets or not.

This happens because when {{\-\-from-beginning}} is specified, {{offsetArg}} is set to {{OffsetRequest.EarliestTime}}. However, {{offsetArg}} is [only passed to the constructor|https://github.com/apache/kafka/blob/fedac0cea74fce529ee1c0cefd6af53ecbdd/core/src/main/scala/kafka/tools/ConsoleConsumer.scala#L76-L79] for {{NewShinyConsumer}} when {{\-\-partition}} is also specified.

This case should either be handled consistently, or the usage statement should be modified to indicate the actual behavior/usage.
> ConsoleConsumer behavior inconsistent when specifying --partition with --from-beginning
> ---------------------------------------------------------------------------------------
>
>                 Key: KAFKA-6764
>                 URL: https://issues.apache.org/jira/browse/KAFKA-6764
>             Project: Kafka
>          Issue Type: Bug
>          Components: consumer
>            Reporter: Larry McQueary
>            Assignee: Larry McQueary
>            Priority: Minor
>              Labels: newbie
>
> Per its usage statement, {{kafka-console-consumer.sh}} ignores {{\-\-from-beginning}} when the specified consumer group has committed offsets, and sets {{auto.offset.reset}} to {{latest}}. However, if {{\-\-partition}} is also specified, {{\-\-from-beginning}} is observed in all cases, whether there are committed offsets or not.
> This happens because when {{\-\-from-beginning}} is specified, {{offsetArg}} is set to {{OffsetRequest.EarliestTime}}. However, {{offsetArg}} is [only passed to the constructor|https://github.com/apache/kafka/blob/fedac0cea74fce529ee1c0cefd6af53ecbdd/core/src/main/scala/kafka/tools/ConsoleConsumer.scala#L76-L79] for {{NewShinyConsumer}} when {{\-\-partition}} is also specified. Hence, it is honored in this case and not the other.
> This case should either be handled consistently, or the usage statement should be modified to indicate the actual behavior/usage.

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (KAFKA-6764) ConsoleConsumer behavior inconsistent when specifying --partition with --from-beginning
[ https://issues.apache.org/jira/browse/KAFKA-6764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Larry McQueary reassigned KAFKA-6764:
-------------------------------------
    Assignee: Larry McQueary

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
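The "handled consistently" option in the ticket can be sketched in a few lines. This is hypothetical code, not ConsoleConsumer's (which is Scala); `resolveStart` and the sentinel values merely stand in for the real offset-resolution path, with `EARLIEST`/`LATEST` playing the role of `OffsetRequest.EarliestTime`/`LatestTime`.

```java
import java.util.Optional;

// A hypothetical sketch of consistent --from-beginning handling: committed
// offsets always win, and --from-beginning only applies when the group has
// no committed offset, regardless of whether --partition was given.
public class OffsetArgSketch {
    static final long EARLIEST = -2L; // analogous to OffsetRequest.EarliestTime
    static final long LATEST   = -1L; // analogous to OffsetRequest.LatestTime

    static long resolveStart(boolean fromBeginning, Optional<Long> committedOffset) {
        return committedOffset.orElse(fromBeginning ? EARLIEST : LATEST);
    }

    public static void main(String[] args) {
        // No committed offset: --from-beginning is honored.
        System.out.println(resolveStart(true, Optional.empty()));  // -2
        // Committed offset present: --from-beginning is ignored.
        System.out.println(resolveStart(true, Optional.of(42L)));  // 42
        // Neither: default to latest.
        System.out.println(resolveStart(false, Optional.empty())); // -1
    }
}
```

The alternative fix the ticket mentions, documenting the current asymmetric behavior in the usage statement, needs no code change at all.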
[jira] [Issue Comment Deleted] (KAFKA-6628) RocksDBSegmentedBytesStoreTest does not cover time window serdes
[ https://issues.apache.org/jira/browse/KAFKA-6628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matthias J. Sax updated KAFKA-6628:
-----------------------------------
    Comment: was deleted

(was: Opened below pr for review https://github.com/apache/kafka/pull/4834)

> RocksDBSegmentedBytesStoreTest does not cover time window serdes
> ----------------------------------------------------------------
>
>                 Key: KAFKA-6628
>                 URL: https://issues.apache.org/jira/browse/KAFKA-6628
>             Project: Kafka
>          Issue Type: Improvement
>          Components: streams
>            Reporter: Guozhang Wang
>            Assignee: Liju
>            Priority: Major
>              Labels: newbie, unit-test
>
> RocksDBSegmentedBytesStoreTest.java only covers session window serdes, but not time window serdes. We should fill in this coverage gap.

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-6752) Unclean leader election metric no longer working
[ https://issues.apache.org/jira/browse/KAFKA-6752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16429830#comment-16429830 ]

ASF GitHub Bot commented on KAFKA-6752:
---------------------------------------

omkreddy opened a new pull request #4838: KAFKA-6752: Enable unclean leader election metric
URL: https://github.com/apache/kafka/pull/4838

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org

> Unclean leader election metric no longer working
> ------------------------------------------------
>
>                 Key: KAFKA-6752
>                 URL: https://issues.apache.org/jira/browse/KAFKA-6752
>             Project: Kafka
>          Issue Type: Bug
>          Components: controller
>    Affects Versions: 1.1.0
>            Reporter: Jason Gustafson
>            Assignee: Manikumar
>            Priority: Major
>
> Happened to notice that the unclean leader election meter is no longer being updated. This was probably lost during the controller overhaul.

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (KAFKA-6752) Unclean leader election metric no longer working
[ https://issues.apache.org/jira/browse/KAFKA-6752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Manikumar reassigned KAFKA-6752:
--------------------------------
    Assignee: Manikumar

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (KAFKA-6736) ReassignPartitionsClusterTest#shouldMoveSubsetOfPartitions is flaky
[ https://issues.apache.org/jira/browse/KAFKA-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ted Yu updated KAFKA-6736:
--------------------------
    Description: 

Saw this from https://builds.apache.org/job/kafka-trunk-jdk8/2518/testReport/junit/kafka.admin/ReassignPartitionsClusterTest/shouldMoveSubsetOfPartitions/ :

{code}
kafka.common.AdminCommandFailedException: Partition reassignment currently in progress for Map(topic1-0 -> Buffer(100, 102), topic1-2 -> Buffer(100, 102), topic2-1 -> Buffer(101, 100), topic2-2 -> Buffer(100, 102)). Aborting operation
	at kafka.admin.ReassignPartitionsCommand.reassignPartitions(ReassignPartitionsCommand.scala:612)
	at kafka.admin.ReassignPartitionsCommand$.executeAssignment(ReassignPartitionsCommand.scala:215)
	at kafka.admin.ReassignPartitionsClusterTest.shouldMoveSubsetOfPartitions(ReassignPartitionsClusterTest.scala:242)
{code}

    was:

Saw this from https://builds.apache.org/job/kafka-trunk-jdk8/2518/testReport/junit/kafka.admin/ReassignPartitionsClusterTest/shouldMoveSubsetOfPartitions/ :

{code}
kafka.common.AdminCommandFailedException: Partition reassignment currently in progress for Map(topic1-0 -> Buffer(100, 102), topic1-2 -> Buffer(100, 102), topic2-1 -> Buffer(101, 100), topic2-2 -> Buffer(100, 102)). Aborting operation
	at kafka.admin.ReassignPartitionsCommand.reassignPartitions(ReassignPartitionsCommand.scala:612)
	at kafka.admin.ReassignPartitionsCommand$.executeAssignment(ReassignPartitionsCommand.scala:215)
	at kafka.admin.ReassignPartitionsClusterTest.shouldMoveSubsetOfPartitions(ReassignPartitionsClusterTest.scala:242)
{code}

> ReassignPartitionsClusterTest#shouldMoveSubsetOfPartitions is flaky
> -------------------------------------------------------------------
>
>                 Key: KAFKA-6736
>                 URL: https://issues.apache.org/jira/browse/KAFKA-6736
>             Project: Kafka
>          Issue Type: Test
>            Reporter: Ted Yu
>            Priority: Minor
>
> Saw this from https://builds.apache.org/job/kafka-trunk-jdk8/2518/testReport/junit/kafka.admin/ReassignPartitionsClusterTest/shouldMoveSubsetOfPartitions/ :
> {code}
> kafka.common.AdminCommandFailedException: Partition reassignment currently in progress for Map(topic1-0 -> Buffer(100, 102), topic1-2 -> Buffer(100, 102), topic2-1 -> Buffer(101, 100), topic2-2 -> Buffer(100, 102)). Aborting operation
> 	at kafka.admin.ReassignPartitionsCommand.reassignPartitions(ReassignPartitionsCommand.scala:612)
> 	at kafka.admin.ReassignPartitionsCommand$.executeAssignment(ReassignPartitionsCommand.scala:215)
> 	at kafka.admin.ReassignPartitionsClusterTest.shouldMoveSubsetOfPartitions(ReassignPartitionsClusterTest.scala:242)
> {code}

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (KAFKA-6735) Document how to skip findbugs / checkstyle when running unit test
[ https://issues.apache.org/jira/browse/KAFKA-6735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ted Yu updated KAFKA-6735:
--------------------------
    Description: 

Even when running a single unit test, the findbugs dependency results in some time spent before the test is actually run.

We should document how the findbugs dependency can be skipped in such a scenario:

{code}
-x findbugsMain -x findbugsTest -x checkStyleMain -x checkStyleTest
{code}

    was:

Even when running a single unit test, the findbugs dependency results in some time spent before the test is actually run.

We should document how the findbugs dependency can be skipped in such a scenario:

{code}
-x findbugsMain -x findbugsTest -x checkStyleMain -x checkStyleTest
{code}

> Document how to skip findbugs / checkstyle when running unit test
> -----------------------------------------------------------------
>
>                 Key: KAFKA-6735
>                 URL: https://issues.apache.org/jira/browse/KAFKA-6735
>             Project: Kafka
>          Issue Type: Test
>            Reporter: Ted Yu
>            Priority: Minor
>
> Even when running a single unit test, the findbugs dependency results in some time spent before the test is actually run.
> We should document how the findbugs dependency can be skipped in such a scenario:
> {code}
> -x findbugsMain -x findbugsTest -x checkStyleMain -x checkStyleTest
> {code}

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (KAFKA-5943) Reduce dependency on mock in connector tests
[ https://issues.apache.org/jira/browse/KAFKA-5943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ted Yu updated KAFKA-5943:
--------------------------
    Description: 

Currently connector tests make heavy use of mocks (EasyMock, PowerMock). This may hide the real logic behind operations and makes finding bugs difficult.

We should reduce the use of mocks so that developers can debug connector code using unit tests. This would shorten the development cycle for connectors.

    was:

Currently connector tests make heavy use of mocks (EasyMock, PowerMock). This may hide the real logic behind operations and makes finding bugs difficult.

We should reduce the use of mocks so that developers can debug connector code using unit tests. This would shorten the development cycle for connectors.

> Reduce dependency on mock in connector tests
> --------------------------------------------
>
>                 Key: KAFKA-5943
>                 URL: https://issues.apache.org/jira/browse/KAFKA-5943
>             Project: Kafka
>          Issue Type: Test
>            Reporter: Ted Yu
>            Priority: Minor
>              Labels: connector
>
> Currently connector tests make heavy use of mocks (EasyMock, PowerMock). This may hide the real logic behind operations and makes finding bugs difficult.
> We should reduce the use of mocks so that developers can debug connector code using unit tests.
> This would shorten the development cycle for connectors.

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (KAFKA-5946) Give connector method parameter better name
[ https://issues.apache.org/jira/browse/KAFKA-5946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16392411#comment-16392411 ]

Ted Yu edited comment on KAFKA-5946 at 4/8/18 3:35 PM:
-------------------------------------------------------

Thanks for taking it.

was (Author: yuzhih...@gmail.com):
Thanks for taking it.

> Give connector method parameter better name
> -------------------------------------------
>
>                 Key: KAFKA-5946
>                 URL: https://issues.apache.org/jira/browse/KAFKA-5946
>             Project: Kafka
>          Issue Type: Improvement
>            Reporter: Ted Yu
>            Assignee: Tanvi Jaywant
>            Priority: Major
>              Labels: connector, newbie
>
> During the development of KAFKA-5657, there were several iterations where the method call didn't match what the connector parameter actually represents.
> [~ewencp] had used connType as equivalent to connClass because Type wasn't used to differentiate source vs sink.
> [~ewencp] proposed the following:
> {code}
> It would help to convert all the uses of connType to connClass first, then standardize on class == java class, type == source/sink, name == user-specified name.
> {code}

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-6628) RocksDBSegmentedBytesStoreTest does not cover time window serdes
[ https://issues.apache.org/jira/browse/KAFKA-6628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16429762#comment-16429762 ]

Liju commented on KAFKA-6628:
-----------------------------

Opened the PR below for review: https://github.com/apache/kafka/pull/4834

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Issue Comment Deleted] (KAFKA-6628) RocksDBSegmentedBytesStoreTest does not cover time window serdes
[ https://issues.apache.org/jira/browse/KAFKA-6628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Liju updated KAFKA-6628:
------------------------
    Comment: was deleted

(was: Open below pr for review - [https://github.com/apache/kafka/pull/4834])

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-6628) RocksDBSegmentedBytesStoreTest does not cover time window serdes
[ https://issues.apache.org/jira/browse/KAFKA-6628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16429760#comment-16429760 ]

Liju commented on KAFKA-6628:
-----------------------------

Opened the PR below for review: https://github.com/apache/kafka/pull/4834

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (KAFKA-6628) RocksDBSegmentedBytesStoreTest does not cover time window serdes
[ https://issues.apache.org/jira/browse/KAFKA-6628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16429760#comment-16429760 ]

Liju edited comment on KAFKA-6628 at 4/8/18 1:40 PM:
-----------------------------------------------------

Open below pr for review - [https://github.com/apache/kafka/pull/4834]

was (Author: lijubjohn):
Open below pr for review - https://github.com/apache/kafka/pull/4834

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-6628) RocksDBSegmentedBytesStoreTest does not cover time window serdes
[ https://issues.apache.org/jira/browse/KAFKA-6628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16429752#comment-16429752 ]

Liju commented on KAFKA-6628:
-----------------------------

Hi [~guozhang] / [~mjsax], I have opened the PR below for this ticket; could one of you take a look?
[https://github.com/apache/kafka/pull/4834]

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Issue Comment Deleted] (KAFKA-6628) RocksDBSegmentedBytesStoreTest does not cover time window serdes
[ https://issues.apache.org/jira/browse/KAFKA-6628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Liju updated KAFKA-6628:
------------------------
    Comment: was deleted

(was: Hi [~guozhang] / [~mjsax] I have opened below pr for this ticket , could any of you take a look [https://github.com/apache/kafka/pull/4834])

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-5706) log the name of the error instead of the error code in response objects
[ https://issues.apache.org/jira/browse/KAFKA-5706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16429742#comment-16429742 ]

Manasvi Gupta commented on KAFKA-5706:
--------------------------------------

I also don't see a toString() function implemented in this class (Errors.java).

> log the name of the error instead of the error code in response objects
> -----------------------------------------------------------------------
>
>                 Key: KAFKA-5706
>                 URL: https://issues.apache.org/jira/browse/KAFKA-5706
>             Project: Kafka
>          Issue Type: Improvement
>          Components: clients, core
>    Affects Versions: 0.11.0.0
>            Reporter: Jun Rao
>            Assignee: Manasvi Gupta
>            Priority: Major
>              Labels: newbie
>
> Currently, when logging the error code in the response objects, we simply log response.toString(), which contains the error code. It would be useful to log the name of the corresponding exception for the error, which is more meaningful than an error code.

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-5706) log the name of the error instead of the error code in response objects
[ https://issues.apache.org/jira/browse/KAFKA-5706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16429739#comment-16429739 ]

Manasvi Gupta commented on KAFKA-5706:
--------------------------------------

Hi [~junrao],

1) Is there an example of the current error message and the desired one?
2) While digging through the code, I found the enum *Errors.java*, which provides a message() function that returns error messages. Is this the right class for this change?

Thanks,
Manasvi

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
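The direction of the ticket can be illustrated without Kafka on the classpath. The enum below is only a stand-in for Kafka's real Errors enum (the constants and codes here are illustrative); the point is that a Java enum's built-in name() already yields a readable identifier, so a log line can carry the name instead of, or alongside, the numeric code.

```java
// A minimal sketch (not Kafka's actual Errors enum) of logging the error
// name rather than an opaque numeric code.
public class ErrorsSketch {

    enum Errors {
        NONE((short) 0),
        OFFSET_OUT_OF_RANGE((short) 1),
        CORRUPT_MESSAGE((short) 2);

        private final short code;
        Errors(short code) { this.code = code; }
        short code() { return code; }
    }

    public static void main(String[] args) {
        Errors error = Errors.OFFSET_OUT_OF_RANGE;
        // Before: the log line only carries a numeric code.
        System.out.println("error code: " + error.code());
        // After: name() is far more meaningful in logs. Enums also get a
        // name-based toString() for free, which may explain why Errors.java
        // has no explicit toString() override.
        System.out.println("error: " + error.name());
    }
}
```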
[jira] [Commented] (KAFKA-6535) Set default retention ms for Streams repartition topics to Long.MAX_VALUE
[ https://issues.apache.org/jira/browse/KAFKA-6535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16429723#comment-16429723 ] Khaireddine Rezgui commented on KAFKA-6535: --- Thank you, i got the permission, can you take a look in the kip, and return some feedback :) ? [https://cwiki.apache.org/confluence/display/KAFKA/KIP-284%3A+Set+default+retention+ms+for+Streams+repartition+topics+to+Long.MAX_VALUE] Thanks, > Set default retention ms for Streams repartition topics to Long.MAX_VALUE > - > > Key: KAFKA-6535 > URL: https://issues.apache.org/jira/browse/KAFKA-6535 > Project: Kafka > Issue Type: Improvement > Components: streams >Reporter: Guozhang Wang >Assignee: Khaireddine Rezgui >Priority: Major > Labels: needs-kip, newbie > > After KIP-220 / KIP-204, repartition topics in Streams are transient, so it > is better to set its default retention to infinity to allow any records be > pushed to it with old timestamps (think: bootstrapping, re-processing) and > just rely on the purging API to keeping its storage small. > More specifically, in {{RepartitionTopicConfig}} we have a few default > overrides for repartition topic configs, we should just add the override for > {{TopicConfig.RETENTION_MS_CONFIG}} to set it to Long.MAX_VALUE. This still > allows users to override themselves if they want via > {{StreamsConfig.TOPIC_PREFIX}}. We need to add unit test to verify this > update takes effect. > In addition to the code change, we also need to have doc changes in > streams/upgrade_guide.html specifying this default value change. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
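The user-override path the ticket mentions can be sketched without a Kafka dependency. A plain Properties object stands in for a real StreamsConfig here; the "topic." prefix is an assumption about the value behind StreamsConfig.TOPIC_PREFIX, used only to show the shape of the override.

```java
import java.util.Properties;

// A dependency-free sketch of overriding internal-topic retention via a
// prefixed config key, as {{StreamsConfig.TOPIC_PREFIX}} allows.
public class RetentionOverrideSketch {
    static final String TOPIC_PREFIX = "topic.";         // assumed prefix value
    static final String RETENTION_MS_CONFIG = "retention.ms";

    public static void main(String[] args) {
        Properties props = new Properties();
        // Proposed default for repartition topics: effectively infinite
        // retention, with the purging API keeping storage bounded.
        props.put(TOPIC_PREFIX + RETENTION_MS_CONFIG, Long.toString(Long.MAX_VALUE));
        System.out.println(props.getProperty("topic.retention.ms"));
    }
}
```

A user who wants finite retention would simply set the same prefixed key to a smaller value, which is why the KIP's default change remains overridable.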
[jira] [Commented] (KAFKA-6628) RocksDBSegmentedBytesStoreTest does not cover time window serdes
[ https://issues.apache.org/jira/browse/KAFKA-6628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16429721#comment-16429721 ]

ASF GitHub Bot commented on KAFKA-6628:
---------------------------------------

lijubjohn opened a new pull request #4836: KAFKA-6628: RocksDBSegmentedBytesStoreTest does not cover time window serdes
URL: https://github.com/apache/kafka/pull/4836

Updated the RocksDBSegmentedBytesStoreTest class to include time window serdes.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
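The kind of round-trip a time-window serde test exercises can be shown with a dependency-free sketch. Streams stores windowed keys as key bytes combined with the window start timestamp; the exact byte layout below is illustrative, not Kafka's precise format, and the helper names are hypothetical.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// An illustrative round-trip for a time-windowed key serialized as
// [key bytes][8-byte big-endian window start] — the general shape a
// time-window serde test would assert on.
public class TimeWindowSerdeSketch {

    static byte[] serialize(String key, long windowStartMs) {
        byte[] keyBytes = key.getBytes(StandardCharsets.UTF_8);
        return ByteBuffer.allocate(keyBytes.length + Long.BYTES)
                .put(keyBytes)
                .putLong(windowStartMs)
                .array();
    }

    static long windowStart(byte[] binary) {
        // The window start lives in the trailing 8 bytes.
        return ByteBuffer.wrap(binary).getLong(binary.length - Long.BYTES);
    }

    static String key(byte[] binary) {
        return new String(binary, 0, binary.length - Long.BYTES, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] bytes = serialize("a", 60_000L);
        System.out.println(key(bytes));         // a
        System.out.println(windowStart(bytes)); // 60000
    }
}
```

A test along these lines (put windowed keys into the store, fetch by time range, assert keys and window starts survive the round trip) is what the coverage gap in RocksDBSegmentedBytesStoreTest amounts to.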
[jira] [Resolved] (KAFKA-6688) The Trogdor coordinator should track task statuses
[ https://issues.apache.org/jira/browse/KAFKA-6688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Rajini Sivaram resolved KAFKA-6688.
-----------------------------------
       Resolution: Fixed
         Reviewer: Rajini Sivaram
    Fix Version/s: 1.2.0

> The Trogdor coordinator should track task statuses
> --------------------------------------------------
>
>                 Key: KAFKA-6688
>                 URL: https://issues.apache.org/jira/browse/KAFKA-6688
>             Project: Kafka
>          Issue Type: Improvement
>          Components: system tests
>            Reporter: Colin P. McCabe
>            Assignee: Colin P. McCabe
>            Priority: Major
>             Fix For: 1.2.0
>
> The Trogdor coordinator should track task statuses

--
This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (KAFKA-6688) The Trogdor coordinator should track task statuses
[ https://issues.apache.org/jira/browse/KAFKA-6688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16429678#comment-16429678 ]

ASF GitHub Bot commented on KAFKA-6688:
---------------------------------------

rajinisivaram closed pull request #4737: KAFKA-6688. The Trogdor coordinator should track task statuses
URL: https://github.com/apache/kafka/pull/4737

This is a PR merged from a forked repository. As GitHub hides the original diff on merge, it is displayed below for the sake of provenance. As this is a foreign pull request (from a fork), the diff is supplied below (as it won't show otherwise due to GitHub magic):

diff --git a/tools/src/main/java/org/apache/kafka/trogdor/agent/WorkerManager.java b/tools/src/main/java/org/apache/kafka/trogdor/agent/WorkerManager.java
index cda77738d8c..7c8de6d3f22 100644
--- a/tools/src/main/java/org/apache/kafka/trogdor/agent/WorkerManager.java
+++ b/tools/src/main/java/org/apache/kafka/trogdor/agent/WorkerManager.java
@@ -22,6 +22,7 @@
 import org.apache.kafka.common.internals.KafkaFutureImpl;
 import org.apache.kafka.common.utils.Scheduler;
 import org.apache.kafka.common.utils.Time;
+import org.apache.kafka.common.utils.Utils;
 import org.apache.kafka.trogdor.common.Platform;
 import org.apache.kafka.trogdor.common.ThreadUtils;
 import org.apache.kafka.trogdor.rest.WorkerDone;
@@ -29,6 +30,7 @@
 import org.apache.kafka.trogdor.rest.WorkerStarting;
 import org.apache.kafka.trogdor.rest.WorkerStopping;
 import org.apache.kafka.trogdor.rest.WorkerState;
+import org.apache.kafka.trogdor.task.AgentWorkerStatusTracker;
 import org.apache.kafka.trogdor.task.TaskSpec;
 import org.apache.kafka.trogdor.task.TaskWorker;
 import org.slf4j.Logger;
@@ -43,7 +45,6 @@
 import java.util.concurrent.ScheduledExecutorService;
 import java.util.concurrent.TimeUnit;
 import java.util.concurrent.atomic.AtomicBoolean;
-import java.util.concurrent.atomic.AtomicReference;

 public final class WorkerManager {
     private static final Logger log = LoggerFactory.getLogger(WorkerManager.class);
@@ -190,7 +191,7 @@ synchronized void waitForQuiescence() throws InterruptedException {
     /**
      * The worker status.
      */
-    private final AtomicReference status = new AtomicReference<>("");
+    private final AgentWorkerStatusTracker status = new AgentWorkerStatusTracker();

     /**
      * The time when this task was started.
@@ -293,6 +294,8 @@ public void createWorker(final String id, TaskSpec spec) throws Exception {
     haltFuture.thenApply(new KafkaFuture.BaseFunction() {
         @Override
         public Void apply(String errorString) {
+            if (errorString == null)
+                errorString = "";
             if (errorString.isEmpty()) {
                 log.info("{}: Worker {} is halting.", nodeName, id);
             } else {
@@ -306,8 +309,9 @@ public Void apply(String errorString) {
     try {
         worker.taskWorker.start(platform, worker.status, haltFuture);
     } catch (Exception e) {
+        log.info("{}: Worker {} start() exception", nodeName, id, e);
         stateChangeExecutor.submit(new HandleWorkerHalting(worker,
-            "worker.start() exception: " + e.getMessage(), true));
+            "worker.start() exception: " + Utils.stackTrace(e), true));
     }
     stateChangeExecutor.submit(new FinishCreatingWorker(worker));
 }
diff --git a/tools/src/main/java/org/apache/kafka/trogdor/coordinator/NodeManager.java b/tools/src/main/java/org/apache/kafka/trogdor/coordinator/NodeManager.java
index 0129007aa0d..91ef9c2928a 100644
--- a/tools/src/main/java/org/apache/kafka/trogdor/coordinator/NodeManager.java
+++ b/tools/src/main/java/org/apache/kafka/trogdor/coordinator/NodeManager.java
@@ -49,7 +49,6 @@
 import org.apache.kafka.trogdor.rest.AgentStatusResponse;
 import org.apache.kafka.trogdor.rest.CreateWorkerRequest;
 import org.apache.kafka.trogdor.rest.StopWorkerRequest;
-import org.apache.kafka.trogdor.rest.WorkerDone;
 import org.apache.kafka.trogdor.rest.WorkerReceiving;
 import org.apache.kafka.trogdor.rest.WorkerRunning;
 import org.apache.kafka.trogdor.rest.WorkerStarting;
@@ -192,6 +191,9 @@ public void run() {
     // agents going down?
     return;
 }
+if (log.isTraceEnabled()) {
+    log.trace("{}: got heartbeat status {}", node.name(), agentStatus);
+}
 // Identify workers which we think should be running, but which do not appear
 // in the agent's response. We need to send startWorker requests for these.
 for (Map.Entry entry : workers.entrySet()) {
}
}