[jira] [Updated] (KAFKA-8900) Stalled partition for a consumer group

2019-09-11 Thread Luke Stephenson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8900?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luke Stephenson updated KAFKA-8900:
---
Summary: Stalled partition for a consumer group  (was: Stalled partitions)

> Stalled partition for a consumer group
> --
>
> Key: KAFKA-8900
> URL: https://issues.apache.org/jira/browse/KAFKA-8900
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.1.1
>Reporter: Luke Stephenson
>Priority: Major
>
> I'm seeing behaviour where a Scala KafkaConsumer has stalled for 1 partition 
> for a topic.  All other partitions for that topic are successfully being 
> consumed.
> Restarting the consumer process does not resolve the issue.  The consumer is 
> using version 2.3.0 ("org.apache.kafka" % "kafka-clients" % "2.3.0").
> When the consumer starts, I see that it is assigned the partition.  However 
> it then logs:
> {code}
> [Consumer 
> clientId=kafka-bus-router-64c88855cf-hxck7.event-bus-router-consumer.1d1ed7ee-5038-4441-84eb-8080ac130e9a,
>  groupId=event-bus-router] Setting offset for partition 
> maxwell.transactions-22 to the committed offset 
> FetchPosition{offset=275413397, offsetEpoch=Optional[271], 
> currentLeader=LeaderAndEpoch{leader=:-1 (id: -1 rack: null), epoch=271}}
> {code}
> Note that the leader is logged as "-1".  If I search through my application 
> logs for the past couple of days, the only time I ever see this logged on the 
> consumer is for this partition.
> The Kafka broker is running version 2.1.1.  On the broker side the logs show:
> {code}
> {"timeMillis":1568087844876,"thread":"kafka-request-handler-1","level":"WARN","loggerName":"state.change.logger","message":"[Broker
>  id=5] Ignoring LeaderAndIsr request from controller 4 with correlation id 
> 15943 epoch 155 for partition maxwell.transactions-22 since its associated 
> leader epoch 270 is not higher than the current leader epoch 
> 270","endOfBatch":false,"loggerFqcn":"org.slf4j.impl.Log4jLoggerAdapter","threadId":72,"threadPriority":5}
> {"timeMillis":1568087844880,"thread":"kafka-request-handler-1","level":"INFO","loggerName":"kafka.server.ReplicaFetcherManager","message":"[ReplicaFetcherManager
>  on broker 5] Removed fetcher for partitions 
> Set(maxwell.transactions-22)","endOfBatch":false,"loggerFqcn":"org.slf4j.impl.Log4jLoggerAdapter","threadId":72,"threadPriority":5}
> {"timeMillis":1568087844880,"thread":"kafka-request-handler-1","level":"INFO","loggerName":"kafka.cluster.Partition","message":"[Partition
>  maxwell.transactions-22 broker=5] maxwell.transactions-22 starts at Leader 
> Epoch 271 from offset 275403423. Previous Leader Epoch was: 
> 270","endOfBatch":false,"loggerFqcn":"org.slf4j.impl.Log4jLoggerAdapter","threadId":72,"threadPriority":5}
> {"timeMillis":1568087844891,"thread":"kafka-request-handler-1","level":"INFO","loggerName":"state.change.logger","message":"[Broker
>  id=5] Skipped the become-leader state change after marking its partition as 
> leader with correlation id 15945 from controller 4 epoch 155 for partition 
> maxwell.transactions-22 (last update controller epoch 155) since it is 
> already the leader for the 
> partition.","endOfBatch":false,"loggerFqcn":"org.slf4j.impl.Log4jLoggerAdapter","threadId":72,"threadPriority":5}
> {code}
> As soon as I restart the broker which is the leader for that partition, the 
> messages flow through to the consumer.
> Given restarts of the consumer don't help, but restarting the broker allows 
> the stalled partition to resume, I'm inclined to think this is an issue with 
> the broker.  Please let me know if I can assist further with investigating or 
> resolving this.
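A quick way to cross-check what the brokers currently report as leader for the partition (a sketch; flag availability varies by version, and 2.1-era tooling may require --zookeeper instead of --bootstrap-server):

```
# Show leader/ISR per partition; compare partition 22's leader with the
# "-1" the consumer logged.
bin/kafka-topics.sh --bootstrap-server broker:9092 \
  --describe --topic maxwell.transactions
```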



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (KAFKA-8900) Stalled partitions

2019-09-11 Thread Luke Stephenson (Jira)
Luke Stephenson created KAFKA-8900:
--

 Summary: Stalled partitions
 Key: KAFKA-8900
 URL: https://issues.apache.org/jira/browse/KAFKA-8900
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 2.1.1
Reporter: Luke Stephenson


I'm seeing behaviour where a Scala KafkaConsumer has stalled for 1 partition 
for a topic.  All other partitions for that topic are successfully being 
consumed.

Restarting the consumer process does not resolve the issue.  The consumer is 
using version 2.3.0 ("org.apache.kafka" % "kafka-clients" % "2.3.0").

When the consumer starts, I see that it is assigned the partition.  However it 
then logs:
{code}
[Consumer 
clientId=kafka-bus-router-64c88855cf-hxck7.event-bus-router-consumer.1d1ed7ee-5038-4441-84eb-8080ac130e9a,
 groupId=event-bus-router] Setting offset for partition maxwell.transactions-22 
to the committed offset FetchPosition{offset=275413397, 
offsetEpoch=Optional[271], currentLeader=LeaderAndEpoch{leader=:-1 (id: -1 
rack: null), epoch=271}}
{code}

Note that the leader is logged as "-1".  If I search through my application 
logs for the past couple of days, the only time I ever see this logged on the 
consumer is for this partition.

The Kafka broker is running version 2.1.1.  On the broker side the logs show:
{code}
{"timeMillis":1568087844876,"thread":"kafka-request-handler-1","level":"WARN","loggerName":"state.change.logger","message":"[Broker
 id=5] Ignoring LeaderAndIsr request from controller 4 with correlation id 
15943 epoch 155 for partition maxwell.transactions-22 since its associated 
leader epoch 270 is not higher than the current leader epoch 
270","endOfBatch":false,"loggerFqcn":"org.slf4j.impl.Log4jLoggerAdapter","threadId":72,"threadPriority":5}
{"timeMillis":1568087844880,"thread":"kafka-request-handler-1","level":"INFO","loggerName":"kafka.server.ReplicaFetcherManager","message":"[ReplicaFetcherManager
 on broker 5] Removed fetcher for partitions 
Set(maxwell.transactions-22)","endOfBatch":false,"loggerFqcn":"org.slf4j.impl.Log4jLoggerAdapter","threadId":72,"threadPriority":5}
{"timeMillis":1568087844880,"thread":"kafka-request-handler-1","level":"INFO","loggerName":"kafka.cluster.Partition","message":"[Partition
 maxwell.transactions-22 broker=5] maxwell.transactions-22 starts at Leader 
Epoch 271 from offset 275403423. Previous Leader Epoch was: 
270","endOfBatch":false,"loggerFqcn":"org.slf4j.impl.Log4jLoggerAdapter","threadId":72,"threadPriority":5}
{"timeMillis":1568087844891,"thread":"kafka-request-handler-1","level":"INFO","loggerName":"state.change.logger","message":"[Broker
 id=5] Skipped the become-leader state change after marking its partition as 
leader with correlation id 15945 from controller 4 epoch 155 for partition 
maxwell.transactions-22 (last update controller epoch 155) since it is already 
the leader for the 
partition.","endOfBatch":false,"loggerFqcn":"org.slf4j.impl.Log4jLoggerAdapter","threadId":72,"threadPriority":5}
{code}

As soon as I restart the broker which is the leader for that partition, the 
messages flow through to the consumer.

Given restarts of the consumer don't help, but restarting the broker allows the 
stalled partition to resume, I'm inclined to think this is an issue with the 
broker.  Please let me know if I can assist further with investigating or 
resolving this.





[jira] [Commented] (KAFKA-7658) Add KStream#toTable to the Streams DSL

2019-09-11 Thread Matthias J. Sax (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-7658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928081#comment-16928081
 ] 

Matthias J. Sax commented on KAFKA-7658:


What is your wiki ID so we can give you write access? If you don't have an 
account, just create one. (The Jira and wiki accounts are independent of each 
other.)

> Add KStream#toTable to the Streams DSL
> --
>
> Key: KAFKA-7658
> URL: https://issues.apache.org/jira/browse/KAFKA-7658
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Guozhang Wang
>Priority: Major
>  Labels: needs-kip, newbie
>
> We'd like to add a new API to the KStream object of the Streams DSL:
> {code}
> KTable KStream.toTable()
> KTable KStream.toTable(Materialized)
> {code}
> The function re-interprets the event stream {{KStream}} as a changelog stream 
> {{KTable}}. Note that this should NOT be treated as syntactic sugar for a 
> dummy {{KStream.reduce()}} function that always takes the new value, as it 
> differs in the following ways: 
> 1) An aggregation operator of {{KStream}} aggregates an event stream into an 
> evolving table and drops null-values from the input event stream; whereas a 
> {{toTable}} function completely changes the semantics of the input stream 
> from event stream to changelog stream: null-values are still serialized, and 
> if the resulting bytes are also null they are interpreted as "deletes" to the 
> materialized KTable (i.e. tombstones in the changelog stream).
> 2) The aggregation result {{KTable}} is always materialized, whereas the 
> KTable resulting from {{toTable}} may only be materialized if the overload 
> with Materialized is used (and, if optimization is turned on, it may still be 
> only logically materialized if the queryable name is not set).
> Therefore, users who want to turn an event stream into a changelog stream 
> (whatever the reason they cannot read the source topic as a changelog stream 
> {{KTable}} in the first place) should use this new API instead of the dummy 
> reduction function.
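The null-value difference in point (1) can be sketched with a toy in-memory model (this is NOT Kafka Streams code; Record, reduceLatest, and toTable are illustrative names only):

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.Map;

// Toy model of the two semantics for a keyed stream whose values may be null.
public class ToTableSketch {

    record Record(String key, String value) {}

    // KStream-aggregation-like semantics: null-valued input records are
    // dropped, so a null never reaches the evolving table.
    static Map<String, String> reduceLatest(Iterable<Record> stream) {
        Map<String, String> table = new LinkedHashMap<>();
        for (Record r : stream) {
            if (r.value() == null) continue; // null dropped from event stream
            table.put(r.key(), r.value());
        }
        return table;
    }

    // toTable-like semantics: the stream is re-interpreted as a changelog,
    // so a null value acts as a tombstone that deletes the key.
    static Map<String, String> toTable(Iterable<Record> stream) {
        Map<String, String> table = new LinkedHashMap<>();
        for (Record r : stream) {
            if (r.value() == null) table.remove(r.key()); // tombstone
            else table.put(r.key(), r.value());
        }
        return table;
    }

    public static void main(String[] args) {
        var stream = Arrays.asList(new Record("a", "1"), new Record("a", null));
        System.out.println(reduceLatest(stream)); // prints {a=1}: null was dropped
        System.out.println(toTable(stream));      // prints {}: null deleted "a"
    }
}
```

The same input yields different tables, which is why this is not mere sugar for a pick-the-latest reduce.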





[jira] [Commented] (KAFKA-8885) The Kafka Protocol should Support Optional Tagged Fields

2019-09-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928024#comment-16928024
 ] 

ASF GitHub Bot commented on KAFKA-8885:
---

cmccabe commented on pull request #7325: KAFKA-8885: The Kafka Protocol should 
Support Optional Tagged Fields
URL: https://github.com/apache/kafka/pull/7325
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> The Kafka Protocol should Support Optional Tagged Fields
> 
>
> Key: KAFKA-8885
> URL: https://issues.apache.org/jira/browse/KAFKA-8885
> Project: Kafka
>  Issue Type: New Feature
>Reporter: Colin P. McCabe
>Assignee: Colin P. McCabe
>Priority: Major
>
> Implement KIP-482: The Kafka Protocol should Support Optional Tagged Fields
> See 
> [KIP-482|https://cwiki.apache.org/confluence/display/KAFKA/KIP-482%3A+The+Kafka+Protocol+should+Support+Optional+Tagged+Fields]





[jira] [Updated] (KAFKA-8875) CreateTopic API should check topic existence before replication factor

2019-09-11 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-8875:
---
Fix Version/s: 2.3.1

> CreateTopic API should check topic existence before replication factor
> --
>
> Key: KAFKA-8875
> URL: https://issues.apache.org/jira/browse/KAFKA-8875
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: huxihx
>Priority: Major
> Fix For: 2.4.0, 2.3.1
>
>
> If you try to create a topic and the replication factor cannot be satisfied, 
> Kafka will return `INVALID_REPLICATION_FACTOR`. If the topic already exists, 
> we should probably return `TOPIC_ALREADY_EXISTS` instead. You won't see this 
> problem if using TopicCommand because we check existence prior to creating 
> the topic.
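The intended ordering can be sketched with a toy validation (not the actual broker code; the names below are illustrative):

```java
import java.util.Set;

// Toy model of CreateTopic validation that checks existence before the
// replication factor, per the ticket's proposal.
public class CreateTopicCheckSketch {

    enum Error { NONE, TOPIC_ALREADY_EXISTS, INVALID_REPLICATION_FACTOR }

    static Error validate(Set<String> existingTopics, int liveBrokers,
                          String topic, int replicationFactor) {
        if (existingTopics.contains(topic)) {
            return Error.TOPIC_ALREADY_EXISTS;      // existence checked first
        }
        if (replicationFactor <= 0 || replicationFactor > liveBrokers) {
            return Error.INVALID_REPLICATION_FACTOR;
        }
        return Error.NONE;
    }

    public static void main(String[] args) {
        // Existing topic AND unsatisfiable RF: existence wins under this ordering.
        System.out.println(validate(Set.of("events"), 3, "events", 5));
        // prints TOPIC_ALREADY_EXISTS
    }
}
```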





[jira] [Commented] (KAFKA-8899) Optimize Partition.maybeIncrementLeaderHW

2019-09-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8899?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928013#comment-16928013
 ] 

ASF GitHub Bot commented on KAFKA-8899:
---

lbradstreet commented on pull request #7324: [WIP] KAFKA-8899: avoid 
unnecessary collection generation in Partition.maybeIncrementLeaderHW
URL: https://github.com/apache/kafka/pull/7324
 
 
 



> Optimize Partition.maybeIncrementLeaderHW
> -
>
> Key: KAFKA-8899
> URL: https://issues.apache.org/jira/browse/KAFKA-8899
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Affects Versions: 2.3.0, 2.2.1
>Reporter: Lucas Bradstreet
>Priority: Major
>
> Partition.maybeIncrementLeaderHW is in the hot path for 
> ReplicaManager.updateFollowerFetchState. When replicating between brokers 
> with high partition counts, maybeIncrementLeaderHW becomes expensive, with 
> much of the time going to calling Partition.remoteReplicas which performs a 
> toSet conversion. maybeIncrementLeaderHW should avoid generating any 
> intermediate collections when calculating the new HWM.
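The shape of the optimization can be sketched outside Kafka (toy code, not the real Partition.maybeIncrementLeaderHW, which tracks replica log-end-offsets):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Toy model: the high watermark is the minimum log-end-offset (LEO) across
// the replicas considered.
public class HwmSketch {

    // Allocating style: builds an intermediate Set per call, analogous to
    // remoteReplicas performing a toSet conversion on the hot path.
    static long newHwmViaSet(List<Long> logEndOffsets) {
        Set<Long> copy = new HashSet<>(logEndOffsets); // per-call garbage
        long min = Long.MAX_VALUE;
        for (long leo : copy) min = Math.min(min, leo);
        return min;
    }

    // Allocation-free style: iterate the existing collection directly and
    // keep a running minimum, generating no intermediate collections.
    static long newHwmLoop(List<Long> logEndOffsets) {
        long min = Long.MAX_VALUE;
        for (int i = 0; i < logEndOffsets.size(); i++) {
            min = Math.min(min, logEndOffsets.get(i));
        }
        return min;
    }

    public static void main(String[] args) {
        List<Long> leos = List.of(275413397L, 275403423L, 275413400L);
        System.out.println(newHwmLoop(leos)); // prints 275403423, the minimum LEO
    }
}
```

On a path hit once per follower fetch per partition, the second form avoids per-call allocation and the associated GC pressure.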





[jira] [Created] (KAFKA-8899) Optimize Partition.maybeIncrementLeaderHW

2019-09-11 Thread Lucas Bradstreet (Jira)
Lucas Bradstreet created KAFKA-8899:
---

 Summary: Optimize Partition.maybeIncrementLeaderHW
 Key: KAFKA-8899
 URL: https://issues.apache.org/jira/browse/KAFKA-8899
 Project: Kafka
  Issue Type: Task
  Components: core
Affects Versions: 2.2.1, 2.3.0
Reporter: Lucas Bradstreet


Partition.maybeIncrementLeaderHW is in the hot path for 
ReplicaManager.updateFollowerFetchState. When replicating between brokers with 
high partition counts, maybeIncrementLeaderHW becomes expensive, with much of 
the time going to calling Partition.remoteReplicas which performs a toSet 
conversion. maybeIncrementLeaderHW should avoid generating any intermediate 
collections when calculating the new HWM.





[jira] [Resolved] (KAFKA-8875) CreateTopic API should check topic existence before replication factor

2019-09-11 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-8875.

Fix Version/s: 2.4.0
   Resolution: Fixed

> CreateTopic API should check topic existence before replication factor
> --
>
> Key: KAFKA-8875
> URL: https://issues.apache.org/jira/browse/KAFKA-8875
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: huxihx
>Priority: Major
> Fix For: 2.4.0
>
>
> If you try to create a topic and the replication factor cannot be satisfied, 
> Kafka will return `INVALID_REPLICATION_FACTOR`. If the topic already exists, 
> we should probably return `TOPIC_ALREADY_EXISTS` instead. You won't see this 
> problem if using TopicCommand because we check existence prior to creating 
> the topic.





[jira] [Commented] (KAFKA-8875) CreateTopic API should check topic existence before replication factor

2019-09-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16928010#comment-16928010
 ] 

ASF GitHub Bot commented on KAFKA-8875:
---

hachikuji commented on pull request #7298: KAFKA-8875:CreateTopic API should 
check topic existence before replic…
URL: https://github.com/apache/kafka/pull/7298
 
 
   
 



> CreateTopic API should check topic existence before replication factor
> --
>
> Key: KAFKA-8875
> URL: https://issues.apache.org/jira/browse/KAFKA-8875
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: huxihx
>Priority: Major
>
> If you try to create a topic and the replication factor cannot be satisfied, 
> Kafka will return `INVALID_REPLICATION_FACTOR`. If the topic already exists, 
> we should probably return `TOPIC_ALREADY_EXISTS` instead. You won't see this 
> problem if using TopicCommand because we check existence prior to creating 
> the topic.





[jira] [Commented] (KAFKA-7500) MirrorMaker 2.0 (KIP-382)

2019-09-11 Thread Ryanne Dolan (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-7500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16927978#comment-16927978
 ] 

Ryanne Dolan commented on KAFKA-7500:
-

[~qihong] thanks for the questions.

> But couldn't find any consumer groups on dr2 related to consumer group 
> test1grp.

MM2 does not create consumer groups for you or attempt to keep them in sync -- 
it only produces checkpoints (dr1.checkpoints.internal) that encode the state 
of remote consumer groups. You must then do something with these checkpoints, 
depending on your use-case. The RemoteClusterUtils class will read checkpoints 
for you, which you can then use in interesting ways.

For example, you can use RemoteClusterUtils.translateOffsets() and the 
kafka-consumer-groups.sh --reset-offsets tool to create a consumer group in dr2 
based on MM2's checkpoints from dr1. Or, you can use RemoteClusterUtils in your 
Consumer code to failover/failback automatically. Both require a bit of code, 
but nothing too sophisticated.
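As a concrete sketch of the reset step (the group name, topic-partition, and offset below are placeholders; the real offset would come from RemoteClusterUtils.translateOffsets):

```
# Hypothetical example: seed group "test1grp" on the target cluster at a
# translated offset for partition 0 of the remote topic (12345 is a placeholder).
bin/kafka-consumer-groups.sh --bootstrap-server dr2-broker:9092 \
  --group test1grp \
  --topic dr1.test1:0 \
  --reset-offsets --to-offset 12345 \
  --execute
```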

Looking ahead a bit, this will be a ton easier when KIP-396 is merged 
(KAFKA-7689). Once consumer offsets can be controlled from the Admin API, it 
will be possible to consume checkpoints and update offsets directly. That will 
enable the behavior you were expecting.

> By the way, how to set up and run this in a Kafka connect cluster?

MM2's Connectors are just plain-old Connectors. You can run them with 
connect-standalone.sh or connect-distributed.sh as with any other Connector. To 
do so, you need a worker config and a connector config as usual. The worker 
config must include whatever client settings are required to connect to the 
_target_ cluster (i.e. bootstrap servers, security settings), since the Worker 
is what is actually producing downstream records. The connector configs, on the 
other hand, need connection settings for _both_ source and target clusters 
(e.g. source.cluster.bootstrap.servers, target.cluster.bootstrap.servers). The 
Connectors use both source and target clusters when syncing topic configuration 
etc.

There is an example Connector configuration here: 
https://github.com/apache/kafka/pull/6295#issuecomment-522074048

> MirrorMaker 2.0 (KIP-382)
> -
>
> Key: KAFKA-7500
> URL: https://issues.apache.org/jira/browse/KAFKA-7500
> Project: Kafka
>  Issue Type: New Feature
>  Components: KafkaConnect, mirrormaker
>Affects Versions: 2.4.0
>Reporter: Ryanne Dolan
>Assignee: Manikumar
>Priority: Major
>  Labels: pull-request-available, ready-to-commit
> Fix For: 2.4.0
>
> Attachments: Active-Active XDCR setup.png
>
>
> Implement a drop-in replacement for MirrorMaker leveraging the Connect 
> framework.
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-382%3A+MirrorMaker+2.0]
> [https://github.com/apache/kafka/pull/6295]





[jira] [Updated] (KAFKA-8869) Map taskConfigs in KafkaConfigBackingStore grows monotonically despite of task removals

2019-09-11 Thread Konstantine Karantasis (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantine Karantasis updated KAFKA-8869:
--
Priority: Minor  (was: Major)

> Map taskConfigs in KafkaConfigBackingStore grows monotonically despite of 
> task removals
> ---
>
> Key: KAFKA-8869
> URL: https://issues.apache.org/jira/browse/KAFKA-8869
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.3.0
>Reporter: Konstantine Karantasis
>Assignee: Konstantine Karantasis
>Priority: Minor
> Fix For: 2.3.1
>
>
> Investigation of https://issues.apache.org/jira/browse/KAFKA-8676 revealed 
> another issue: a map in {{KafkaConfigBackingStore}} keeps growing despite 
> connectors and tasks eventually getting removed.
> This bug does not directly affect rebalancing protocols, but it'd be good to 
> resolve it and use the map in a way similar to how {{connectorConfigs}} is 
> used. 





[jira] [Commented] (KAFKA-8859) Refactor Cache-level Streams Metrics

2019-09-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8859?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16927962#comment-16927962
 ] 

ASF GitHub Bot commented on KAFKA-8859:
---

cadonna commented on pull request #7323: KAFKA-8859: Expose built-in streams 
metrics version in `StreamsMetricsImpl`
URL: https://github.com/apache/kafka/pull/7323
 
 
   The streams config `built.in.metrics.version` is needed to add metrics in
   a backward-compatible way. However, a streams config is not available to
   check `built.in.metrics.version` in every location where metrics are added.
   Thus, the config value needs to be exposed through the `StreamsMetricsImpl`
   object.
   
 



> Refactor Cache-level Streams Metrics
> 
>
> Key: KAFKA-8859
> URL: https://issues.apache.org/jira/browse/KAFKA-8859
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Bruno Cadonna
>Assignee: Bruno Cadonna
>Priority: Major
>
> Refactoring of cache-level Streams metrics according KIP-444.





[jira] [Commented] (KAFKA-8856) Add Streams Config for Backward-compatible Metrics

2019-09-11 Thread Bruno Cadonna (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16927945#comment-16927945
 ] 

Bruno Cadonna commented on KAFKA-8856:
--

The version scheme was changed to:

name: built.in.metrics.version
type: Enum
values: {"0.10.0-2.3", "latest"}
default: "latest"
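If that lands as described, opting into the old names would presumably look like this in a streams properties file (a sketch based on the values above):

```
# Keep the pre-KIP-444 metric names during the migration grace period
built.in.metrics.version=0.10.0-2.3
```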

> Add Streams Config for Backward-compatible Metrics
> --
>
> Key: KAFKA-8856
> URL: https://issues.apache.org/jira/browse/KAFKA-8856
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Bruno Cadonna
>Assignee: Bruno Cadonna
>Priority: Major
>
> With KIP-444 the tag names and names of streams metrics change. To give 
> users a grace period for updating their corresponding monitoring / alerting 
> ecosystems, a config shall be added that specifies which version of the 
> metric names will be exposed.
> The definition of the new config is:
> name: built.in.metrics.version
> type: Enum
> values: {"0.10.0", "0.10.1", ... "2.3", "2.4"}
> default: "2.4" 





[jira] [Resolved] (KAFKA-8856) Add Streams Config for Backward-compatible Metrics

2019-09-11 Thread Bruno Cadonna (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno Cadonna resolved KAFKA-8856.
--
Resolution: Fixed

> Add Streams Config for Backward-compatible Metrics
> --
>
> Key: KAFKA-8856
> URL: https://issues.apache.org/jira/browse/KAFKA-8856
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Bruno Cadonna
>Assignee: Bruno Cadonna
>Priority: Major
>
> With KIP-444 the tag names and names of streams metrics change. To give 
> users a grace period for updating their corresponding monitoring / alerting 
> ecosystems, a config shall be added that specifies which version of the 
> metric names will be exposed.
> The definition of the new config is:
> name: built.in.metrics.version
> type: Enum
> values: {"0.10.0", "0.10.1", ... "2.3", "2.4"}
> default: "2.4" 





[jira] [Comment Edited] (KAFKA-7658) Add KStream#toTable to the Streams DSL

2019-09-11 Thread Aishwarya Pradeep Kumar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-7658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16927881#comment-16927881
 ] 

Aishwarya Pradeep Kumar edited comment on KAFKA-7658 at 9/11/19 6:18 PM:
-

I do not have access to create a KIP, but I'd be happy to follow up on the KIP 
and make any code changes necessary (if the proposal gets accepted).

Overall I think it would make the DSL consistent.


was (Author: ash26389):
I do not have access to create a KIP, but i'd be happy to follow on the KIP and 
make any code changes necessary (if the proposal get accepted)

> Add KStream#toTable to the Streams DSL
> --
>
> Key: KAFKA-7658
> URL: https://issues.apache.org/jira/browse/KAFKA-7658
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Guozhang Wang
>Priority: Major
>  Labels: needs-kip, newbie
>
> We'd like to add a new API to the KStream object of the Streams DSL:
> {code}
> KTable KStream.toTable()
> KTable KStream.toTable(Materialized)
> {code}
> The function re-interprets the event stream {{KStream}} as a changelog stream 
> {{KTable}}. Note that this should NOT be treated as syntactic sugar for a 
> dummy {{KStream.reduce()}} function that always takes the new value, as it 
> differs in the following ways: 
> 1) An aggregation operator of {{KStream}} aggregates an event stream into an 
> evolving table and drops null-values from the input event stream; whereas a 
> {{toTable}} function completely changes the semantics of the input stream 
> from event stream to changelog stream: null-values are still serialized, and 
> if the resulting bytes are also null they are interpreted as "deletes" to the 
> materialized KTable (i.e. tombstones in the changelog stream).
> 2) The aggregation result {{KTable}} is always materialized, whereas the 
> KTable resulting from {{toTable}} may only be materialized if the overload 
> with Materialized is used (and, if optimization is turned on, it may still be 
> only logically materialized if the queryable name is not set).
> Therefore, users who want to turn an event stream into a changelog stream 
> (whatever the reason they cannot read the source topic as a changelog stream 
> {{KTable}} in the first place) should use this new API instead of the dummy 
> reduction function.





[jira] [Commented] (KAFKA-7658) Add KStream#toTable to the Streams DSL

2019-09-11 Thread Aishwarya Pradeep Kumar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-7658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16927881#comment-16927881
 ] 

Aishwarya Pradeep Kumar commented on KAFKA-7658:


I do not have access to create a KIP, but I'd be happy to follow up on the KIP 
and make any code changes necessary (if the proposal gets accepted).

> Add KStream#toTable to the Streams DSL
> --
>
> Key: KAFKA-7658
> URL: https://issues.apache.org/jira/browse/KAFKA-7658
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Guozhang Wang
>Priority: Major
>  Labels: needs-kip, newbie
>
> We'd like to add a new API to the KStream object of the Streams DSL:
> {code}
> KTable KStream.toTable()
> KTable KStream.toTable(Materialized)
> {code}
> The function re-interprets the event stream {{KStream}} as a changelog stream 
> {{KTable}}. Note that this should NOT be treated as syntactic sugar for a 
> dummy {{KStream.reduce()}} function that always takes the new value, as it 
> differs in the following ways: 
> 1) An aggregation operator of {{KStream}} aggregates an event stream into an 
> evolving table and drops null-values from the input event stream; whereas a 
> {{toTable}} function completely changes the semantics of the input stream 
> from event stream to changelog stream: null-values are still serialized, and 
> if the resulting bytes are also null they are interpreted as "deletes" to the 
> materialized KTable (i.e. tombstones in the changelog stream).
> 2) The aggregation result {{KTable}} is always materialized, whereas the 
> KTable resulting from {{toTable}} may only be materialized if the overload 
> with Materialized is used (and, if optimization is turned on, it may still be 
> only logically materialized if the queryable name is not set).
> Therefore, users who want to turn an event stream into a changelog stream 
> (whatever the reason they cannot read the source topic as a changelog stream 
> {{KTable}} in the first place) should use this new API instead of the dummy 
> reduction function.





[jira] [Comment Edited] (KAFKA-7500) MirrorMaker 2.0 (KIP-382)

2019-09-11 Thread Qihong Chen (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-7500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16927863#comment-16927863
 ] 

Qihong Chen edited comment on KAFKA-7500 at 9/11/19 6:13 PM:
-

[~ryannedolan] Thanks for all your efforts on MM2. Appreciate your input on the 
following questions.

I'm trying to replicate topics from one cluster to another, including topic data 
and the related consumer offsets. But I only see that the topic data was 
replicated, not the consumer offsets.

Here's my mm2.properties
{code:java}
clusters = dr1, dr2

dr1.bootstrap.servers = 10.1.0.4:9092,10.1.0.5:9092,10.1.0.6:9092
dr2.bootstrap.servers = 10.2.0.4:9092,10.2.0.5:9092,10.2.0.6:9092

# only allow replication dr1 -> dr2
dr1->dr2.enabled = true
dr1->dr2.topics = test.*
dr1->dr2.groups = test.*
dr1->dr2.emit.heartbeats.enabled = false

dr2->dr1.enabled = false
dr2->dr1.emit.heartbeats.enabled = false
{code}
Here's how I started MM2 cluster (dr2 as the nearby cluster)
{code:java}
nohup bin/connect-mirror-maker.sh mm2.properties --clusters dr2 > mm2.log 2>&1 &
{code}
On dr1, there's topic *test1*, and consumer group *test1grp* for topic test1.

On dr2, I found the following topics
{code:java}
__consumer_offsets
dr1.checkpoints.internal
dr1.test1
heartbeats
mm2-configs.dr1.internal
mm2-offsets.dr1.internal
mm2-status.dr1.internal
 {code}
But couldn't find any consumer groups on dr2 related to consumer group 
*test1grp*.

Could you please let me know in detail how to migrate consumer group *test1grp* 
from dr1 to dr2, i.e. what command(s) need to be run to set up the offsets for 
*test1grp* on dr2 before consuming topic *dr1.test1*?

 

By the way, how do I set this up and run it in a Kafka Connect cluster? I.e., 
how do I set up MirrorSourceConnector and MirrorCheckpointConnector in a 
Connect cluster? Is there documentation about this?
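For the Connect-cluster question, one way this is typically wired up (a hedged sketch based on KIP-382's connector names; the exact property set should be checked against the KIP docs, and the name and servers below are illustrative) is to POST a MirrorSourceConnector config to the Connect REST API:

```json
{
  "name": "mm2-source-dr1-to-dr2",
  "config": {
    "connector.class": "org.apache.kafka.connect.mirror.MirrorSourceConnector",
    "source.cluster.alias": "dr1",
    "target.cluster.alias": "dr2",
    "source.cluster.bootstrap.servers": "10.1.0.4:9092,10.1.0.5:9092,10.1.0.6:9092",
    "topics": "test.*"
  }
}
```

A MirrorCheckpointConnector would presumably be configured the same way, with its own {{connector.class}} and a {{groups}} filter.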
  



> MirrorMaker 2.0 (KIP-382)
> -
>
> Key: KAFKA-7500
> URL: https://issues.apache.org/jira/browse/KAFKA-7500
> Project: Kafka
>  Issue Type: New Feature
>  Components: KafkaConnect, mirrormaker
>Affects Versions: 2.4.0
>Reporter: Ryanne Dolan
>Assignee: Manikumar
>Priority: Major
>  Labels: pull-request-available, ready-to-commit
> Fix For: 2.4.0
>
> Attachments: Active-Active XDCR setup.png
>
>
> Implement a drop-in replacement for MirrorMaker leveraging the Connect 
> framework.
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-382%3A+MirrorMaker+2.0]
> [https://github.com/apache/kafka/pull/6295]





[jira] [Commented] (KAFKA-8856) Add Streams Config for Backward-compatible Metrics

2019-09-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16927867#comment-16927867
 ] 

ASF GitHub Bot commented on KAFKA-8856:
---

guozhangwang commented on pull request #7279: KAFKA-8856: Add Streams config 
for backward-compatible metrics
URL: https://github.com/apache/kafka/pull/7279
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add Streams Config for Backward-compatible Metrics
> --
>
> Key: KAFKA-8856
> URL: https://issues.apache.org/jira/browse/KAFKA-8856
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Bruno Cadonna
>Assignee: Bruno Cadonna
>Priority: Major
>
> With KIP-444 the tag names and names of streams metrics change. To give users 
> a grace period for updating their corresponding monitoring/alerting 
> ecosystems, a config shall be added that specifies which version of the 
> metrics names will be exposed.
> The definition of the new config is:
> name: built.in.metrics.version
> type: Enum
> values: {"0.10.0", "0.10.1", ... "2.3", "2.4"}
> default: "2.4" 
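Once this lands, opting into the old names during the grace period would presumably look like the following properties fragment (hypothetical usage of the proposed key; the accepted values are those listed above):

```properties
# keep pre-KIP-444 metric names while dashboards/alerts are migrated
built.in.metrics.version=2.3
```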





[jira] [Resolved] (KAFKA-8886) Make Authorizer create/delete methods asynchronous

2019-09-11 Thread Rajini Sivaram (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-8886.
---
  Reviewer: Manikumar
Resolution: Fixed

> Make Authorizer create/delete methods asynchronous
> --
>
> Key: KAFKA-8886
> URL: https://issues.apache.org/jira/browse/KAFKA-8886
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Major
> Fix For: 2.4.0
>
>
> As discussed on the mailing list, createAcls and deleteAcls should be 
> asynchronous to avoid blocking request threads when updates are made to 
> non-ZK based stores which may block for potentially long durations.
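The non-blocking shape being described can be sketched as follows (hypothetical names, not the actual Authorizer interface from KIP-504): the caller receives a CompletionStage immediately, and the potentially slow store write runs on a separate pool, so request threads never block.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncAclStore {
    // Dedicated pool for slow, non-ZK backed ACL writes, so that broker
    // request threads are never blocked waiting on the store.
    private final ExecutorService storePool = Executors.newSingleThreadExecutor();

    public CompletionStage<Boolean> createAcl(String aclBinding) {
        return CompletableFuture.supplyAsync(() -> {
            // a potentially long-running write to an external store would go here
            return true;
        }, storePool);
    }

    public void shutdown() {
        storePool.shutdown();
    }
}
```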





[jira] [Commented] (KAFKA-8888) Possible to get null IQ value if reading from wrong instance

2019-09-11 Thread Guozhang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16927855#comment-16927855
 ] 

Guozhang Wang commented on KAFKA-:
--

Just to clarify, the common usage pattern is

{code}
Store store = streams.store(...);
store.get(...)
{code}

In the first line, we already check whether any of the currently assigned tasks 
across all threads hosts a store with the given name, and throw 
TaskMigratedException where appropriate; but that check passes as long as at 
least one task containing the corresponding store is present in any thread.

So in the `get` call, we should probably do another check similar to 
`metadataForKey` and throw TaskMigratedException as well.
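The extra check can be sketched as follows (hypothetical names; the real fix would live inside the Streams store facade and throw TaskMigratedException): before serving get(key), verify that the key's partition is actually owned by this instance instead of silently returning null.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class GuardedStore {
    private final Map<String, String> local = new HashMap<>();
    private final Set<Integer> assignedPartitions;
    private final int numPartitions;

    public GuardedStore(Set<Integer> assignedPartitions, int numPartitions) {
        this.assignedPartitions = assignedPartitions;
        this.numPartitions = numPartitions;
    }

    public void put(String key, String value) {
        local.put(key, value);
    }

    public String get(String key) {
        // Same partition mapping that metadataForKey would consult: if the
        // key's partition is not assigned here, the caller is on the wrong
        // instance, so fail loudly instead of returning null.
        int partition = Math.abs(key.hashCode()) % numPartitions;
        if (!assignedPartitions.contains(partition)) {
            throw new IllegalStateException(
                "key maps to partition " + partition + ", not owned by this instance");
        }
        return local.get(key);  // may still be null if the key is genuinely absent
    }
}
```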


> Possible to get null IQ value if reading from wrong instance
> 
>
> Key: KAFKA-
> URL: https://issues.apache.org/jira/browse/KAFKA-
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Chris Pettitt
>Priority: Minor
>
> Currently if you try to read a key from the wrong instance (i.e. the 
> partition that the key is mapped to is owned by a different instance) you get 
> null and no indication you're reading from the wrong instance. It would be 
> nice to get some indication that the instance is the wrong instance and not 
> that the value for the key is null.





[jira] [Commented] (KAFKA-8898) if there is no message for poll, kafka consumer also apply memory

2019-09-11 Thread Rachana Prajapati (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16927823#comment-16927823
 ] 

Rachana Prajapati commented on KAFKA-8898:
--

[~linking12] I am new to Kafka and want to start contributing. Could you please 
point me to the relevant GitHub code and elaborate on the issue? Thanks.

> if there is no message for poll, kafka consumer also apply memory
> -
>
> Key: KAFKA-8898
> URL: https://issues.apache.org/jira/browse/KAFKA-8898
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 2.1.1
>Reporter: linking12
>Priority: Blocker
>  Labels: performance
>
> When polling returns no records, the consumer still allocates memory: 
> {{fetched = new HashMap<>()}} allocates about 1000 bytes on the heap even 
> though there is no message.
> Creating {{fetched = new HashMap<>()}} only when records actually exist would 
> avoid this allocation.
>  
> {code:java}
> public Map<TopicPartition, List<ConsumerRecord<K, V>>> fetchedRecords() {
>     Map<TopicPartition, List<ConsumerRecord<K, V>>> fetched = new HashMap<>();
>     int recordsRemaining = maxPollRecords;
> {code}
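The allocation the reporter objects to could be avoided by returning a shared empty map when nothing was fetched; a minimal sketch with simplified types (not the actual Fetcher code):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class LazyFetch {
    // Only allocate the result map when there is actually something to return;
    // an empty poll hands back the shared immutable Collections.emptyMap().
    public static Map<String, List<Integer>> fetchedRecords(List<Integer> pending) {
        if (pending == null || pending.isEmpty()) {
            return Collections.emptyMap();  // no per-poll allocation
        }
        Map<String, List<Integer>> fetched = new HashMap<>();
        fetched.put("partition-0", pending);
        return fetched;
    }
}
```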





[jira] [Commented] (KAFKA-8882) It's not possible to restart Kafka Streams using StateListener

2019-09-11 Thread John Roesler (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16927791#comment-16927791
 ] 

John Roesler commented on KAFKA-8882:
-

Hey [~cjub],

Thanks for the report. This indeed doesn't sound right to me. Can you check the 
logs and see what is reported for the state transitions? I.e., from this log 
line: `log.info("State transition from {} to {}", oldState, newState)`.

Also, can you check the logs to see if you fall into this case in close():

{noformat}
if (!setState(State.PENDING_SHUTDOWN)) {
    // if transition failed, it means it was either in PENDING_SHUTDOWN
    // or NOT_RUNNING already; just check that all threads have been stopped
    log.info("Already in the pending shutdown state, wait to complete shutdown");
{noformat}

One side note: I would clean up _after_ Streams is no longer running, instead 
of just before close.

Note: you could actually just wait until close() returns, since it should never 
return until Streams is completely stopped:

{noformat}
// this is the end of the close() method
if (waitOnState(State.NOT_RUNNING, timeoutMs)) {
    log.info("Streams client stopped completely");
    return true;
} else {
    log.info("Streams client cannot stop completely within the timeout");
    return false;
}
{noformat}

Regards,
-John
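One way to restart from a listener without blocking the callback thread (a sketch with stand-in types, not the Streams API) is to hand the close-then-start sequence to a dedicated thread:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class RestartOffThread {
    // Stand-in for the KafkaStreams lifecycle (an assumption for this sketch).
    public interface Lifecycle {
        void close();  // blocks until fully stopped
        void start();
    }

    private final ExecutorService restarter = Executors.newSingleThreadExecutor();

    // Called from the state listener; returns immediately so the listener
    // thread is free to finish the shutdown transition.
    public Future<?> scheduleRestart(Lifecycle streams) {
        return restarter.submit(() -> {
            streams.close();
            streams.start();
        });
    }

    public void shutdown() {
        restarter.shutdown();
    }
}
```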


> It's not possible to restart Kafka Streams using StateListener
> --
>
> Key: KAFKA-8882
> URL: https://issues.apache.org/jira/browse/KAFKA-8882
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.2.1
> Environment: Linux, Windows
>Reporter: Jakub
>Priority: Major
>
> When there are problems connecting to a Kafka cluster, services using Kafka 
> Streams stop working with the following error message:
> {code:java}
> Encountered the following unexpected Kafka exception during processing, this 
> usually indicate Streams internal errors (...)
> Caused by: org.apache.kafka.common.errors.TimeoutException: Aborted due to 
> timeout
> (...)
> State transition from PENDING_SHUTDOWN to DEAD
> (...)
> All stream threads have died. The instance will be in error state and should 
> be closed.
> {code}
>  
> We tried to use a StateListener to automatically detect and work around this 
> problem. 
>  However, this approach doesn't seem to work correctly:
>  # KafkaStreams correctly transitions from status Error to Pending Shutdown, 
> but then it stays in this status forever.
>  # Occasionally, after registering a listener the status doesn't even change 
> to Error.
>  
> {code:java}
> kafkaStreams.setStateListener(new StateListener() {
>   public void onChange(State stateNew, State stateOld) {
>   if (stateNew == State.ERROR) {
>   kafkaStreams.cleanUp();
>   kafkaStreams.close();
>   
>   } else if (stateNew == State.PENDING_SHUTDOWN) {
> 
>   // this message is displayed, and then nothing else happens
>   LOGGER.info("State is PENDING_SHUTDOWN");
>   
>   } else if (stateNew == State.NOT_RUNNING) {
>   // it never gets here
>   kafkaStreams = createKafkaStreams();
>   kafkaStreams.start();
>   }
>   }
> });
> {code}
>  
> Surprisingly, restarting KafkaStreams outside of a listener works fine.
>  I'm happy to provide more details if required.





[jira] [Commented] (KAFKA-8888) Possible to get null IQ value if reading from wrong instance

2019-09-11 Thread John Roesler (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16927737#comment-16927737
 ] 

John Roesler commented on KAFKA-:
-

Thanks for the report, [~cpettitt-confluent], I agree that we would do well to 
offer stricter guarantees about this.

> Possible to get null IQ value if reading from wrong instance
> 
>
> Key: KAFKA-
> URL: https://issues.apache.org/jira/browse/KAFKA-
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Chris Pettitt
>Priority: Minor
>
> Currently if you try to read a key from the wrong instance (i.e. the 
> partition that the key is mapped to is owned by a different instance) you get 
> null and no indication you're reading from the wrong instance. It would be 
> nice to get some indication that the instance is the wrong instance and not 
> that the value for the key is null.





[jira] [Commented] (KAFKA-8886) Make Authorizer create/delete methods asynchronous

2019-09-11 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16927712#comment-16927712
 ] 

ASF GitHub Bot commented on KAFKA-8886:
---

rajinisivaram commented on pull request #7316: KAFKA-8886; Make Authorizer 
create/delete methods asynchronous
URL: https://github.com/apache/kafka/pull/7316
 
 
   
 



> Make Authorizer create/delete methods asynchronous
> --
>
> Key: KAFKA-8886
> URL: https://issues.apache.org/jira/browse/KAFKA-8886
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>Priority: Major
> Fix For: 2.4.0
>
>
> As discussed on the mailing list, createAcls and deleteAcls should be 
> asynchronous to avoid blocking request threads when updates are made to 
> non-ZK based stores which may block for potentially long durations.





[jira] [Updated] (KAFKA-7941) Connect KafkaBasedLog work thread terminates when getting offsets fails because broker is unavailable

2019-09-11 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch updated KAFKA-7941:
-
Component/s: KafkaConnect

> Connect KafkaBasedLog work thread terminates when getting offsets fails 
> because broker is unavailable
> -
>
> Key: KAFKA-7941
> URL: https://issues.apache.org/jira/browse/KAFKA-7941
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.0.0
>Reporter: Paul Whalen
>Assignee: Paul Whalen
>Priority: Minor
> Fix For: 2.0.2, 2.1.2, 2.2.2, 2.4.0, 2.3.1
>
>
> My team has run into this Connect bug regularly in the last six months while 
> doing infrastructure maintenance that causes intermittent broker availability 
> issues.  I'm a little surprised it exists given how routinely it affects us, 
> so perhaps someone in the know can point out if our setup is somehow just 
> incorrect.  My team is running 2.0.0 on both the broker and client, though 
> from what I can tell from reading the code, the issue continues to exist 
> through 2.2; at least, I was able to write a failing unit test that I believe 
> reproduces it.
> When a {{KafkaBasedLog}} worker thread in the Connect runtime calls 
> {{readLogToEnd}} and brokers are unavailable, the {{TimeoutException}} from 
> the consumer {{endOffsets}} call is uncaught all the way up to the top level 
> {{catch (Throwable t)}}, effectively killing the thread until restarting 
> Connect.  The result is Connect stops functioning entirely, with no 
> indication except for that log line - tasks still show as running.
> The proposed fix is to simply catch and log the {{TimeoutException}}, 
> allowing the worker thread to retry forever.
> Alternatively, perhaps there is no expectation that Connect should be able to 
> recover following broker unavailability, though that would be disappointing. 
> I would at least hope for a louder failure than the single {{ERROR}} log.
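The proposed catch-and-retry could be sketched as follows (a simplified stand-in: java.util.concurrent.TimeoutException here plays the role of the consumer's org.apache.kafka.common.errors.TimeoutException, and the real proposal retries indefinitely rather than a bounded number of times):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.TimeoutException;

public class RetryOnTimeout {
    // Catch the timeout and retry instead of letting it reach the top-level
    // catch (Throwable t) that kills the KafkaBasedLog work thread.
    public static <T> T callWithRetry(Callable<T> endOffsets, int maxAttempts) throws Exception {
        TimeoutException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return endOffsets.call();
            } catch (TimeoutException e) {
                last = e;  // log and retry; brokers may come back
            }
        }
        throw last;
    }
}
```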





[jira] [Updated] (KAFKA-4893) async topic deletion conflicts with max topic length

2019-09-11 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch updated KAFKA-4893:
-
Component/s: log

> async topic deletion conflicts with max topic length
> 
>
> Key: KAFKA-4893
> URL: https://issues.apache.org/jira/browse/KAFKA-4893
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Reporter: Onur Karaman
>Assignee: Colin P. McCabe
>Priority: Major
> Fix For: 2.3.0, 2.1.2, 2.2.2
>
>
> As per the 
> [documentation|http://kafka.apache.org/documentation/#basic_ops_add_topic], 
> topics can be only 249 characters long to line up with typical filesystem 
> limitations:
> {quote}
> Each sharded partition log is placed into its own folder under the Kafka log 
> directory. The name of such folders consists of the topic name, appended by a 
> dash (\-) and the partition id. Since a typical folder name can not be over 
> 255 characters long, there will be a limitation on the length of topic names. 
> We assume the number of partitions will not ever be above 100,000. Therefore, 
> topic names cannot be longer than 249 characters. This leaves just enough 
> room in the folder name for a dash and a potentially 5 digit long partition 
> id.
> {quote}
> {{kafka.common.Topic.maxNameLength}} is set to 249 and is used during 
> validation.
> This limit ends up not being quite right since topic deletion ends up 
> renaming the directory to the form {{topic-partition.uniqueId-delete}} as can 
> be seen in {{LogManager.asyncDelete}}:
> {code}
> val dirName = new StringBuilder(removedLog.name)
>   .append(".")
>   .append(java.util.UUID.randomUUID.toString.replaceAll("-",""))
>   .append(Log.DeleteDirSuffix)
>   .toString()
> {code}
> So the unique id and "-delete" suffix end up hogging some of the characters. 
> Deleting a long-named topic results in a log message such as the following:
> {code}
> kafka.common.KafkaStorageException: Failed to rename log directory from 
> /tmp/kafka-logs0/0-0
>  to 
> /tmp/kafka-logs0/0-0.797bba3fb2464729840f87769243edbb-delete
>   at kafka.log.LogManager.asyncDelete(LogManager.scala:439)
>   at 
> kafka.cluster.Partition$$anonfun$delete$1.apply$mcV$sp(Partition.scala:142)
>   at kafka.cluster.Partition$$anonfun$delete$1.apply(Partition.scala:137)
>   at kafka.cluster.Partition$$anonfun$delete$1.apply(Partition.scala:137)
>   at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:213)
>   at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:221)
>   at kafka.cluster.Partition.delete(Partition.scala:137)
>   at kafka.server.ReplicaManager.stopReplica(ReplicaManager.scala:230)
>   at 
> kafka.server.ReplicaManager$$anonfun$stopReplicas$2.apply(ReplicaManager.scala:260)
>   at 
> kafka.server.ReplicaManager$$anonfun$stopReplicas$2.apply(ReplicaManager.scala:259)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at kafka.server.ReplicaManager.stopReplicas(ReplicaManager.scala:259)
>   at kafka.server.KafkaApis.handleStopReplicaRequest(KafkaApis.scala:174)
>   at kafka.server.KafkaApis.handle(KafkaApis.scala:86)
>   at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:64)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> The topic still exists after this point but has its leader set to -1, and the 
> controller recognizes the topic deletion as incomplete (the topic znode is 
> still in /admin/delete_topics).
> I don't believe LinkedIn has any topic name this long, but I'm filing this 
> ticket in case anyone runs into the problem.
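The arithmetic behind the mismatch can be made explicit (a sketch; the 32-char hex UUID and "-delete" suffix match the rename code quoted above):

```java
public class TopicNameMath {
    static final int MAX_DIR_NAME = 255;

    // Creation-time limit: "<topic>-<partition>" must fit in a 255-char
    // directory name, assuming up to 5 partition digits: 255 - 1 - 5 = 249.
    public static int maxTopicLenForCreate() {
        return MAX_DIR_NAME - 1 - 5;
    }

    // Deletion-time rename: "<topic>-<partition>.<32-hex-uuid>-delete".
    public static int deleteDirNameLen(int topicLen, int partitionDigits) {
        return topicLen + 1 + partitionDigits + 1 + 32 + "-delete".length();
    }
}
```

So a maximal 249-char topic with a 1-digit partition renames to a 291-char directory, well over the 255-char filesystem limit.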





[jira] [Updated] (KAFKA-8570) Downconversion could fail when log contains out of order message formats

2019-09-11 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch updated KAFKA-8570:
-
Component/s: clients

> Downconversion could fail when log contains out of order message formats
> 
>
> Key: KAFKA-8570
> URL: https://issues.apache.org/jira/browse/KAFKA-8570
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Reporter: Dhruvil Shah
>Assignee: Dhruvil Shah
>Priority: Major
> Fix For: 1.1.2, 2.0.2, 2.1.2, 2.2.2, 2.3.1
>
>
> When the log contains out-of-order message formats (for example a v2 message 
> followed by a v1 message), it is possible for down-conversion to fail in 
> certain scenarios where batches are compressed and greater than 1 KB in size. 
> Down-conversion fails with a stack trace like the following:
> java.lang.IllegalArgumentException
> at java.nio.Buffer.limit(Buffer.java:275)
> at 
> org.apache.kafka.common.record.FileLogInputStream$FileChannelRecordBatch.writeTo(FileLogInputStream.java:176)
> at 
> org.apache.kafka.common.record.AbstractRecords.downConvert(AbstractRecords.java:107)
> at 
> org.apache.kafka.common.record.FileRecords.downConvert(FileRecords.java:242)





[jira] [Updated] (KAFKA-8229) Connect Sink Task updates nextCommit when commitRequest is true

2019-09-11 Thread Randall Hauch (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Randall Hauch updated KAFKA-8229:
-
Component/s: KafkaConnect

> Connect Sink Task updates nextCommit when commitRequest is true
> ---
>
> Key: KAFKA-8229
> URL: https://issues.apache.org/jira/browse/KAFKA-8229
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Reporter: Scott Reynolds
>Priority: Major
> Fix For: 2.3.0, 2.2.2
>
>
> Today, when a WorkerSinkTask uses context.requestCommit(), the next call to 
> iteration() causes the commit to happen. As part of the commit execution it 
> also updates the nextCommit timestamp.
> This creates some weird behavior when a SinkTask calls context.requestCommit() 
> multiple times. In our case, we were calling requestCommit() when the number 
> of Kafka records we had processed exceeded a threshold. This resulted in 
> nextCommit being several days in the future, causing the task to commit only 
> when the record threshold was reached.
> We expected the task to commit when the record threshold was reached OR when 
> the timer went off.
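The expected semantics (commit when requested OR when the timer fires, without the request pushing out the deadline) can be sketched as follows (hypothetical names, not the WorkerSinkTask code):

```java
public class CommitTimer {
    private long nextCommitMs;
    private final long intervalMs;
    private boolean commitRequested = false;

    public CommitTimer(long intervalMs, long nowMs) {
        this.intervalMs = intervalMs;
        this.nextCommitMs = nowMs + intervalMs;
    }

    // An explicit request marks a pending commit but leaves the deadline alone.
    public void requestCommit() {
        commitRequested = true;
    }

    /** Returns true if a commit should fire now. */
    public boolean maybeCommit(long nowMs) {
        boolean timerExpired = nowMs >= nextCommitMs;
        if (timerExpired) {
            nextCommitMs = nowMs + intervalMs;  // only the timer moves the deadline
        }
        if (commitRequested || timerExpired) {
            commitRequested = false;
            return true;
        }
        return false;
    }
}
```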





[jira] [Comment Edited] (KAFKA-8883) Kafka Consumer continuously echoing INFO messages Seeking to offset 0 for partition

2019-09-11 Thread Veerabhadra Rao Mallavarapu (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16927588#comment-16927588
 ] 

Veerabhadra Rao Mallavarapu edited comment on KAFKA-8883 at 9/11/19 1:53 PM:
-

It is filling log files very quickly, and the log file size is growing too 
fast. It is very difficult to analyze actual issues because the continuous 
logging of this message causes rapid rolling of the log files.
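As a stopgap while the log level is reconsidered (an assumption about a log4j-based deployment, not an official fix), the consumer's logger can be raised above INFO:

```properties
log4j.logger.org.apache.kafka.clients.consumer.KafkaConsumer=WARN
```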


was (Author: veeru.mallavarapu):
It is filling log files very fast and log file size is growing too fast.

> Kafka Consumer continuously echoing INFO messages Seeking to offset 0 for 
> partition
> ---
>
> Key: KAFKA-8883
> URL: https://issues.apache.org/jira/browse/KAFKA-8883
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, producer 
>Affects Versions: 2.3.0
>Reporter: Veerabhadra Rao Mallavarapu
>Priority: Critical
> Fix For: 2.3.0
>
> Attachments: kafka_230.txt
>
>
> KafkaConsumer is continuously echoing INFO messages, which grows my log file 
> to a very large size within minutes. Please address this as soon as possible. 
> This was previously a DEBUG message; in the latest version, 2.3.0, it was 
> changed to INFO.
> Please refer KafkaConsumer.java line 1545
>  
> 09-06@06:27:38 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-3, groupId=reporting-agent - ProgramLikeCountSplitter - 0] 
> Seeking to offset 0 for partition ProgramLikeCountSplitter-0
> 09-06@06:27:38 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-13, groupId=reporting-agent - ProgramCommented - 0] Seeking 
> to offset 0 for partition ProgramCommented-0
> 09-06@06:27:38 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-2, groupId=reporting-agent - ProgramCommentCountSplitter - 
> 0] Seeking to offset 0 for partition ProgramCommentCountSplitter-0
> 09-06@06:27:38 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-7, groupId=reporting-agent - MigrateUserView - 0] Seeking 
> to offset 0 for partition MigrateUserView-0
> 09-06@06:27:38 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-11, groupId=reporting-agent - MigrateBroadcastSummary - 0] 
> Seeking to offset 0 for partition MigrateBroadcastSummary-0
> 09-06@06:27:39 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-9, groupId=reporting-agent - ElogPlaybackRecord - 0] 
> Seeking to offset 0 for partition ElogPlaybackRecord-0
> 09-06@06:27:39 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-17, groupId=reporting-agent - PortalSearchReport - 0] 
> Seeking to offset 10 for partition PortalSearchReport-0
> 09-06@06:27:39 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-10, groupId=reporting-agent - ProgramLikeCount - 0] Seeking 
> to offset 0 for partition ProgramLikeCount-0
> 09-06@06:27:39 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-19, groupId=reporting-agent - UserLoginReport - 0] Seeking 
> to offset 4 for partition UserLoginReport-0
> 09-06@06:27:39 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-14, groupId=reporting-agent - ChannelChangeReport - 0] 
> Seeking to offset 0 for partition ChannelChangeReport-0
> 09-06@06:27:39 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-20, groupId=reporting-agent - ProgramViewCount - 0] Seeking 
> to offset 0 for partition ProgramViewCount-0
> 09-06@06:27:39 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-5, groupId=reporting-agent - BroadcastSummary - 0] Seeking 
> to offset 0 for partition BroadcastSummary-0
> 09-06@06:27:39 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-4, groupId=reporting-agent - UserView - 0] Seeking to 
> offset 0 for partition UserView-0
> 09-06@06:27:39 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-1, groupId=reporting-agent - ProgramViewed - 0] Seeking to 
> offset 0 for partition ProgramViewed-0
> 09-06@06:27:39 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-18, groupId=reporting-agent - ProgramLiked - 0] Seeking to 
> offset 0 for partition ProgramLiked-0
> 09-06@06:27:39 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-21, groupId=reporting-agent - ProgramChangeReport - 0] 
> Seeking to offset 0 for partition ProgramChangeReport-0
> 09-06@06:27:39 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-16, groupId=reporting-agent - ProgramCommentCount - 0] 
> Seeking to offset 0 for partition ProgramCommentCount-0
> 09-06@06:27:39 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-8, groupId=reporting-agent - ChannelViewReport - 0] Seeking 
> to offset 0 for partition ChannelViewReport-0
> 09-06@06:27:39 INFO (KafkaConsumer.java:1545) - [Consumer 
> 

[jira] [Commented] (KAFKA-8883) Kafka Consumer continuously echoing INFO messages Seeking to offset 0 for partition

2019-09-11 Thread Veerabhadra Rao Mallavarapu (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-8883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16927588#comment-16927588
 ] 

Veerabhadra Rao Mallavarapu commented on KAFKA-8883:


It is filling log files very fast and log file size is growing too fast.

> Kafka Consumer continuously echoing INFO messages Seeking to offset 0 for 
> partition
> ---
>
> Key: KAFKA-8883
> URL: https://issues.apache.org/jira/browse/KAFKA-8883
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, producer 
>Affects Versions: 2.3.0
>Reporter: Veerabhadra Rao Mallavarapu
>Priority: Critical
> Fix For: 2.3.0
>
> Attachments: kafka_230.txt
>
>
> KafkaConsumer is continuously echoing INFO messages, which grows my log file 
> to a very large size within minutes. Please address this as soon as possible. 
> This was previously a DEBUG message; in the latest version, 2.3.0, it was 
> changed to INFO.
> Please refer KafkaConsumer.java line 1545
>  
> 09-06@06:27:38 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-3, groupId=reporting-agent - ProgramLikeCountSplitter - 0] 
> Seeking to offset 0 for partition ProgramLikeCountSplitter-0
> 09-06@06:27:38 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-13, groupId=reporting-agent - ProgramCommented - 0] Seeking 
> to offset 0 for partition ProgramCommented-0
> 09-06@06:27:38 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-2, groupId=reporting-agent - ProgramCommentCountSplitter - 
> 0] Seeking to offset 0 for partition ProgramCommentCountSplitter-0
> 09-06@06:27:38 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-7, groupId=reporting-agent - MigrateUserView - 0] Seeking 
> to offset 0 for partition MigrateUserView-0
> 09-06@06:27:38 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-11, groupId=reporting-agent - MigrateBroadcastSummary - 0] 
> Seeking to offset 0 for partition MigrateBroadcastSummary-0
> 09-06@06:27:39 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-9, groupId=reporting-agent - ElogPlaybackRecord - 0] 
> Seeking to offset 0 for partition ElogPlaybackRecord-0
> 09-06@06:27:39 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-17, groupId=reporting-agent - PortalSearchReport - 0] 
> Seeking to offset 10 for partition PortalSearchReport-0
> 09-06@06:27:39 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-10, groupId=reporting-agent - ProgramLikeCount - 0] Seeking 
> to offset 0 for partition ProgramLikeCount-0
> 09-06@06:27:39 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-19, groupId=reporting-agent - UserLoginReport - 0] Seeking 

[jira] [Updated] (KAFKA-8883) Kafka Consumer continuously echoing INFO messages Seeking to offset 0 for partition

2019-09-11 Thread Veerabhadra Rao Mallavarapu (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Veerabhadra Rao Mallavarapu updated KAFKA-8883:
---
Component/s: (was: KafkaConnect)
 producer 
 consumer
   Priority: Critical  (was: Major)

> Kafka Consumer continuously echoing INFO messages Seeking to offset 0 for 
> partition
> ---
>
> Key: KAFKA-8883
> URL: https://issues.apache.org/jira/browse/KAFKA-8883
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, producer 
>Affects Versions: 2.3.0
>Reporter: Veerabhadra Rao Mallavarapu
>Priority: Critical
> Fix For: 2.3.0
>
> Attachments: kafka_230.txt
>
>
> KafkaConsumer is continuously echoing INFO messages, which causes my log 
> file to grow very large within minutes. Please address this as soon as 
> possible. In earlier versions this was a DEBUG message; in the latest 
> version, 2.3.0, it was changed to INFO.
> Please refer to KafkaConsumer.java line 1545.
>  
> 09-06@06:27:38 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-3, groupId=reporting-agent - ProgramLikeCountSplitter - 0] 
> Seeking to offset 0 for partition ProgramLikeCountSplitter-0
> 09-06@06:27:38 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-13, groupId=reporting-agent - ProgramCommented - 0] Seeking 
> to offset 0 for partition ProgramCommented-0
> 09-06@06:27:38 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-2, groupId=reporting-agent - ProgramCommentCountSplitter - 
> 0] Seeking to offset 0 for partition ProgramCommentCountSplitter-0
> 09-06@06:27:38 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-7, groupId=reporting-agent - MigrateUserView - 0] Seeking 
> to offset 0 for partition MigrateUserView-0
> 09-06@06:27:38 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-11, groupId=reporting-agent - MigrateBroadcastSummary - 0] 
> Seeking to offset 0 for partition MigrateBroadcastSummary-0
> 09-06@06:27:39 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-9, groupId=reporting-agent - ElogPlaybackRecord - 0] 
> Seeking to offset 0 for partition ElogPlaybackRecord-0
> 09-06@06:27:39 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-17, groupId=reporting-agent - PortalSearchReport - 0] 
> Seeking to offset 10 for partition PortalSearchReport-0
> 09-06@06:27:39 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-10, groupId=reporting-agent - ProgramLikeCount - 0] Seeking 
> to offset 0 for partition ProgramLikeCount-0
> 09-06@06:27:39 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-19, groupId=reporting-agent - UserLoginReport - 0] Seeking 
> to offset 4 for partition UserLoginReport-0
> 09-06@06:27:39 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-14, groupId=reporting-agent - ChannelChangeReport - 0] 
> Seeking to offset 0 for partition ChannelChangeReport-0
> 09-06@06:27:39 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-20, groupId=reporting-agent - ProgramViewCount - 0] Seeking 
> to offset 0 for partition ProgramViewCount-0
> 09-06@06:27:39 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-5, groupId=reporting-agent - BroadcastSummary - 0] Seeking 
> to offset 0 for partition BroadcastSummary-0
> 09-06@06:27:39 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-4, groupId=reporting-agent - UserView - 0] Seeking to 
> offset 0 for partition UserView-0
> 09-06@06:27:39 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-1, groupId=reporting-agent - ProgramViewed - 0] Seeking to 
> offset 0 for partition ProgramViewed-0
> 09-06@06:27:39 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-18, groupId=reporting-agent - ProgramLiked - 0] Seeking to 
> offset 0 for partition ProgramLiked-0
> 09-06@06:27:39 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-21, groupId=reporting-agent - ProgramChangeReport - 0] 
> Seeking to offset 0 for partition ProgramChangeReport-0
> 09-06@06:27:39 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-16, groupId=reporting-agent - ProgramCommentCount - 0] 
> Seeking to offset 0 for partition ProgramCommentCount-0
> 09-06@06:27:39 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-8, groupId=reporting-agent - ChannelViewReport - 0] Seeking 
> to offset 0 for partition ChannelViewReport-0
> 09-06@06:27:39 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-15, groupId=reporting-agent - ProgramFavoriteCountSplitter 
> - 0] Seeking to offset 0 for partition ProgramFavoriteCountSplitter-0
> 09-06@06:27:39 INFO (KafkaConsumer.java:1545) - [Consumer 
> clientId=consumer-6, groupId=reporting-agent - 
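
A possible mitigation while the log level is discussed (this is not from the ticket, just a sketch): raise the log level of the consumer class in the application's log4j configuration so the per-partition "Seeking to offset" INFO lines are suppressed. The logger name below is assumed from the `KafkaConsumer.java:1545` location in the logs.

```properties
# Suppress per-partition "Seeking to offset ..." INFO lines (sketch; logger
# name assumed to match the emitting class shown in the log output)
log4j.logger.org.apache.kafka.clients.consumer.KafkaConsumer=WARN
```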

[jira] [Updated] (KAFKA-8898) if there is no message for poll, kafka consumer also apply memory

2019-09-11 Thread linking12 (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

linking12 updated KAFKA-8898:
-
Summary: if there is no message for poll, kafka consumer also apply memory  
(was: if there is no message for poll, kafka consumer apply memory)

> if there is no message for poll, kafka consumer also apply memory
> -
>
> Key: KAFKA-8898
> URL: https://issues.apache.org/jira/browse/KAFKA-8898
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 2.1.1
>Reporter: linking12
>Priority: Blocker
>  Labels: performance
>
> When poll() is called but there are no records, the consumer still 
> allocates roughly 1000 bytes of memory.
> fetched = new HashMap<>() is not a good idea: it allocates heap memory 
> even when there are no messages.
> I think fetched = new HashMap<>() should only happen when records exist.
>  
> ```
> public Map<TopicPartition, List<ConsumerRecord<K, V>>> fetchedRecords() {
>     Map<TopicPartition, List<ConsumerRecord<K, V>>> fetched = new HashMap<>();
>     int recordsRemaining = maxPollRecords;
> ```
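
The allocation pattern the reporter objects to, and the lazy alternative they suggest, can be sketched with simplified stand-in types (this is not Kafka's actual Fetcher; `FetchBuffer` and its fields are hypothetical):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified illustration of the reported issue: an eager HashMap is
// allocated on every call, even when there is nothing to drain.
class FetchBuffer {
    private final List<String> completedFetches; // stand-in for the fetch queue

    FetchBuffer(List<String> completedFetches) {
        this.completedFetches = completedFetches;
    }

    // Eager version: always allocates a new HashMap, as in the snippet above.
    Map<String, List<String>> fetchedRecordsEager() {
        Map<String, List<String>> fetched = new HashMap<>();
        for (String record : completedFetches) {
            fetched.computeIfAbsent("partition-0", k -> new ArrayList<>()).add(record);
        }
        return fetched;
    }

    // Lazy version: return the shared immutable empty map when nothing was
    // fetched, so empty polls allocate nothing.
    Map<String, List<String>> fetchedRecordsLazy() {
        if (completedFetches.isEmpty()) {
            return Collections.emptyMap();
        }
        return fetchedRecordsEager();
    }
}
```

With the lazy variant, a consumer polling an idle topic returns the same shared empty map on every call instead of a fresh `HashMap`.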



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (KAFKA-8898) if there is no message for poll, kafka consumer apply memory

2019-09-11 Thread linking12 (Jira)
linking12 created KAFKA-8898:


 Summary: if there is no message for poll, kafka consumer apply 
memory
 Key: KAFKA-8898
 URL: https://issues.apache.org/jira/browse/KAFKA-8898
 Project: Kafka
  Issue Type: Bug
  Components: consumer
Affects Versions: 2.1.1
Reporter: linking12


When poll() is called but there are no records, the consumer still allocates 
roughly 1000 bytes of memory.

fetched = new HashMap<>() is not a good idea: it allocates heap memory even 
when there are no messages.

I think fetched = new HashMap<>() should only happen when records exist.

 

```

public Map<TopicPartition, List<ConsumerRecord<K, V>>> fetchedRecords() {

    Map<TopicPartition, List<ConsumerRecord<K, V>>> fetched = new HashMap<>();

    int recordsRemaining = maxPollRecords;

```



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (KAFKA-8897) Increase Version of RocksDB

2019-09-11 Thread Bruno Cadonna (Jira)
Bruno Cadonna created KAFKA-8897:


 Summary: Increase Version of RocksDB
 Key: KAFKA-8897
 URL: https://issues.apache.org/jira/browse/KAFKA-8897
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Bruno Cadonna


A higher version (6+) of RocksDB is needed for some metrics specified in 
KIP-471. 



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Assigned] (KAFKA-8855) Collect and Expose Client's Name and Version in the Brokers

2019-09-11 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot reassigned KAFKA-8855:
--

Assignee: David Jacot

> Collect and Expose Client's Name and Version in the Brokers
> ---
>
> Key: KAFKA-8855
> URL: https://issues.apache.org/jira/browse/KAFKA-8855
> Project: Kafka
>  Issue Type: Improvement
>Reporter: David Jacot
>Assignee: David Jacot
>Priority: Major
>
> Implements KIP-511 as documented here: 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-511%3A+Collect+and+Expose+Client%27s+Name+and+Version+in+the+Brokers]



--
This message was sent by Atlassian Jira
(v8.3.2#803003)