[jira] [Updated] (KAFKA-8289) KTable<Windowed<String>, Long> can't be suppressed

2019-04-26 Thread Xiaolin Jia (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaolin Jia updated KAFKA-8289:
---
Fix Version/s: 2.2.1

> KTable<Windowed<String>, Long> can't be suppressed
> ---
>
> Key: KAFKA-8289
> URL: https://issues.apache.org/jira/browse/KAFKA-8289
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.2.0
> Environment: Broker on Linux; stream app on my Win10 laptop. 
> I added one line, log.message.timestamp.type=LogAppendTime, to my broker's 
> server.properties. The stream app uses all default configs.
>Reporter: Xiaolin Jia
>Assignee: John Roesler
>Priority: Major
> Fix For: 2.2.1
>
>
> I wrote a simple stream app following the official developer guide's [Stream 
> DSL|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#window-final-results]
>  section, but I got more than one [Window Final 
> Results|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#id31]
>  emission from a single session time window.
> Two tickers send to the same topic: time ticker A emits (4, A) every 25s and 
> time ticker B emits (4, B) every 25s.
> Below is my stream app code:
> {code:java}
> kstreams[0]
> .peek((k, v) -> log.info("--> ping, k={},v={}", k, v))
> .groupBy((k, v) -> v, Grouped.with(Serdes.String(), Serdes.String()))
> .windowedBy(SessionWindows.with(Duration.ofSeconds(100)).grace(Duration.ofMillis(20)))
> .count()
> .suppress(Suppressed.untilWindowCloses(BufferConfig.unbounded()))
> .toStream().peek((k, v) -> log.info("window={},k={},v={}", k.window(), 
> k.key(), v));
> {code}
> Here is my log output:
> {noformat}
> 2019-04-24 20:00:26.142  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:00:47.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556106587744, 
> endMs=1556107129191},k=A,v=20
> 2019-04-24 20:00:51.071  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:16.065  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:41.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:06.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:31.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:56.208  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:21.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:46.078  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:04.684  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:11.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:19.371  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107426409},k=B,v=9
> 2019-04-24 20:04:19.372  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107445012},k=A,v=1
> 2019-04-24 20:04:29.604  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:36.067  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:49.715  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107451397},k=B,v=10
> 2019-04-24 20:04:49.716  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107469935},k=A,v=2
> 2019-04-24 20:04:54.593  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:01.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:05:19.599  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:20.045  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107476398},k=B,v=11
> 2019-04-24 20:05:20.047  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107501398},k=B,v=12
> 2019-04-24 20:05:26.075  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:05:44.598
> {noformat}
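For context on {{untilWindowCloses}} semantics: a suppressed window's final result should be emitted exactly once, when stream time (the maximum observed record timestamp) passes the window end plus the grace period. Below is a minimal sketch of that rule only, not Streams internals; the constants reuse the first emitted window from the log above:

```java
public class WindowCloseSketch {
    // A window is "closed", and its single final result may be emitted, only
    // once stream time has advanced past window end + grace.
    static boolean windowClosed(long windowEndMs, long graceMs, long streamTimeMs) {
        return streamTimeMs > windowEndMs + graceMs;
    }

    public static void main(String[] args) {
        long end = 1_556_107_129_191L; // endMs of the first emitted window above
        long grace = 20L;              // grace(Duration.ofMillis(20)) in the app
        System.out.println(windowClosed(end, grace, end + 10)); // false: still within grace
        System.out.println(windowClosed(end, grace, end + 21)); // true: closed, emit once
    }
}
```

Seeing repeated emissions for the same session, as in the log above, suggests results are leaving the suppression buffer before this condition holds.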

[jira] [Commented] (KAFKA-8172) FileSystemException: The process cannot access the file because it is being used by another process

2019-04-26 Thread Dongjoon Hyun (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16827447#comment-16827447
 ] 

Dongjoon Hyun commented on KAFKA-8172:
--

Can we remove `2.2.0` from the `Fix version` because 2.2.0 is already released?

> FileSystemException: The process cannot access the file because it is being 
> used by another process
> ---
>
> Key: KAFKA-8172
> URL: https://issues.apache.org/jira/browse/KAFKA-8172
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.1.1, 2.2.0, 2.1.1
> Environment: Windows
>Reporter: Bharat Kondeti
>Priority: Major
> Fix For: 1.1.1, 2.2.0, 2.1.1
>
> Attachments: 
> 0001-Fix-to-close-the-handlers-before-renaming-files-and-.patch
>
>
> Fix to close file handlers before renaming files / directories and open them 
> back if required
> Following are the file renaming scenarios:
>  * Files are renamed to .deleted so they can be deleted
>  * .cleaned files are renamed to .swap as part of Log.replaceSegments flow
>  * .swap files are renamed to original files as part of Log.replaceSegments 
> flow
> Following are the folder renaming scenarios:
>  * When a topic is marked for deletion, folder is renamed
>  * As part of replacing current logs with future logs in LogManager
> In the above scenarios, if file handles are not closed, we get a 
> file-access-violation exception.
> The idea is to close the logs and file segments before doing a rename, and 
> open them back up if required.
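The close-before-rename pattern proposed above can be sketched in plain NIO (paths and method names here are illustrative, not the attached patch's actual code): release the handle, perform the atomic move, then reopen the renamed file if it is still needed.

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.StandardOpenOption;

public class CloseBeforeRenameSketch {
    // On Windows, Files.move fails with FileSystemException if another open
    // handle exists on the file, so the handle must be closed first.
    static FileChannel renameAndReopen(FileChannel open, Path from, Path to) throws IOException {
        open.close(); // release the handle before renaming
        Files.move(from, to, StandardCopyOption.ATOMIC_MOVE);
        return FileChannel.open(to, StandardOpenOption.READ, StandardOpenOption.WRITE);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("segments");
        Path log = dir.resolve("00000000000000000000.log");
        FileChannel ch = FileChannel.open(log, StandardOpenOption.CREATE,
                StandardOpenOption.READ, StandardOpenOption.WRITE);
        ch = renameAndReopen(ch, log, dir.resolve("00000000000000000000.log.deleted"));
        System.out.println(Files.exists(dir.resolve("00000000000000000000.log.deleted")));
        ch.close();
    }
}
```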
> *Segments Deletion Scenario*
> [2018-06-01 17:00:07,566] ERROR Error while deleting segments for test4-1 in 
> dir D:\data\Kafka\kafka-logs (kafka.server.LogDirFailureChannel)
> java.nio.file.FileSystemException: 
> D:\data\Kafka\kafka-logs\test4-1\.log -> 
> D:\data\Kafka\kafka-logs\test4-1\.log.deleted: The 
> process cannot access the file because it is being used by another process.
> at 
> sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
>  at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
>  at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:387)
>  at 
> sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
>  at java.nio.file.Files.move(Files.java:1395)
>  at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:697)
>  at org.apache.kafka.common.record.FileRecords.renameTo(FileRecords.java:212)
>  at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:415)
>  at kafka.log.Log.kafka$log$Log$$asyncDeleteSegment(Log.scala:1601)
>  at kafka.log.Log.kafka$log$Log$$deleteSegment(Log.scala:1588)
>  at 
> kafka.log.Log$$anonfun$deleteSegments$1$$anonfun$apply$mcI$sp$1.apply(Log.scala:1170)
>  at 
> kafka.log.Log$$anonfun$deleteSegments$1$$anonfun$apply$mcI$sp$1.apply(Log.scala:1170)
>  at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>  at kafka.log.Log$$anonfun$deleteSegments$1.apply$mcI$sp(Log.scala:1170)
>  at kafka.log.Log$$anonfun$deleteSegments$1.apply(Log.scala:1161)
>  at kafka.log.Log$$anonfun$deleteSegments$1.apply(Log.scala:1161)
>  at kafka.log.Log.maybeHandleIOException(Log.scala:1678)
>  at kafka.log.Log.deleteSegments(Log.scala:1161)
>  at kafka.log.Log.deleteOldSegments(Log.scala:1156)
>  at kafka.log.Log.deleteRetentionMsBreachedSegments(Log.scala:1228)
>  at kafka.log.Log.deleteOldSegments(Log.scala:1222)
>  at kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:854)
>  at kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:852)
>  at 
> scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
>  at scala.collection.immutable.List.foreach(List.scala:392)
>  at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
>  at kafka.log.LogManager.cleanupLogs(LogManager.scala:852)
>  at kafka.log.LogManager$$anonfun$startup$1.apply$mcV$sp(LogManager.scala:385)
>  at 
> kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110)
>  at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:62)
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>  at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecu

[jira] [Assigned] (KAFKA-8199) ClassCastException when trying to groupBy after suppress

2019-04-26 Thread Jose Lopez (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jose Lopez reassigned KAFKA-8199:
-

Assignee: Jose Lopez

> ClassCastException when trying to groupBy after suppress
> 
>
> Key: KAFKA-8199
> URL: https://issues.apache.org/jira/browse/KAFKA-8199
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.0
>Reporter: Bill Bejeck
>Assignee: Jose Lopez
>Priority: Major
> Fix For: 2.3.0
>
>
> A topology with a groupBy after a suppress operation results in a 
> ClassCastException
>  The following sample topology
> {code:java}
> Properties properties = new Properties();
> properties.put(StreamsConfig.APPLICATION_ID_CONFIG, "appid");
> properties.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost");
>
> StreamsBuilder builder = new StreamsBuilder();
> builder.stream("topic")
>        .groupByKey()
>        .windowedBy(TimeWindows.of(Duration.ofSeconds(30)))
>        .count()
>        .suppress(Suppressed.untilTimeLimit(Duration.ofHours(1), BufferConfig.unbounded()))
>        .groupBy((k, v) -> KeyValue.pair(k, v))
>        .count()
>        .toStream();
> builder.build(properties);
> {code}
> results in this exception:
> {noformat}
> java.lang.ClassCastException: 
> org.apache.kafka.streams.kstream.internals.KTableImpl$$Lambda$4/2084435065 
> cannot be cast to 
> org.apache.kafka.streams.kstream.internals.KTableProcessorSupplier{noformat}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (KAFKA-8270) Kafka retention hour is not working

2019-04-26 Thread Richard Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Yu reassigned KAFKA-8270:
-

Assignee: (was: Richard Yu)

> Kafka retention hour is not working
> ---
>
> Key: KAFKA-8270
> URL: https://issues.apache.org/jira/browse/KAFKA-8270
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Reporter: Jiangtao Liu
>Priority: Major
>  Labels: storage
> Attachments: Screen Shot 2019-04-20 at 10.57.59 PM.png
>
>
> Kafka logs are not deleted after the configured retention period (12 hours 
> of log retention).
>  
> What does our Kafka cluster look like?
> There are 6 brokers deployed with Kafka version 1.1.1.
>  
> Is it reproducible?
> I am not sure, since our Kafka cluster had been working well for over 1.5 
> years without retention issues until 4/13/2019 ~ 4/20/2019.
>  
> Is it related to the active segment?
> As far as I know, Kafka will not delete the active segment, but in my case 
> those old logs are not active; they should be eligible for deletion.
>  
> What's the current status?
> Those old logs were deleted after I manually did a rolling restart of the 
> Kafka servers with an adjusted retention-hours setting. (I tried this to 
> force retention to work, not because I really wanted to change the 
> retention hours. The workaround did work, though not immediately; retention 
> started working a couple of hours after applying the change and 
> restarting.) Our Kafka storage is now back to normal; please check the 
> screenshot attached to this ticket.
> A sample old log is included here for a better understanding of the 
> retention issue.
> {noformat}
> // it had been there since 4/12
> -rw-r--r-- 1 root root 136866627 Apr 12 04:33 002581377820.log
> // It was still being held open by Kafka when I checked it with lsof on
> // 4/19/2019, before the rolling restart with the retention-hours adjustment.
> java 20281 0 1678u REG 202,32 136866627 1074562295 
> /kafka/data/.../002581377820.log
> {noformat}
>  
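The "active segment" point above is the usual first suspect in retention issues; time-based retention roughly reduces to the comparison below. This is a simplified sketch for illustration only, not broker code:

```java
import java.time.Duration;
import java.time.Instant;

public class RetentionSketch {
    // The active segment is never deleted; an older segment becomes deletable
    // once its newest data is older than the retention window.
    static boolean deletable(Instant largestRecordTs, Instant now,
                             Duration retention, boolean isActiveSegment) {
        if (isActiveSegment) return false;
        return largestRecordTs.isBefore(now.minus(retention));
    }

    public static void main(String[] args) {
        Instant now = Instant.parse("2019-04-20T00:00:00Z");
        Duration retention = Duration.ofHours(12);
        Instant apr12 = Instant.parse("2019-04-12T04:33:00Z"); // mtime of the sample log
        System.out.println(deletable(apr12, now, retention, false)); // true: well past 12h
        System.out.println(deletable(apr12, now, retention, true));  // false: still active
    }
}
```

Note that an open file handle alone does not prove a segment is active; brokers hold handles on old segments too.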





[jira] [Commented] (KAFKA-8270) Kafka retention hour is not working

2019-04-26 Thread Richard Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16827381#comment-16827381
 ] 

Richard Yu commented on KAFKA-8270:
---

Oh [~tony2011], sorry, I didn't even know the ticket was assigned to me. If 
you wish to work on the ticket, feel free to take it.

> Kafka retention hour is not working
> ---
>
> Key: KAFKA-8270
> URL: https://issues.apache.org/jira/browse/KAFKA-8270
> Project: Kafka
>  Issue Type: Bug
>  Components: log
>Reporter: Jiangtao Liu
>Assignee: Richard Yu
>Priority: Major
>  Labels: storage
> Attachments: Screen Shot 2019-04-20 at 10.57.59 PM.png
>
>
> Kafka logs are not deleted after the configured retention period (12 hours 
> of log retention).
>  
> What does our Kafka cluster look like?
> There are 6 brokers deployed with Kafka version 1.1.1.
>  
> Is it reproducible?
> I am not sure, since our Kafka cluster had been working well for over 1.5 
> years without retention issues until 4/13/2019 ~ 4/20/2019.
>  
> Is it related to the active segment?
> As far as I know, Kafka will not delete the active segment, but in my case 
> those old logs are not active; they should be eligible for deletion.
>  
> What's the current status?
> Those old logs were deleted after I manually did a rolling restart of the 
> Kafka servers with an adjusted retention-hours setting. (I tried this to 
> force retention to work, not because I really wanted to change the 
> retention hours. The workaround did work, though not immediately; retention 
> started working a couple of hours after applying the change and 
> restarting.) Our Kafka storage is now back to normal; please check the 
> screenshot attached to this ticket.
> A sample old log is included here for a better understanding of the 
> retention issue.
> {noformat}
> // it had been there since 4/12
> -rw-r--r-- 1 root root 136866627 Apr 12 04:33 002581377820.log
> // It was still being held open by Kafka when I checked it with lsof on
> // 4/19/2019, before the rolling restart with the retention-hours adjustment.
> java 20281 0 1678u REG 202,32 136866627 1074562295 
> /kafka/data/.../002581377820.log
> {noformat}
>  





[jira] [Commented] (KAFKA-8299) Add type-safe instantiation of generic classes to AbstractConfig

2019-04-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16827367#comment-16827367
 ] 

ASF GitHub Bot commented on KAFKA-8299:
---

C0urante commented on pull request #6644: KAFKA-8299: Add generic-safe 
getConfiguredInstance() methods to AbstractConfig
URL: https://github.com/apache/kafka/pull/6644
 
 
   [Jira](https://issues.apache.org/jira/browse/KAFKA-8299)
   
   The changes here add four new methods to the `AbstractConfig` class that 
mirror the existing `getConfiguredInstance(...)` and 
`getConfiguredInstances(...)` methods; the only difference is that any 
parameter of type `Class<T>` is changed to `TypeLiteral<T>` and that type 
checking is done with `TypeUtils.isInstance(...)` instead of 
`Class.isInstance(...)`.
   
   A new unit test (`AbstractConfigTest.testGenericClassInstantiation()`) 
confirms this behavior, including the lack of unchecked cast warnings and the 
throwing of a runtime exception when an attempt is made to instantiate a class 
that extends from/implements the proper raw type but with improper type 
parameters.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Add type-safe instantiation of generic classes to AbstractConfig
> 
>
> Key: KAFKA-8299
> URL: https://issues.apache.org/jira/browse/KAFKA-8299
> Project: Kafka
>  Issue Type: Improvement
>  Components: config
>Reporter: Chris Egerton
>Assignee: Chris Egerton
>Priority: Minor
>
> {{AbstractConfig.getConfiguredInstance(String key, Class<T> klass)}} and 
> other similar methods aren't type-safe for generic types. For example, the 
> following code compiles but generates a runtime exception when the created 
> {{Consumer}} is invoked:
>  
> {code:java}
> public class KafkaIssueSnippet {
>     public static class PrintInt implements Consumer<Integer> {
>         @Override
>         public void accept(Integer i) {
>             System.out.println(i);
>         }
>     }
>
>     public static void main(String[] args) {
>         final String stringConsumerProp = "string.consumer.class";
>         AbstractConfig config = new AbstractConfig(
>             new ConfigDef().define(
>                 stringConsumerProp,
>                 ConfigDef.Type.CLASS,
>                 ConfigDef.Importance.HIGH,
>                 "A class that implements Consumer<String>"
>             ),
>             Collections.singletonMap(
>                 stringConsumerProp,
>                 PrintInt.class.getName()
>             )
>         );
>         Consumer<String> stringConsumer = config.getConfiguredInstance(
>             stringConsumerProp,
>             Consumer.class
>         );
>         stringConsumer.accept("Oops! ClassCastException");
>     }
> }{code}
> The compiler (rightfully so) generates a warning about the unchecked cast 
> from the raw {{Consumer}} to {{Consumer<String>}}, indicating that exactly 
> this sort of thing may happen, but it would be nice if we didn't have to 
> worry about this in the first place and instead had the same guarantees for 
> generic types that we do for non-generic types: that either the 
> {{getConfiguredInstance(...)}} method returns an object that we know for 
> sure is an instance of the requested type, or an exception is thrown.
> Apache Commons contains a useful reflection library that could possibly be 
> used to bridge this gap; specifically, its 
> [TypeUtils|https://commons.apache.org/proper/commons-lang/apidocs/org/apache/commons/lang3/reflect/TypeUtils.html]
>  and 
> [TypeLiteral|https://commons.apache.org/proper/commons-lang/apidocs/org/apache/commons/lang3/reflect/TypeLiteral.html]
>  classes could be used to add new {{getConfiguredInstance}} and 
> {{getConfiguredInstances}} methods to the {{AbstractConfig}} class that 
> accept instances of {{TypeLiteral<T>}} instead of {{Class<T>}} and then 
> perform type checking to ensure that the requested class actually 
> implements or extends the requested type.
> Since this affects public API, it's possible a KIP will be required, but 
> the changes are pretty lightweight (four new methods that heavily resemble 
> existing ones). If a contributor or committer, especially one familiar with 
> this section of the codebase, has an opinion on the necessity of a KIP, 
> their input would be appreciated.
>  




[jira] [Created] (KAFKA-8300) kafka broker did not recovery from quota limit after quota setting is removed

2019-04-26 Thread Yu Yang (JIRA)
Yu Yang created KAFKA-8300:
--

 Summary: kafka broker did not recovery from quota limit after 
quota setting is removed
 Key: KAFKA-8300
 URL: https://issues.apache.org/jira/browse/KAFKA-8300
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 2.1.0
 Environment: Description:  Ubuntu 14.04.5 LTS
Release:14.04
Reporter: Yu Yang
 Attachments: Screen Shot 2019-04-26 at 4.02.03 PM.png

We applied quota management to one of our clusters. After applying the quota, 
we saw the following errors in the Kafka server log, and the broker's network 
traffic did not recover even after we removed the quota settings. Any 
insights on this? 

{code}
[2019-04-26 20:59:42,359] WARN Attempting to send response via channel for which there is no open connection, connection id 10.1.239.72:9093-10.3.57.190:59846-4925637 (kafka.network.Processor)
[2019-04-26 20:59:43,518] WARN Attempting to send response via channel for which there is no open connection, connection id 10.1.239.72:9093-10.3.230.92:49788-4925646 (kafka.network.Processor)
[2019-04-26 20:59:44,343] WARN Attempting to send response via channel for which there is no open connection, connection id 10.1.239.72:9093-10.3.32.233:35714-4925663 (kafka.network.Processor)
[2019-04-26 20:59:45,448] WARN Attempting to send response via channel for which there is no open connection, connection id 10.1.239.72:9093-10.3.55.250:52884-4925658 (kafka.network.Processor)
[2019-04-26 20:59:45,544] WARN Attempting to send response via channel for which there is no open connection, connection id 10.1.239.72:9093-10.3.55.24:41608-4925687 (kafka.network.Processor)
{code}
 

!Screen Shot 2019-04-26 at 4.02.03 PM.png|width=640px!





[jira] [Updated] (KAFKA-8300) kafka broker did not recover from quota limit after quota setting is removed

2019-04-26 Thread Yu Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Yang updated KAFKA-8300:
---
Summary: kafka broker did not recover from quota limit after quota setting 
is removed  (was: kafka broker did not recovery from quota limit after quota 
setting is removed)

> kafka broker did not recover from quota limit after quota setting is removed
> 
>
> Key: KAFKA-8300
> URL: https://issues.apache.org/jira/browse/KAFKA-8300
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.1.0
> Environment: Description: Ubuntu 14.04.5 LTS
> Release:  14.04
>Reporter: Yu Yang
>Priority: Major
> Attachments: Screen Shot 2019-04-26 at 4.02.03 PM.png
>
>
> We applied quota management to one of our clusters. After applying the 
> quota, we saw the following errors in the Kafka server log, and the 
> broker's network traffic did not recover even after we removed the quota 
> settings. Any insights on this? 
> {code}
> [2019-04-26 20:59:42,359] WARN Attempting to send response via channel for which there is no open connection, connection id 10.1.239.72:9093-10.3.57.190:59846-4925637 (kafka.network.Processor)
> [2019-04-26 20:59:43,518] WARN Attempting to send response via channel for which there is no open connection, connection id 10.1.239.72:9093-10.3.230.92:49788-4925646 (kafka.network.Processor)
> [2019-04-26 20:59:44,343] WARN Attempting to send response via channel for which there is no open connection, connection id 10.1.239.72:9093-10.3.32.233:35714-4925663 (kafka.network.Processor)
> [2019-04-26 20:59:45,448] WARN Attempting to send response via channel for which there is no open connection, connection id 10.1.239.72:9093-10.3.55.250:52884-4925658 (kafka.network.Processor)
> [2019-04-26 20:59:45,544] WARN Attempting to send response via channel for which there is no open connection, connection id 10.1.239.72:9093-10.3.55.24:41608-4925687 (kafka.network.Processor)
> {code}
>  
> !Screen Shot 2019-04-26 at 4.02.03 PM.png|width=640px!





[jira] [Created] (KAFKA-8299) Add type-safe instantiation of generic classes to AbstractConfig

2019-04-26 Thread Chris Egerton (JIRA)
Chris Egerton created KAFKA-8299:


 Summary: Add type-safe instantiation of generic classes to 
AbstractConfig
 Key: KAFKA-8299
 URL: https://issues.apache.org/jira/browse/KAFKA-8299
 Project: Kafka
  Issue Type: Improvement
  Components: config
Reporter: Chris Egerton
Assignee: Chris Egerton


{{AbstractConfig.getConfiguredInstance(String key, Class<T> klass)}} and other 
similar methods aren't type-safe for generic types. For example, the following 
code compiles but generates a runtime exception when the created {{Consumer}} 
is invoked:

 
{code:java}
public class KafkaIssueSnippet {
    public static class PrintInt implements Consumer<Integer> {
        @Override
        public void accept(Integer i) {
            System.out.println(i);
        }
    }

    public static void main(String[] args) {
        final String stringConsumerProp = "string.consumer.class";

        AbstractConfig config = new AbstractConfig(
            new ConfigDef().define(
                stringConsumerProp,
                ConfigDef.Type.CLASS,
                ConfigDef.Importance.HIGH,
                "A class that implements Consumer<String>"
            ),
            Collections.singletonMap(
                stringConsumerProp,
                PrintInt.class.getName()
            )
        );

        Consumer<String> stringConsumer = config.getConfiguredInstance(
            stringConsumerProp,
            Consumer.class
        );

        stringConsumer.accept("Oops! ClassCastException");
    }
}{code}
The compiler (rightfully so) generates a warning about the unchecked cast from 
the raw {{Consumer}} to {{Consumer<String>}}, indicating that exactly this 
sort of thing may happen, but it would be nice if we didn't have to worry 
about this in the first place and instead had the same guarantees for generic 
types that we do for non-generic types: that either the 
{{getConfiguredInstance(...)}} method returns an object that we know for sure 
is an instance of the requested type, or an exception is thrown.

Apache Commons contains a useful reflection library that could possibly be 
used to bridge this gap; specifically, its 
[TypeUtils|https://commons.apache.org/proper/commons-lang/apidocs/org/apache/commons/lang3/reflect/TypeUtils.html]
 and 
[TypeLiteral|https://commons.apache.org/proper/commons-lang/apidocs/org/apache/commons/lang3/reflect/TypeLiteral.html]
 classes could be used to add new {{getConfiguredInstance}} and 
{{getConfiguredInstances}} methods to the {{AbstractConfig}} class that accept 
instances of {{TypeLiteral<T>}} instead of {{Class<T>}} and then perform type 
checking to ensure that the requested class actually implements or extends 
the requested type.

Since this affects public API, it's possible a KIP will be required, but the 
changes are pretty lightweight (four new methods that heavily resemble 
existing ones). If a contributor or committer, especially one familiar with 
this section of the codebase, has an opinion on the necessity of a KIP, their 
input would be appreciated.
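The ticket proposes commons-lang3's {{TypeLiteral}}/{{TypeUtils}}. As a dependency-free illustration of the same idea, here is a hypothetical helper (not the proposed API) that inspects a class's generic interface declarations via java.lang.reflect and rejects a mismatched type argument at configuration time instead of at call time:

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;
import java.util.function.Consumer;

public class GenericCheckSketch {
    // Hypothetical helper: instantiate klass only if it directly implements
    // Consumer<elementType>, checked via its generic interface declarations.
    static <T> Consumer<T> newCheckedConsumer(Class<?> klass, Class<T> elementType) {
        for (Type iface : klass.getGenericInterfaces()) {
            if (iface instanceof ParameterizedType) {
                ParameterizedType pt = (ParameterizedType) iface;
                if (pt.getRawType() == Consumer.class
                        && pt.getActualTypeArguments()[0] == elementType) {
                    try {
                        @SuppressWarnings("unchecked")
                        Consumer<T> c = (Consumer<T>) klass.getDeclaredConstructor().newInstance();
                        return c;
                    } catch (ReflectiveOperationException e) {
                        throw new RuntimeException(e);
                    }
                }
            }
        }
        throw new RuntimeException(klass.getName() + " does not implement Consumer<"
                + elementType.getSimpleName() + ">");
    }

    public static class PrintInt implements Consumer<Integer> {
        @Override
        public void accept(Integer i) { System.out.println(i); }
    }

    public static void main(String[] args) {
        newCheckedConsumer(PrintInt.class, Integer.class).accept(42); // works
        try {
            newCheckedConsumer(PrintInt.class, String.class); // rejected up front
        } catch (RuntimeException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

A TypeLiteral-based version would generalize this beyond one fixed interface and handle inherited or nested type parameters, which is what the commons-lang3 utilities provide.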

 





[jira] [Updated] (KAFKA-8254) Suppress incorrectly passes a null topic to the serdes

2019-04-26 Thread Guozhang Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-8254:
-
Fix Version/s: 2.1.2

> Suppress incorrectly passes a null topic to the serdes
> --
>
> Key: KAFKA-8254
> URL: https://issues.apache.org/jira/browse/KAFKA-8254
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.0, 2.2.0, 2.1.1
>Reporter: John Roesler
>Assignee: John Roesler
>Priority: Major
> Fix For: 2.3.0, 2.1.2, 2.2.1
>
>
> For example, in KTableSuppressProcessor, we do:
> {noformat}
> final Bytes serializedKey = Bytes.wrap(keySerde.serializer().serialize(null, 
> key));
> {noformat}
> This violates the contract of Serializer (and Deserializer), and breaks 
> integration with known Serde implementations.
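To see why the null topic matters: many {{Serializer}} implementations (schema-registry-backed ones, for instance) use the topic name, e.g. to derive a per-topic subject. A minimal stand-in (class and interface names here are hypothetical, not Kafka's):

```java
import java.nio.charset.StandardCharsets;

public class NullTopicSketch {
    // Minimal stand-in for org.apache.kafka.common.serialization.Serializer.
    public interface Serializer<T> {
        byte[] serialize(String topic, T data);
    }

    // A topic-aware serializer: it derives a per-topic "subject" from the
    // topic name, so a null topic fails immediately.
    public static class TopicAwareSerializer implements Serializer<String> {
        @Override
        public byte[] serialize(String topic, String data) {
            String subject = topic.concat("-value"); // NPE when topic is null
            return (subject + ":" + data).getBytes(StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) {
        Serializer<String> s = new TopicAwareSerializer();
        s.serialize("events", "key"); // fine
        try {
            s.serialize(null, "key"); // what the suppress buffer was doing
        } catch (NullPointerException e) {
            System.out.println("NPE: serializer received a null topic");
        }
    }
}
```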





[jira] [Commented] (KAFKA-8254) Suppress incorrectly passes a null topic to the serdes

2019-04-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16827345#comment-16827345
 ] 

ASF GitHub Bot commented on KAFKA-8254:
---

guozhangwang commented on pull request #6641: KAFKA-8254: Pass Changelog as 
Topic in Suppress Serdes (#6602)
URL: https://github.com/apache/kafka/pull/6641
 
 
   
 



> Suppress incorrectly passes a null topic to the serdes
> --
>
> Key: KAFKA-8254
> URL: https://issues.apache.org/jira/browse/KAFKA-8254
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.0, 2.2.0, 2.1.1
>Reporter: John Roesler
>Assignee: John Roesler
>Priority: Major
> Fix For: 2.3.0, 2.2.1
>
>
> For example, in KTableSuppressProcessor, we do:
> {noformat}
> final Bytes serializedKey = Bytes.wrap(keySerde.serializer().serialize(null, 
> key));
> {noformat}
> This violates the contract of Serializer (and Deserializer), and breaks 
> integration with known Serde implementations.





[jira] [Updated] (KAFKA-8298) ConcurrentModificationException Possible when optimizing for repartition nodes

2019-04-26 Thread Guozhang Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-8298:
-
Description: 
As indicated in the title.

When processing multiple key-changing operations during the optimization phase 
a ConcurrentModificationException is possible. 

  was:As indicated in the title


> ConcurrentModificationException Possible when optimizing for repartition nodes
> --
>
> Key: KAFKA-8298
> URL: https://issues.apache.org/jira/browse/KAFKA-8298
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.0, 2.2.0, 2.3.0
>Reporter: Bill Bejeck
>Assignee: Bill Bejeck
>Priority: Major
> Fix For: 2.3.0, 2.1.2, 2.2.1
>
>
> As indicated in the title.
> When processing multiple key-changing operations during the optimization 
> phase a ConcurrentModificationException is possible. 
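The failure mode is the familiar one of mutating a collection while iterating it. This is not Streams' actual optimizer code, just a minimal reproduction of the pattern and the usual fix (iterate a snapshot, mutate the original):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.ConcurrentModificationException;
import java.util.List;

public class CmeSketch {
    public static void main(String[] args) {
        List<String> nodes = new ArrayList<>(Arrays.asList("a", "b", "c"));

        // Adding to the list while a for-each iterates it throws a CME, as
        // can happen when an optimization pass rewrites the node collection
        // it is currently walking.
        try {
            for (String n : nodes) {
                if (n.equals("b")) nodes.add("b-repartition");
            }
        } catch (ConcurrentModificationException e) {
            System.out.println("ConcurrentModificationException, as expected");
        }

        // The usual fix: iterate a snapshot so mutations of the original
        // collection cannot invalidate the live iterator.
        for (String n : new ArrayList<>(nodes)) {
            if (n.equals("c")) nodes.add("c-repartition");
        }
        System.out.println(nodes.contains("c-repartition")); // true
    }
}
```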





[jira] [Commented] (KAFKA-8298) ConcurrentModificationException Possible when optimizing for repartition nodes

2019-04-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16827322#comment-16827322
 ] 

ASF GitHub Bot commented on KAFKA-8298:
---

bbejeck commented on pull request #6643: KAFKA-8298: Fix possible concurrent 
modification exception
URL: https://github.com/apache/kafka/pull/6643
 
 
   When processing multiple key-changing operations during the optimization 
phase a `ConcurrentModificationException` is possible.  This PR fixes that 
situation.
   
   I've added a test that fails without this fix.
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 



> ConcurrentModificationException Possible when optimizing for repartition nodes
> --
>
> Key: KAFKA-8298
> URL: https://issues.apache.org/jira/browse/KAFKA-8298
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.0, 2.2.0, 2.3.0
>Reporter: Bill Bejeck
>Assignee: Bill Bejeck
>Priority: Major
> Fix For: 2.3.0, 2.1.2, 2.2.1
>
>
> As indicated in the title





[jira] [Created] (KAFKA-8298) ConcurrentModificationException Possible when optimizing for repartition nodes

2019-04-26 Thread Bill Bejeck (JIRA)
Bill Bejeck created KAFKA-8298:
--

 Summary: ConcurrentModificationException Possible when optimizing 
for repartition nodes
 Key: KAFKA-8298
 URL: https://issues.apache.org/jira/browse/KAFKA-8298
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.2.0, 2.1.0, 2.3.0
Reporter: Bill Bejeck
Assignee: Bill Bejeck
 Fix For: 2.3.0, 2.1.2, 2.2.1


As indicated in the title





[jira] [Resolved] (KAFKA-8029) Add in-memory bytes-only session store implementation

2019-04-26 Thread Guozhang Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-8029.
--
   Resolution: Fixed
Fix Version/s: 2.3.0

> Add in-memory bytes-only session store implementation
> -
>
> Key: KAFKA-8029
> URL: https://issues.apache.org/jira/browse/KAFKA-8029
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Sophie Blee-Goldman
>Priority: Major
>  Labels: needs-kip
> Fix For: 2.3.0
>
>
> As titled. We've added the in-memory window store implementation; what's left 
> now is the session store.





[jira] [Commented] (KAFKA-8029) Add in-memory bytes-only session store implementation

2019-04-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16827295#comment-16827295
 ] 

ASF GitHub Bot commented on KAFKA-8029:
---

guozhangwang commented on pull request #6525: KAFKA-8029: In memory session 
store
URL: https://github.com/apache/kafka/pull/6525
 
 
   
 



> Add in-memory bytes-only session store implementation
> -
>
> Key: KAFKA-8029
> URL: https://issues.apache.org/jira/browse/KAFKA-8029
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Sophie Blee-Goldman
>Priority: Major
>  Labels: needs-kip
>
> As titled. We've added the in-memory window store implementation; what's left 
> now is the session store.





[jira] [Resolved] (KAFKA-7862) Modify JoinGroup logic to incorporate group.instance.id change

2019-04-26 Thread Boyang Chen (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Boyang Chen resolved KAFKA-7862.

Resolution: Fixed

> Modify JoinGroup logic to incorporate group.instance.id change
> --
>
> Key: KAFKA-7862
> URL: https://issues.apache.org/jira/browse/KAFKA-7862
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
>
> Step one of the KIP-345 join-group logic change to cooperate with static 
> membership.





[jira] [Created] (KAFKA-8297) Kafka Streams ConsumerRecordFactory yields difficult compiler error about generics

2019-04-26 Thread Michael Drogalis (JIRA)
Michael Drogalis created KAFKA-8297:
---

 Summary: Kafka Streams ConsumerRecordFactory yields difficult 
compiler error about generics
 Key: KAFKA-8297
 URL: https://issues.apache.org/jira/browse/KAFKA-8297
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Michael Drogalis


When using the ConsumerRecordFactory, it's convenient to specify a default 
topic to create records with:

{code:java}
ConsumerRecordFactory<String, User> inputFactory =
    new ConsumerRecordFactory<>(inputTopic, keySerializer, valueSerializer);
{code}

However, when the factory is used to create a record with a String key:

{code:java}
inputFactory.create("any string", user)
{code}

Compilation fails with the following error:

{code:java}
Ambiguous method call. Both:

create(String, User) in ConsumerRecordFactory and
create(String, User) in ConsumerRecordFactory match
{code}

At first glance, this is a confusing error to see during compilation. What's 
happening is that there are two clashing signatures for `create`: create(K, V) 
and create(String, V), where the String parameter in the latter is a topic 
name. Because K is bound to String here, the call matches both.

It seems like fixing this would require breaking the existing interface. This 
is a really opaque problem to hit though, and it would be great if we could 
avoid having users encounter this.
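A stripped-down analogue of the clashing overloads (hypothetical classes, not the real kafka-streams-test-utils types) shows both the collision and the usual workaround of calling an overload where the topic is always explicit:

```java
// Hypothetical stand-in for ConsumerRecordFactory<K, V>: when K is bound to
// String, the two two-argument overloads become indistinguishable.
class RecordFactoryDemo<K, V> {
    // create(K, V): uses the default topic configured at construction time
    String create(K key, V value) { return "default-topic:" + key + "=" + value; }
    // create(String, V): treats the first argument as an explicit topic name
    String create(String topicName, V value) { return topicName + ":null=" + value; }
    // create(String, K, V): unambiguous, since the topic is always explicit
    String create(String topicName, K key, V value) { return topicName + ":" + key + "=" + value; }
}

public class AmbiguityDemo {
    static String disambiguated() {
        RecordFactoryDemo<String, Integer> factory = new RecordFactoryDemo<>();
        // factory.create("any string", 1);  // would NOT compile: both
        // create(K, V) and create(String, V) match when K = String.
        return factory.create("input-topic", "any string", 1);
    }

    public static void main(String[] args) {
        System.out.println(disambiguated());
    }
}
```
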





[jira] [Created] (KAFKA-8296) Kafka Streams branch method raises type warnings

2019-04-26 Thread Michael Drogalis (JIRA)
Michael Drogalis created KAFKA-8296:
---

 Summary: Kafka Streams branch method raises type warnings
 Key: KAFKA-8296
 URL: https://issues.apache.org/jira/browse/KAFKA-8296
 Project: Kafka
  Issue Type: Bug
  Components: streams
Reporter: Michael Drogalis


Because the branch method in the DSL takes varargs, using it as follows raises 
an unchecked type warning:

{code:java}
KStream<String, User>[] branches = builder.stream(inputTopic)
    .branch((key, user) -> "united states".equals(user.getCountry()),
            (key, user) -> "germany".equals(user.getCountry()),
            (key, user) -> "mexico".equals(user.getCountry()),
            (key, user) -> true);
{code}

The compiler warns with:

{code:java}
Warning:(39, 24) java: unchecked generic array creation for varargs parameter 
of type org.apache.kafka.streams.kstream.Predicate<String, User>[]
{code}

This is unfortunate, but it is a consequence of how Java generics interact with 
varargs: the compiler must create a generic array at the call site, which is 
inherently unchecked. We could possibly fix this by adding the @SafeVarargs 
annotation to the branch method signatures.
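A minimal sketch of the mechanism, using a simplified, hypothetical stand-in for branch (not the real KStream API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class SafeVarargsDemo {
    // Without @SafeVarargs, every call site of this generic varargs method
    // gets an "unchecked generic array creation" warning, because the
    // compiler has to materialize a Predicate<V>[] array behind the scenes.
    @SafeVarargs
    static <V> List<List<V>> branch(List<V> input, Predicate<V>... predicates) {
        List<List<V>> branches = new ArrayList<>();
        for (int i = 0; i < predicates.length; i++) {
            branches.add(new ArrayList<>());
        }
        // Each value goes to the first branch whose predicate matches.
        for (V value : input) {
            for (int i = 0; i < predicates.length; i++) {
                if (predicates[i].test(value)) {
                    branches.get(i).add(value);
                    break;
                }
            }
        }
        return branches;
    }

    public static void main(String[] args) {
        List<List<String>> branches = branch(
                List.of("united states", "germany", "mexico"),
                "united states"::equals,
                country -> true);
        System.out.println(branches);
    }
}
```

One caveat worth noting: @SafeVarargs is only permitted on methods that cannot be overridden (static, final, or private), so applying it to an interface method like KStream#branch may not be straightforward.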






[jira] [Created] (KAFKA-8295) Optimize count() using RocksDB merge operator

2019-04-26 Thread Sophie Blee-Goldman (JIRA)
Sophie Blee-Goldman created KAFKA-8295:
--

 Summary: Optimize count() using RocksDB merge operator
 Key: KAFKA-8295
 URL: https://issues.apache.org/jira/browse/KAFKA-8295
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Sophie Blee-Goldman


In addition to the regular put/get/delete operations, RocksDB provides a 
fourth: merge. This essentially collapses the read/update/write path into a 
single optimized operation. One of the built-in (C++) merge operators exposed 
over the Java API is a counter. We should be able to leverage this for a more 
efficient implementation of count().

 

(Note: unfortunately, it seems unlikely we can use this to optimize general 
aggregations, even if RocksJava allowed a custom merge operator, unless we 
provide a way for the user to specify and connect a C++-implemented aggregator; 
otherwise we incur too much cost crossing the JNI for a net performance 
benefit.)
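As a conceptual sketch only: java.util.Map#merge has the same shape as the RocksDB merge operation, collapsing the get() + update + put() cycle into one logical call (this is an analogy, not the RocksDB API; in RocksJava the built-in counter is, if I recall correctly, selected by name, e.g. via Options#setMergeOperatorName).

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MergeCounterDemo {
    // One logical operation per key instead of a separate read before each
    // write; this is the shape a native counter merge operator provides
    // inside the storage engine itself.
    static long countOccurrences(List<String> keys, String target) {
        Map<String, Long> store = new HashMap<>();
        for (String key : keys) {
            store.merge(key, 1L, Long::sum);
        }
        return store.getOrDefault(target, 0L);
    }

    public static void main(String[] args) {
        System.out.println(countOccurrences(List.of("a", "b", "a", "a"), "a"));
    }
}
```
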





[jira] [Commented] (KAFKA-7862) Modify JoinGroup logic to incorporate group.instance.id change

2019-04-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16827209#comment-16827209
 ] 

ASF GitHub Bot commented on KAFKA-7862:
---

guozhangwang commented on pull request #6177: KAFKA-7862 & KIP-345 part-one: 
Add static membership logic to JoinGroup protocol
URL: https://github.com/apache/kafka/pull/6177
 
 
   
 



> Modify JoinGroup logic to incorporate group.instance.id change
> --
>
> Key: KAFKA-7862
> URL: https://issues.apache.org/jira/browse/KAFKA-7862
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
>
> Step one of the KIP-345 join-group logic change to cooperate with static 
> membership.





[jira] [Commented] (KAFKA-8294) Batch StopReplica requests with partition deletion and add test cases

2019-04-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16827164#comment-16827164
 ] 

ASF GitHub Bot commented on KAFKA-8294:
---

hachikuji commented on pull request #6642: KAFKA-8294; Batch StopReplica 
requests when possible and improve test coverage
URL: https://github.com/apache/kafka/pull/6642
 
 
   The main problem we are trying to solve here is the batching of StopReplica 
requests and the lack of test coverage for `ControllerChannelManager`. 
Addressing the first problem was straightforward, but the second problem 
required quite a bit of work because of the dependence on `KafkaController` for 
all of the events. It seemed to make sense to separate the events from the 
processing of events so that we could remove this dependence and improve 
testability. With the refactoring, I was able to add test cases covering most 
of the logic in `ControllerChannelManager` including the generation of requests 
and the expected response handling logic. Note that I have not actually changed 
any of the event handling logic in `KafkaController`.
   
   While refactoring this logic, I found that the event queue time metric was 
not being correctly computed. The problem is that many of the controller events 
were singleton objects which inherited the `enqueueTimeMs` field from the 
`ControllerEvent` trait. This would never get updated, so queue time would be 
skewed.
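A minimal sketch (hypothetical names, not the actual controller code) of how a mutable field on a singleton event object skews the measurement: every enqueue shares the same field, so timestamps clobber each other instead of being tracked per enqueue.

```java
// Mutable state on the base type is shared whenever the subclass is a singleton.
abstract class ControllerEventSketch {
    long enqueueTimeMs;
}

final class TopicChangeEvent extends ControllerEventSketch {
    static final TopicChangeEvent INSTANCE = new TopicChangeEvent(); // singleton
    private TopicChangeEvent() { }
}

public class QueueTimeDemo {
    // Enqueuing the "same" singleton twice overwrites the first timestamp,
    // so the queue time computed for the first enqueue is wrong.
    static long firstEnqueueTimestamp() {
        ControllerEventSketch first = TopicChangeEvent.INSTANCE;
        first.enqueueTimeMs = 100L;
        ControllerEventSketch second = TopicChangeEvent.INSTANCE;
        second.enqueueTimeMs = 250L; // silently clobbers first.enqueueTimeMs
        return first.enqueueTimeMs;  // 250, not the expected 100
    }

    public static void main(String[] args) {
        System.out.println(firstEnqueueTimestamp());
    }
}
```
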
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 



> Batch StopReplica requests with partition deletion and add test cases
> -
>
> Key: KAFKA-8294
> URL: https://issues.apache.org/jira/browse/KAFKA-8294
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Major
>
> One of the tricky aspects we found in KAFKA-8237 is the batching of the 
> StopReplica requests. We should have test cases covering expected behavior so 
> that we do not introduce regressions and we should make the batching 
> consistent whether or not `deletePartitions` is set.





[jira] [Created] (KAFKA-8294) Batch StopReplica requests with partition deletion and add test cases

2019-04-26 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-8294:
--

 Summary: Batch StopReplica requests with partition deletion and 
add test cases
 Key: KAFKA-8294
 URL: https://issues.apache.org/jira/browse/KAFKA-8294
 Project: Kafka
  Issue Type: Improvement
Reporter: Jason Gustafson
Assignee: Jason Gustafson


One of the tricky aspects we found in KAFKA-8237 is the batching of the 
StopReplica requests. We should have test cases covering expected behavior so 
that we do not introduce regressions and we should make the batching consistent 
whether or not `deletePartitions` is set.





[jira] [Commented] (KAFKA-7631) NullPointerException when SCRAM is allowed but ScramLoginModule is not in broker's jaas.conf

2019-04-26 Thread koert kuipers (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16827153#comment-16827153
 ] 

koert kuipers commented on KAFKA-7631:
--

I fixed this by adding both Krb5LoginModule and ScramLoginModule to the 
broker's jaas.conf.

> NullPointerException when SCRAM is allowed but ScramLoginModule is not in 
> broker's jaas.conf
> ---
>
> Key: KAFKA-7631
> URL: https://issues.apache.org/jira/browse/KAFKA-7631
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.0.0
>Reporter: Andras Beni
>Assignee: Viktor Somogyi-Vass
>Priority: Minor
>
> When a user wants to use delegation tokens and lists {{SCRAM}} in 
> {{sasl.enabled.mechanisms}}, but does not add {{ScramLoginModule}} to the 
> broker's JAAS configuration, a null pointer exception is thrown on the broker 
> side and the connection is closed.
> A meaningful error message should be logged and sent back to the client instead.
> {code}
> java.lang.NullPointerException
> at 
> org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.handleSaslToken(SaslServerAuthenticator.java:376)
> at 
> org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.authenticate(SaslServerAuthenticator.java:262)
> at 
> org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:127)
> at 
> org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:489)
> at org.apache.kafka.common.network.Selector.poll(Selector.java:427)
> at kafka.network.Processor.poll(SocketServer.scala:679)
> at kafka.network.Processor.run(SocketServer.scala:584)
> at java.lang.Thread.run(Thread.java:748)
> {code}
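A sketch of what the fix mentioned in the comment above might look like in the broker's jaas.conf, with both login modules present (the keytab path and principal are placeholders, not values from this report):

```
KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
        useKeyTab=true
        storeKey=true
        keyTab="/path/to/kafka.keytab"
        principal="kafka/broker.example.com@EXAMPLE.COM";

    org.apache.kafka.common.security.scram.ScramLoginModule required;
};
```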





[jira] [Comment Edited] (KAFKA-5998) /.checkpoint.tmp Not found exception

2019-04-26 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16827123#comment-16827123
 ] 

Ted Yu edited comment on KAFKA-5998 at 4/26/19 5:31 PM:


https://pastebin.com/vyvQ8pkF shows what I mentioned earlier.

In StateDirectory#cleanRemovedTasks, we use both last modification time of the 
directory and whether the task has been committed as criteria. When both are 
satisfied, we clean the directory for the task.
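The two criteria can be paraphrased as a single predicate (hypothetical names and signature; a sketch of the logic described above, not the actual StateDirectory code):

```java
public class CleanupCriteriaDemo {
    // A task directory is only removed when it is old enough (last
    // modification time) AND its task has been committed.
    static boolean shouldCleanTaskDir(long nowMs, long lastModifiedMs,
                                      long cleanupDelayMs, boolean taskCommitted) {
        boolean oldEnough = nowMs - lastModifiedMs > cleanupDelayMs;
        return oldEnough && taskCommitted;
    }

    public static void main(String[] args) {
        System.out.println(shouldCleanTaskDir(10_000L, 1_000L, 5_000L, true));  // old and committed
        System.out.println(shouldCleanTaskDir(10_000L, 1_000L, 5_000L, false)); // old but not committed
    }
}
```
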


was (Author: yuzhih...@gmail.com):
https://pastebin.com/Y247UZgb shows what I mentioned earlier (incomplete).

In StateDirectory#cleanRemovedTasks, we use both last modification time of the 
directory and whether the task has been committed as criteria. When both are 
satisfied, we clean the directory for the task.

> /.checkpoint.tmp Not found exception
> 
>
> Key: KAFKA-5998
> URL: https://issues.apache.org/jira/browse/KAFKA-5998
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0, 0.11.0.1, 2.1.1
>Reporter: Yogesh BG
>Priority: Critical
> Attachments: 5998.v1.txt, 5998.v2.txt, Topology.txt, exc.txt, 
> props.txt, streams.txt
>
>
> I have one Kafka broker and one Kafka Streams app running. They have been 
> running for two days under a load of around 2500 msgs per second. On the 
> third day I started getting the exception below for some of the partitions; 
> I have 16 partitions, and only 0_0 and 0_1 give this error:
> {{09:43:25.955 [ks_0_inst-StreamThread-6] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:267)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:201)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:260)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:254)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks$1.apply(AssignedTasks.java:322)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.applyToRunningTasks(AssignedTasks.java:415)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.commit(AssignedTasks.java:314)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.commitAll(StreamThread.java:700)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:683)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:523)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> 09:43:25.974 [ks_0_inst-StreamThread-15] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileO

[jira] [Comment Edited] (KAFKA-5998) /.checkpoint.tmp Not found exception

2019-04-26 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16827123#comment-16827123
 ] 

Ted Yu edited comment on KAFKA-5998 at 4/26/19 5:12 PM:


https://pastebin.com/Y247UZgb shows what I mentioned earlier (incomplete).

In StateDirectory#cleanRemovedTasks, we use both last modification time of the 
directory and whether the task has been committed as criteria. When both are 
satisfied, we clean the directory for the task.


was (Author: yuzhih...@gmail.com):
https://pastebin.com/Y247UZgb shows what I mentioned earlier.
In StateDirectory#cleanRemovedTasks, we don't rely on last modification time - 
we check whether the task has been committed.

> /.checkpoint.tmp Not found exception
> 
>
> Key: KAFKA-5998
> URL: https://issues.apache.org/jira/browse/KAFKA-5998
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0, 0.11.0.1, 2.1.1
>Reporter: Yogesh BG
>Priority: Critical
> Attachments: 5998.v1.txt, 5998.v2.txt, Topology.txt, exc.txt, 
> props.txt, streams.txt
>
>
> I have one Kafka broker and one Kafka Streams app running. They have been 
> running for two days under a load of around 2500 msgs per second. On the 
> third day I started getting the exception below for some of the partitions; 
> I have 16 partitions, and only 0_0 and 0_1 give this error:
> {{09:43:25.955 [ks_0_inst-StreamThread-6] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:267)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:201)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:260)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:254)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks$1.apply(AssignedTasks.java:322)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.applyToRunningTasks(AssignedTasks.java:415)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.commit(AssignedTasks.java:314)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.commitAll(StreamThread.java:700)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:683)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:523)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> 09:43:25.974 [ks_0_inst-StreamThread-15] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(F

[jira] [Commented] (KAFKA-7847) KIP-421: Automatically resolve external configurations.

2019-04-26 Thread TEJAL ADSUL (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16827128#comment-16827128
 ] 

TEJAL ADSUL commented on KAFKA-7847:


[~rhauch] [~cmccabe] [~rsivaram] Could you all please review the PR 
[https://github.com/apache/kafka/pull/6467]?

> KIP-421: Automatically resolve external configurations.
> ---
>
> Key: KAFKA-7847
> URL: https://issues.apache.org/jira/browse/KAFKA-7847
> Project: Kafka
>  Issue Type: Improvement
>  Components: config
>Reporter: TEJAL ADSUL
>Priority: Minor
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> This proposal intends to enhance the AbstractConfig base class to support 
> replacing variables in configurations just prior to parsing and validation. 
> This simple change will make it very easy for client applications, Kafka 
> Connect, and Kafka Streams to use shared code to easily incorporate 
> externalized secrets and other variable replacements within their 
> configurations. 
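A hedged sketch of the variable-reference syntax involved (the KIP-297-style `${provider:path:key}` form that KIP-421 proposes AbstractConfig resolve automatically); the provider name, file path, and key are illustrative:

```
config.providers=file
config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider
ssl.keystore.password=${file:/etc/kafka/secrets.properties:keystore.password}
```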





[jira] [Commented] (KAFKA-5998) /.checkpoint.tmp Not found exception

2019-04-26 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16827123#comment-16827123
 ] 

Ted Yu commented on KAFKA-5998:
---

https://pastebin.com/Y247UZgb shows what I mentioned earlier.
In StateDirectory#cleanRemovedTasks, we don't rely on last modification time - 
we check whether the task has been committed.

> /.checkpoint.tmp Not found exception
> 
>
> Key: KAFKA-5998
> URL: https://issues.apache.org/jira/browse/KAFKA-5998
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0, 0.11.0.1, 2.1.1
>Reporter: Yogesh BG
>Priority: Critical
> Attachments: 5998.v1.txt, 5998.v2.txt, Topology.txt, exc.txt, 
> props.txt, streams.txt
>
>
> I have one Kafka broker and one Kafka Streams app running. They have been 
> running for two days under a load of around 2500 msgs per second. On the 
> third day I started getting the exception below for some of the partitions; 
> I have 16 partitions, and only 0_0 and 0_1 give this error:
> {{09:43:25.955 [ks_0_inst-StreamThread-6] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:267)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:201)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:260)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:254)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks$1.apply(AssignedTasks.java:322)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.applyToRunningTasks(AssignedTasks.java:415)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.commit(AssignedTasks.java:314)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.commitAll(StreamThread.java:700)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:683)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:523)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> 09:43:25.974 [ks_0_inst-StreamThread-15] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
>  ~[rtp-kafkastreams-1.0-

[jira] [Commented] (KAFKA-8254) Suppress incorrectly passes a null topic to the serdes

2019-04-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16827118#comment-16827118
 ] 

ASF GitHub Bot commented on KAFKA-8254:
---

vvcephei commented on pull request #6641: KAFKA-8254: Pass Changelog as Topic 
in Suppress Serdes (#6602)
URL: https://github.com/apache/kafka/pull/6641
 
 
   Cherry-picked from #6602 
   
   Reviewers:  Matthias J. Sax , Guozhang Wang 

   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   
 



> Suppress incorrectly passes a null topic to the serdes
> --
>
> Key: KAFKA-8254
> URL: https://issues.apache.org/jira/browse/KAFKA-8254
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.0, 2.2.0, 2.1.1
>Reporter: John Roesler
>Assignee: John Roesler
>Priority: Major
> Fix For: 2.3.0, 2.2.1
>
>
> For example, in KTableSuppressProcessor, we do:
> {noformat}
> final Bytes serializedKey = Bytes.wrap(keySerde.serializer().serialize(null, 
> key));
> {noformat}
> This violates the contract of Serializer (and Deserializer), and breaks 
> integration with known Serde implementations.
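Why a null topic breaks topic-aware serdes can be shown with a self-contained stand-in (not the real Kafka Serializer interface; the schema-lookup serializer here is hypothetical): many real serdes resolve per-topic state, such as a schema, from the topic argument, and fail as soon as it is null.

```java
import java.util.Map;

public class NullTopicDemo {
    // Minimal stand-in for Serializer#serialize(String topic, T data).
    interface SimpleSerializer<T> {
        byte[] serialize(String topic, T data);
    }

    static boolean topicAwareSerdeBreaksOnNullTopic() {
        Map<String, String> schemaByTopic = Map.of("orders", "orders-schema-v1");
        // A topic-aware serde: resolves per-topic configuration first.
        SimpleSerializer<String> serializer = (topic, data) ->
                (schemaByTopic.get(topic) + ":" + data).getBytes();
        try {
            serializer.serialize(null, "key"); // Map.of(...).get(null) throws NPE
            return false;
        } catch (NullPointerException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(topicAwareSerdeBreaksOnNullTopic());
    }
}
```
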





[jira] [Resolved] (KAFKA-8254) Suppress incorrectly passes a null topic to the serdes

2019-04-26 Thread Guozhang Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-8254.
--
   Resolution: Fixed
Fix Version/s: (was: 2.1.2)

Resolved for trunk / 2.2 now, will continue to fix 2.1 soon.



[jira] [Commented] (KAFKA-8254) Suppress incorrectly passes a null topic to the serdes

2019-04-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16827106#comment-16827106
 ] 

ASF GitHub Bot commented on KAFKA-8254:
---

guozhangwang commented on pull request #6602: KAFKA-8254: Pass Changelog as 
Topic in Suppress Serdes
URL: https://github.com/apache/kafka/pull/6602
 
 
   
 



[jira] [Commented] (KAFKA-5998) /.checkpoint.tmp Not found exception

2019-04-26 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16827089#comment-16827089
 ] 

Ted Yu commented on KAFKA-5998:
---

[~mparthas]:
From your comment on Apr 8th, it seems that making the value for 
"state.cleanup.delay.ms" longer didn't prevent .checkpoint.tmp from disappearing.

Did you see log similar to the following prior to the error ?
{code}
April 25th 2019, 21:03:49.332 2019-04-25 21:03:49,332 INFO 
[org.apache.kafka.streams.processor.internals.StateDirectory] 
(application-fde85da6-9d2f-4457-8bdb-ea1c78c8c1e2-CleanupThread) - 
[short-component-name:; transaction-id:; user-id:; creation-time:] 
stream-thread [application-fde85da6-9d2f-4457-8bdb-ea1c78c8c1e2-CleanupThread] 
Deleting obsolete state directory 1_1 for task 1_1 as 813332ms has elapsed 
(cleanup delay is 60ms).
{code}
If so, that might indicate the new value for "state.cleanup.delay.ms" was still 
too short.
If not, there could be another reason for the problem.

In case the log has been swapped out, please keep an eye out for future occurrences.
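If the delay really was too short, raising it is a plain config change. A minimal sketch follows; the 10-minute value is an illustrative assumption, not a recommendation from this thread, and only the property key itself comes from Streams:

```java
import java.util.Properties;

public class Main {
    public static void main(String[] args) {
        // "state.cleanup.delay.ms" controls how long the CleanupThread waits
        // before deleting an obsolete task state directory. If it deletes the
        // directory while a task is still committing, writing
        // .checkpoint.tmp fails with FileNotFoundException, as in this ticket.
        Properties props = new Properties();
        props.put("state.cleanup.delay.ms", String.valueOf(10 * 60 * 1000L));
        System.out.println(props.getProperty("state.cleanup.delay.ms"));
    }
}
```

In a real application these properties would be passed to the KafkaStreams constructor; the sketch only shows the key/value being set.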

> /.checkpoint.tmp Not found exception
> 
>
> Key: KAFKA-5998
> URL: https://issues.apache.org/jira/browse/KAFKA-5998
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0, 0.11.0.1, 2.1.1
>Reporter: Yogesh BG
>Priority: Critical
> Attachments: 5998.v1.txt, 5998.v2.txt, Topology.txt, exc.txt, 
> props.txt, streams.txt
>
>
> I have one Kafka broker and one Kafka Streams app running. They have been 
> running for two days under a load of around 2500 msgs per second. On the 
> third day I started getting the below exception for some of the partitions. 
> I have 16 partitions; only 0_0 and 0_1 give this error:
> {{09:43:25.955 [ks_0_inst-StreamThread-6] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:267)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:201)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:260)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:254)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks$1.apply(AssignedTasks.java:322)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.applyToRunningTasks(AssignedTasks.java:415)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.commit(AssignedTasks.java:314)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.commitAll(StreamThread.java:700)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:683)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:523)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> 09:43:25.974 [ks_0_inst-StreamThread-15] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafk

[jira] [Commented] (KAFKA-5998) /.checkpoint.tmp Not found exception

2019-04-26 Thread Jingguo Yao (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16827025#comment-16827025
 ] 

Jingguo Yao commented on KAFKA-5998:


We are using Confluent 5.1.0 which is based on kafka_2.11-2.1.1.


[jira] [Created] (KAFKA-8293) Messages undelivered when small quotas applied

2019-04-26 Thread Kirill Kulikov (JIRA)
Kirill Kulikov created KAFKA-8293:
-

 Summary: Messages undelivered when small quotas applied 
 Key: KAFKA-8293
 URL: https://issues.apache.org/jira/browse/KAFKA-8293
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 2.1.1
Reporter: Kirill Kulikov


I observe strange Kafka behavior when using small quotas.

For example, I set quotas for the consumer like:
{code:java}
kafka-configs --zookeeper zookeeper:2181 --entity-type users --entity-name 
kafka --alter --add-config 'producer_byte_rate=2048000, 
consumer_byte_rate=256'{code}
 

If I send a small batch of messages as:
{code:java}
kafka-producer-perf-test --producer.config /etc/kafka/consumer.properties 
--producer-props acks=-1 compression.type=none bootstrap.servers=kafka:9093 
--num-records 10 --record-size 20 --throughput 1000 --print-metrics --topic 
test 
{code}
they go through without problems.

But when the batch is bigger:
{code:java}
kafka-producer-perf-test --producer.config /etc/kafka/consumer.properties 
--producer-props acks=-1 compression.type=none bootstrap.servers=kafka:9093 
--num-records 100 --record-size 20 --throughput 1000 --print-metrics --topic 
test
{code}
... I do not get any messages on the consumer side *at all*.

On the other hand, if the `kafka-producer-perf-test` throughput is limited like:
{code:java}
kafka-producer-perf-test --producer.config /etc/kafka/consumer.properties 
--producer-props acks=-1 compression.type=none bootstrap.servers=kafka:9093 
--num-records 1000 --record-size 10 --throughput 10 --print-metrics --topic test
{code}
I can see only the first 20-30 messages in `kafka-console-consumer`. But then 
it gets stuck (perhaps throttled) and the other queued messages never come through.
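A back-of-envelope check (my own arithmetic, not broker internals) suggests this looks like expected throttling rather than message loss: at consumer_byte_rate=256, even the raw payload bytes of the 100-record batch take several seconds to drain, and fetch-response overhead makes it far worse:

```java
public class Main {
    public static void main(String[] args) {
        // Quota set in the reproduction above: 256 bytes/sec for the consumer.
        long quotaBytesPerSec = 256;
        // 100 records of 20 bytes each: payload alone, ignoring record and
        // fetch-response overhead, which in practice dominates at this scale.
        long batchBytes = 100 * 20;
        // Whole seconds needed just to drain the payload under the quota.
        long secondsToDrain = batchBytes / quotaBytesPerSec;
        System.out.println(secondsToDrain);
    }
}
```

So a consumer under this quota would sit throttled for long stretches, which can easily read as messages "never" arriving during a short test.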





[jira] [Commented] (KAFKA-7631) NullPointerException when SCRAM is allowed but ScramLoginModule is not in broker's jaas.conf

2019-04-26 Thread koert kuipers (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16827023#comment-16827023
 ] 

koert kuipers commented on KAFKA-7631:
--

It is not clear to me how I should handle this situation. This Jira seems to 
focus on a better error message, presumably pointing to the fact that the 
broker's JAAS configuration file does not have ScramLoginModule. But that is 
intentional in my case: I want to use Kerberos for broker authentication.

Should it work for me without ScramLoginModule in the broker's jaas.conf?
Or do I need to add both Krb5LoginModule and ScramLoginModule to the broker's jaas.conf?
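For what it's worth, a broker jaas.conf can list several login modules in the same section. Below is a hedged sketch of that shape; the keytab path and principal are placeholders, and whether this combination is the intended fix for this setup is exactly the open question above:

```
KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka.keytab"
    principal="kafka/broker.example.com@EXAMPLE.COM";

    org.apache.kafka.common.security.scram.ScramLoginModule required;
};
```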

> NullPointerException when SCRAM is allowed but ScramLoginModule is not in 
> broker's jaas.conf
> ---
>
> Key: KAFKA-7631
> URL: https://issues.apache.org/jira/browse/KAFKA-7631
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.0.0
>Reporter: Andras Beni
>Assignee: Viktor Somogyi-Vass
>Priority: Minor
>
> When a user wants to use delegation tokens and lists {{SCRAM}} in 
> {{sasl.enabled.mechanisms}}, but does not add {{ScramLoginModule}} to the 
> broker's JAAS configuration, a null pointer exception is thrown on the broker 
> side and the connection is closed.
> A meaningful error message should be logged and sent back to the client.
> {code}
> java.lang.NullPointerException
> at 
> org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.handleSaslToken(SaslServerAuthenticator.java:376)
> at 
> org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.authenticate(SaslServerAuthenticator.java:262)
> at 
> org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:127)
> at 
> org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:489)
> at org.apache.kafka.common.network.Selector.poll(Selector.java:427)
> at kafka.network.Processor.poll(SocketServer.scala:679)
> at kafka.network.Processor.run(SocketServer.scala:584)
> at java.lang.Thread.run(Thread.java:748)
> {code}





[jira] [Comment Edited] (KAFKA-8282) Missing JMX bandwidth quota metrics for Produce and Fetch

2019-04-26 Thread JMVM (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826951#comment-16826951
 ] 

JMVM edited comment on KAFKA-8282 at 4/26/19 2:23 PM:
--

Hi [~wushujames]

We already have broker quota configuration in place:

{code:java}
quota.consumer.default=9223372036854775807
quota.producer.default=9223372036854775807
quota.window.num=11
quota.window.size.seconds=1{code}

But I found they are deprecated, and that's why the quotas were unlimited and 
the JMX metrics were not showing.

I altered the config via kafka-configs.sh, setting producer_byte_rate and 
consumer_byte_rate for users and clients, and now I see the JMX metrics as 
expected.

Thanks
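A quick sanity check of that observation: the 9223372036854775807 in those old defaults is simply Long.MAX_VALUE, i.e. effectively no quota at all, which is consistent with the quota metrics never appearing:

```java
public class Main {
    public static void main(String[] args) {
        // The legacy quota.consumer.default / quota.producer.default values
        // quoted above equal Long.MAX_VALUE, meaning "unlimited".
        System.out.println(Long.MAX_VALUE);
    }
}
```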







> Missing JMX bandwidth quota metrics for Produce and Fetch
> -
>
> Key: KAFKA-8282
> URL: https://issues.apache.org/jira/browse/KAFKA-8282
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.1.1
>Reporter: JMVM
>Priority: Major
> Attachments: Screen Shot 2019-04-23 at 20.59.21.png
>
>
> Recently I performed several *rolling upgrades following official steps* for 
> our Kafka brokers *from 0.11.0.1 to newer versions in different 
> environments*, and they apparently worked fine in all cases from a functional 
> point of view: *producers and consumers working as expected*. 
> Specifically, I upgraded:
>  # From *0.11.0.1 to 1.0.0*, then from *1.0.0 to 2.0.0*, and then *to* 
> *2.1.1*
>  # *From 0.11.0.1 directly to 2.1.1*
> However, in all cases the *JMX bandwidth quota metrics for Fetch and Produce*, 
> which used to show *all producers and consumers working with brokers*, are 
> gone; only queue-size is shown in our *JMX monitoring client, specifically 
> Introscope Wily*, *with the same configuration* (see attached image).
> In fact, I removed the Wily filter configuration for JMX in *order to show all 
> possible metrics, and both Fetch and Produce are still gone*.
> Note I checked whether the proper version was in place after the rolling 
> upgrade, for example for *2.1.1*, and it is as expected:
> *ll /opt/kafka/libs/*
> *total 54032*
> *-rw-r--r-- 1 kafka kafka    69409 Jan  4 08:42 activation-1.1.1.jar*
> *-rw-r--r-- 1 kafka kafka    14768 Jan  4 08:42 
> aopalliance-repackaged-2.5.0-b42.jar*
> *-rw-r--r-- 1 kafka kafka    90347 Jan  4 08:42 argparse4j-0.7.0.jar*
> *-rw-r--r-- 1 kafka kafka    20437 Jan  4 08:40 
> audience-annotations-0.5.0.jar*
> *-rw-r--r-- 1 kafka kafka   501879 Jan  4 08:43 commons-lang3-3.8.1.jar*
> *-rw-r--r-- 1 kafka kafka    96801 Feb  8 18:32 connect-api-2.1.1.jar*
> *-rw-r--r-- 1 kafka kafka    18265 Feb  8 18:32 
> connect-basic-auth-extension-2.1.1.jar*
> *-rw-r--r-- 1 kafka kafka    20509 Feb  8 18:32 connect-file-2.1.1.jar*
> *-rw-r--r-- 1 kafka kafka    45489 Feb  8 18:32 connect-json-2.1.1.jar*
> *-rw-r--r-- 1 kafka kafka   466588 Feb  8 18:32 connect-runtime-2.1.1.jar*
> *-rw-r--r-- 1 kafka kafka    90358 Feb  8 18:32 connect-transforms-2.1.1.jar*
> *-rw-r--r-- 1 kafka kafka  2442625 Jan  4 08:43 guava-20.0.jar*
> *-rw-r--r-- 1 kafka kafka   186763 Jan  4 08:42 hk2-api-2.5.0-b42.jar*
> *-rw-r--r-- 1 kafka kafka   189454 Jan  4 08:42 hk2-locator-2.5.0-b42.jar*
> *-rw-r--r-- 1 kafka kafka   135317 Jan  4 08:42 hk2-utils-2.5.0-b42.jar*
> *-rw-r--r-- 1 kafka kafka    66894 Jan 11 21:28 jackson-annotations-2.9.8.jar*
> *-rw-r--r-- 1 kafka kafka   325619 Jan 11 21:27 jackson-core-2.9.8.jar*
> *-rw-r--r-- 1 kafka kafka  1347236 Jan 11 21:27 jackson-databind-2.9.8.jar*
> *-rw-r--r-- 1 kafka kafka    32373 Jan 11 21:28 jackson-jaxrs-base-2.9.8.jar*
> *-rw-r--r-- 1 kafka kafka    15861 Jan 11 21:28 
> jackson-jaxrs-json-provider-2.9.8.jar*
> *-rw-r--r-- 1 kafka kafka    32627 Jan 11 21:28 
> jackson-module-jaxb-annotations-2.9.8.jar*
> *-rw-r--r-- 1 kafka kafka   737884 Jan  4 08:43 javassist-3.22.0-CR2.jar*
> *-rw-r--r-- 1 kafka kafka    26366 Jan  4 08:42 javax.annotation-api-1.2.jar*
> *-rw-r--r-- 1 kafka kafka 2497 Jan  4 08:42 javax.inject-1.jar*
> *-rw-r--r-- 1 kafka kafka 5951 Jan  4 08:42 javax.inject-2.5.0-b42.jar*
> *-rw-r--r-- 1 kafka kafka    95806 Jan  4 08:42 javax.servlet-api-3.1.0.jar*
> *-rw-r--r-- 1 kafka kafka   126898 Jan  4 08:42 javax.ws.rs-api-2.1.1.jar*
> *-rw-r--r-- 1 kafka kafka   127509 Jan  4 08:42 javax.ws.rs-api-2.1.jar*
> *-rw-r--r-- 1 kafka kafka   125632 Jan  4 


[jira] [Comment Edited] (KAFKA-8282) Missing JMX bandwidth quota metrics for Produce and Fetch

2019-04-26 Thread JMVM (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826951#comment-16826951
 ] 

JMVM edited comment on KAFKA-8282 at 4/26/19 2:24 PM:
--

Hi [~wushujames]

We already have broker quota configuration in place:

{code:java}
quota.consumer.default=9223372036854775807
quota.producer.default=9223372036854775807
quota.window.num=11
quota.window.size.seconds=1{code}

But I found they are deprecated, and maybe that's why the quotas were unlimited 
and the JMX metrics were not showing.

I altered the config via kafka-configs.sh, setting producer_byte_rate and 
consumer_byte_rate for users and clients, and now I see the JMX metrics as 
expected.

Could you confirm my assumption? If so, you can resolve the ticket.

Thanks








[jira] [Commented] (KAFKA-8282) Missing JMX bandwidth quota metrics for Produce and Fetch

2019-04-26 Thread JMVM (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826951#comment-16826951
 ] 

JMVM commented on KAFKA-8282:
-

Hi [~wushujames]

We already have broker quota configuration in place:

{code:java}
quota.consumer.default=9223372036854775807
quota.producer.default=9223372036854775807
quota.window.num=11
quota.window.size.seconds=1{code}

But I think they are deprecated, and that's why the quotas were unlimited and 
the JMX metrics were not showing.

I altered the config via kafka-configs.sh, setting producer_byte_rate and 
consumer_byte_rate for users and clients, and now I see the JMX metrics as 
expected.

Thanks



> Missing JMX bandwidth quota metrics for Produce and Fetch
> -
>
> Key: KAFKA-8282
> URL: https://issues.apache.org/jira/browse/KAFKA-8282
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.1.1
>Reporter: JMVM
>Priority: Major
> Attachments: Screen Shot 2019-04-23 at 20.59.21.png
>
>
> Recently I performed several *rolling upgrades following the official steps* 
> for our Kafka brokers *from 0.11.0.1 to newer versions in different 
> environments*, and everything apparently works fine in all cases from a 
> functional point of view: *producers and consumers working as expected*. 
> Specifically, I upgraded:
>  # From *0.11.0.1 to 1.0.0*, then from *1.0.0 to 2.0.0*, and then *to* *2.1.1*
>  # *From 0.11.0.1 directly to 2.1.1*
> However, in all cases the *JMX bandwidth quota metrics for Fetch and Produce*, 
> which used to show *all producers and consumers working with the brokers*, are 
> gone; only queue-size is shown in our *JMX monitoring client, specifically 
> Introscope Wily*, *keeping the same configuration* (see attached image).
> In fact, I removed the Wily filter configuration for JMX in order to *show all 
> possible metrics, and both Fetch and Produce are still gone*.
> Note that I checked that the version was correct after each rolling upgrade, 
> for example for *2.1.1*, and it is as expected:
> *ll /opt/kafka/libs/*
> *total 54032*
> *-rw-r--r-- 1 kafka kafka    69409 Jan  4 08:42 activation-1.1.1.jar*
> *-rw-r--r-- 1 kafka kafka    14768 Jan  4 08:42 
> aopalliance-repackaged-2.5.0-b42.jar*
> *-rw-r--r-- 1 kafka kafka    90347 Jan  4 08:42 argparse4j-0.7.0.jar*
> *-rw-r--r-- 1 kafka kafka    20437 Jan  4 08:40 
> audience-annotations-0.5.0.jar*
> *-rw-r--r-- 1 kafka kafka   501879 Jan  4 08:43 commons-lang3-3.8.1.jar*
> *-rw-r--r-- 1 kafka kafka    96801 Feb  8 18:32 connect-api-2.1.1.jar*
> *-rw-r--r-- 1 kafka kafka    18265 Feb  8 18:32 
> connect-basic-auth-extension-2.1.1.jar*
> *-rw-r--r-- 1 kafka kafka    20509 Feb  8 18:32 connect-file-2.1.1.jar*
> *-rw-r--r-- 1 kafka kafka    45489 Feb  8 18:32 connect-json-2.1.1.jar*
> *-rw-r--r-- 1 kafka kafka   466588 Feb  8 18:32 connect-runtime-2.1.1.jar*
> *-rw-r--r-- 1 kafka kafka    90358 Feb  8 18:32 connect-transforms-2.1.1.jar*
> *-rw-r--r-- 1 kafka kafka  2442625 Jan  4 08:43 guava-20.0.jar*
> *-rw-r--r-- 1 kafka kafka   186763 Jan  4 08:42 hk2-api-2.5.0-b42.jar*
> *-rw-r--r-- 1 kafka kafka   189454 Jan  4 08:42 hk2-locator-2.5.0-b42.jar*
> *-rw-r--r-- 1 kafka kafka   135317 Jan  4 08:42 hk2-utils-2.5.0-b42.jar*
> *-rw-r--r-- 1 kafka kafka    66894 Jan 11 21:28 jackson-annotations-2.9.8.jar*
> *-rw-r--r-- 1 kafka kafka   325619 Jan 11 21:27 jackson-core-2.9.8.jar*
> *-rw-r--r-- 1 kafka kafka  1347236 Jan 11 21:27 jackson-databind-2.9.8.jar*
> *-rw-r--r-- 1 kafka kafka    32373 Jan 11 21:28 jackson-jaxrs-base-2.9.8.jar*
> *-rw-r--r-- 1 kafka kafka    15861 Jan 11 21:28 
> jackson-jaxrs-json-provider-2.9.8.jar*
> *-rw-r--r-- 1 kafka kafka    32627 Jan 11 21:28 
> jackson-module-jaxb-annotations-2.9.8.jar*
> *-rw-r--r-- 1 kafka kafka   737884 Jan  4 08:43 javassist-3.22.0-CR2.jar*
> *-rw-r--r-- 1 kafka kafka    26366 Jan  4 08:42 javax.annotation-api-1.2.jar*
> *-rw-r--r-- 1 kafka kafka 2497 Jan  4 08:42 javax.inject-1.jar*
> *-rw-r--r-- 1 kafka kafka 5951 Jan  4 08:42 javax.inject-2.5.0-b42.jar*
> *-rw-r--r-- 1 kafka kafka    95806 Jan  4 08:42 javax.servlet-api-3.1.0.jar*
> *-rw-r--r-- 1 kafka kafka   126898 Jan  4 08:42 javax.ws.rs-api-2.1.1.jar*
> *-rw-r--r-- 1 kafka kafka   127509 Jan  4 08:42 javax.ws.rs-api-2.1.jar*
> *-rw-r--r-- 1 kafka kafka   125632 Jan  4 08:42 jaxb-api-2.3.0.jar*
> *-rw-r--r-- 1 kafka kafka   181563 Jan  4 08:42 jersey-client-2.27.jar*
> *-rw-r--r-- 1 kafka kafka  1140395 Jan  4 08:43 jersey-common-2.27.jar*
> *-rw-r--r-- 1 kafka kafka    18085 Jan  4 08:42 
> jersey-container-servlet-2.27.jar*
> *-rw-r--r-- 1 kafka kafka    59332 Jan  4 08:42 
> jersey-container-servlet-core-2.27.jar*
> *-rw-r--r-- 1 kafka kafka    62547 Jan  4 08:42 jersey-hk2-2.27.jar*
> *-rw-r--r-- 1 kafka kafka    71936 Jan  4 08:42 jersey-media-jaxb-2.27.jar*
> *-rw-r--r-- 1 kafka kafka   933619 Jan  4 08:43 je

[jira] [Commented] (KAFKA-2729) Cached zkVersion not equal to that in zookeeper, broker not recovering.

2019-04-26 Thread Shawn YUAN (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-2729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826946#comment-16826946
 ] 

Shawn YUAN commented on KAFKA-2729:
---

Thank you [~DEvil]. I'm searching for any git commit / diff patch that fixes 
this.

> Cached zkVersion not equal to that in zookeeper, broker not recovering.
> ---
>
> Key: KAFKA-2729
> URL: https://issues.apache.org/jira/browse/KAFKA-2729
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1, 0.9.0.0, 0.10.0.0, 0.10.1.0, 0.11.0.0
>Reporter: Danil Serdyuchenko
>Assignee: Onur Karaman
>Priority: Major
> Fix For: 1.1.0
>
>
> After a small network wobble where the zookeeper nodes couldn't reach each 
> other, we started seeing a large number of under-replicated partitions. The 
> zookeeper cluster recovered; however, we continued to see a large number of 
> under-replicated partitions. Two brokers in the kafka cluster were showing 
> this in the logs:
> {code}
> [2015-10-27 11:36:00,888] INFO Partition 
> [__samza_checkpoint_event-creation_1,3] on broker 5: Shrinking ISR for 
> partition [__samza_checkpoint_event-creation_1,3] from 6,5 to 5 
> (kafka.cluster.Partition)
> [2015-10-27 11:36:00,891] INFO Partition 
> [__samza_checkpoint_event-creation_1,3] on broker 5: Cached zkVersion [66] 
> not equal to that in zookeeper, skip updating ISR (kafka.cluster.Partition)
> {code}
> This happened for all of the topics on the affected brokers. Both brokers 
> only recovered after a restart. Our own investigation yielded nothing; I was 
> hoping you could shed some light on this issue. It is possibly related to 
> https://issues.apache.org/jira/browse/KAFKA-1382 , however we're using 
> 0.8.2.1.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-8289) KTable, Long> can't be suppressed

2019-04-26 Thread Matthias J. Sax (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826796#comment-16826796
 ] 

Matthias J. Sax commented on KAFKA-8289:


[~vvcephei] Assigning this to you for visibility.

> KTable, Long>  can't be suppressed
> ---
>
> Key: KAFKA-8289
> URL: https://issues.apache.org/jira/browse/KAFKA-8289
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.2.0
> Environment: Broker on Linux, stream app on my Win10 laptop. 
> I added one line, log.message.timestamp.type=LogAppendTime, to my broker's 
> server.properties. The stream app uses all default config.
>Reporter: Xiaolin Jia
>Assignee: John Roesler
>Priority: Major
>
> I wrote a simple stream app following the official developer guide [Stream 
> DSL|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#window-final-results],
>  but I got more than one [Window Final 
> Results|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#id31]
>  result from a session time window.
> Time ticker A sends (4, A) every 25s and time ticker B sends (4, B) every 
> 25s; both send to the same topic.
> Below is my stream app code:
> {code:java}
> kstreams[0]
> .peek((k, v) -> log.info("--> ping, k={},v={}", k, v))
> .groupBy((k, v) -> v, Grouped.with(Serdes.String(), Serdes.String()))
> .windowedBy(SessionWindows.with(Duration.ofSeconds(100)).grace(Duration.ofMillis(20)))
> .count()
> .suppress(Suppressed.untilWindowCloses(BufferConfig.unbounded()))
> .toStream().peek((k, v) -> log.info("window={},k={},v={}", k.window(), 
> k.key(), v));
> {code}
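For context on when untilWindowCloses should emit: under the documented suppression semantics, a window is closed only once the observed stream time passes the window end plus the grace period. A minimal arithmetic sketch, with values taken from one log line below purely for illustration:

```java
// Sketch of the window-close arithmetic behind untilWindowCloses:
// a windowed result becomes final once stream time passes end + grace.
// The timestamps are illustrative, copied from one log line.
public class WindowCloseSketch {
    public static void main(String[] args) {
        long windowEndMs = 1_556_107_129_191L; // endMs from a log line
        long graceMs = 20L;                    // grace(Duration.ofMillis(20))
        long closeAtMs = windowEndMs + graceMs;
        System.out.println("window closes at stream time " + closeAtMs);
    }
}
```

With a 20 ms grace period and broker-assigned LogAppendTime, stream time can pass that close point almost immediately, which is relevant to why more than one "final" result might be observed.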
> {{Here is my log output:}}
> {noformat}
> 2019-04-24 20:00:26.142  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:00:47.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556106587744, 
> endMs=1556107129191},k=A,v=20
> 2019-04-24 20:00:51.071  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:16.065  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:41.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:06.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:31.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:56.208  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:21.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:46.078  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:04.684  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:11.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:19.371  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107426409},k=B,v=9
> 2019-04-24 20:04:19.372  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107445012},k=A,v=1
> 2019-04-24 20:04:29.604  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:36.067  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:49.715  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107451397},k=B,v=10
> 2019-04-24 20:04:49.716  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107469935},k=A,v=2
> 2019-04-24 20:04:54.593  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:01.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:05:19.599  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:20.045  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107476398},k=B,v=11
> 2019-04-24 20:05:20.047  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107501398},k=B,v=12
> 2019-04-24 20:05:26.075  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>

[jira] [Updated] (KAFKA-8289) KTable, Long> can't be suppressed

2019-04-26 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax updated KAFKA-8289:
---
Priority: Major  (was: Blocker)

> KTable, Long>  can't be suppressed
> ---
>
> Key: KAFKA-8289
> URL: https://issues.apache.org/jira/browse/KAFKA-8289
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.2.0
> Environment: Broker on Linux, stream app on my Win10 laptop. 
> I added one line, log.message.timestamp.type=LogAppendTime, to my broker's 
> server.properties. The stream app uses all default config.
>Reporter: Xiaolin Jia
>Assignee: John Roesler
>Priority: Major
>
> I wrote a simple stream app following the official developer guide [Stream 
> DSL|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#window-final-results],
>  but I got more than one [Window Final 
> Results|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#id31]
>  result from a session time window.
> Time ticker A sends (4, A) every 25s and time ticker B sends (4, B) every 
> 25s; both send to the same topic.
> Below is my stream app code:
> {code:java}
> kstreams[0]
> .peek((k, v) -> log.info("--> ping, k={},v={}", k, v))
> .groupBy((k, v) -> v, Grouped.with(Serdes.String(), Serdes.String()))
> .windowedBy(SessionWindows.with(Duration.ofSeconds(100)).grace(Duration.ofMillis(20)))
> .count()
> .suppress(Suppressed.untilWindowCloses(BufferConfig.unbounded()))
> .toStream().peek((k, v) -> log.info("window={},k={},v={}", k.window(), 
> k.key(), v));
> {code}
> {{Here is my log output:}}
> {noformat}
> 2019-04-24 20:00:26.142  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:00:47.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556106587744, 
> endMs=1556107129191},k=A,v=20
> 2019-04-24 20:00:51.071  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:16.065  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:41.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:06.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:31.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:56.208  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:21.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:46.078  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:04.684  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:11.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:19.371  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107426409},k=B,v=9
> 2019-04-24 20:04:19.372  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107445012},k=A,v=1
> 2019-04-24 20:04:29.604  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:36.067  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:49.715  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107451397},k=B,v=10
> 2019-04-24 20:04:49.716  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107469935},k=A,v=2
> 2019-04-24 20:04:54.593  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:01.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:05:19.599  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:20.045  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107476398},k=B,v=11
> 2019-04-24 20:05:20.047  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107501398},k=B,v=12
> 2019-04-24 20:05:26.075  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:05:44.598  INFO --- [

[jira] [Assigned] (KAFKA-8289) KTable, Long> can't be suppressed

2019-04-26 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax reassigned KAFKA-8289:
--

Assignee: John Roesler

> KTable, Long>  can't be suppressed
> ---
>
> Key: KAFKA-8289
> URL: https://issues.apache.org/jira/browse/KAFKA-8289
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.2.0
> Environment: Broker on Linux, stream app on my Win10 laptop. 
> I added one line, log.message.timestamp.type=LogAppendTime, to my broker's 
> server.properties. The stream app uses all default config.
>Reporter: Xiaolin Jia
>Assignee: John Roesler
>Priority: Blocker
>
> I wrote a simple stream app following the official developer guide [Stream 
> DSL|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#window-final-results],
>  but I got more than one [Window Final 
> Results|https://kafka.apache.org/22/documentation/streams/developer-guide/dsl-api.html#id31]
>  result from a session time window.
> Time ticker A sends (4, A) every 25s and time ticker B sends (4, B) every 
> 25s; both send to the same topic.
> Below is my stream app code:
> {code:java}
> kstreams[0]
> .peek((k, v) -> log.info("--> ping, k={},v={}", k, v))
> .groupBy((k, v) -> v, Grouped.with(Serdes.String(), Serdes.String()))
> .windowedBy(SessionWindows.with(Duration.ofSeconds(100)).grace(Duration.ofMillis(20)))
> .count()
> .suppress(Suppressed.untilWindowCloses(BufferConfig.unbounded()))
> .toStream().peek((k, v) -> log.info("window={},k={},v={}", k.window(), 
> k.key(), v));
> {code}
> {{Here is my log output:}}
> {noformat}
> 2019-04-24 20:00:26.142  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:00:47.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556106587744, 
> endMs=1556107129191},k=A,v=20
> 2019-04-24 20:00:51.071  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:16.065  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:01:41.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:06.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:31.066  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:02:56.208  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:21.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:03:46.078  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:04.684  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:11.069  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:19.371  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107426409},k=B,v=9
> 2019-04-24 20:04:19.372  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107445012},k=A,v=1
> 2019-04-24 20:04:29.604  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:04:36.067  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:04:49.715  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107451397},k=B,v=10
> 2019-04-24 20:04:49.716  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107445012, 
> endMs=1556107469935},k=A,v=2
> 2019-04-24 20:04:54.593  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:01.070  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:05:19.599  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=A
> 2019-04-24 20:05:20.045  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107476398},k=B,v=11
> 2019-04-24 20:05:20.047  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : window=Window{startMs=1556107226473, 
> endMs=1556107501398},k=B,v=12
> 2019-04-24 20:05:26.075  INFO --- [-StreamThread-1] c.g.k.AppStreams  
>   : --> ping, k=4,v=B
> 2019-04-24 20:05:44.598  INFO --- [

[jira] [Commented] (KAFKA-8200) TopologyTestDriver should offer an iterable signature of readOutput

2019-04-26 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-8200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826720#comment-16826720
 ] 

ASF GitHub Bot commented on KAFKA-8200:
---

pkleindl commented on pull request #6556: KAFKA-8200: added Iterator methods 
for output to TopologyTestDriver
URL: https://github.com/apache/kafka/pull/6556
 
 
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> TopologyTestDriver should offer an iterable signature of readOutput
> ---
>
> Key: KAFKA-8200
> URL: https://issues.apache.org/jira/browse/KAFKA-8200
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Michael Drogalis
>Assignee: Patrik Kleindl
>Priority: Minor
>  Labels: needs-kip
>
> When using the TopologyTestDriver, one examines the output on a topic with 
> the readOutput method. This method returns one record at a time, until no 
> more records can be found, at which point it returns null.
> Many times, the usage pattern around readOutput will involve writing a loop 
> to extract a number of records from the topic, building up a list of records, 
> until it returns null.
> It would be helpful to offer an iterable signature of readOutput, which 
> returns either an iterator or list over the records that are currently 
> available in the topic. This would effectively remove the loop that a user 
> needs to write for him/herself each time.
> Such a signature might look like:
> {code:java}
> public Iterable<ProducerRecord<byte[], byte[]>> readOutput(java.lang.String 
> topic);
> {code}
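The drain loop described above can be sketched generically. This is an illustrative helper, not Kafka API: the `Supplier` stands in for repeated `readOutput` calls, which return null once the topic is exhausted.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.function.Supplier;

// Generic form of the loop the ticket describes: pull from a source that
// returns null when exhausted (as readOutput does) and collect the items.
public final class Drain {
    public static <T> List<T> drain(Supplier<T> source) {
        List<T> out = new ArrayList<>();
        T item;
        while ((item = source.get()) != null) {
            out.add(item);
        }
        return out;
    }

    public static void main(String[] args) {
        Iterator<String> it = List.of("a", "b", "c").iterator();
        // Adapter from an iterator to a null-terminated source.
        System.out.println(drain(() -> it.hasNext() ? it.next() : null));
        // prints [a, b, c]
    }
}
```

An Iterable-returning `readOutput` would fold exactly this loop into the test driver, which is the convenience the ticket asks for.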





[jira] [Closed] (KAFKA-8200) TopologyTestDriver should offer an iterable signature of readOutput

2019-04-26 Thread Patrik Kleindl (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Patrik Kleindl closed KAFKA-8200.
-

> TopologyTestDriver should offer an iterable signature of readOutput
> ---
>
> Key: KAFKA-8200
> URL: https://issues.apache.org/jira/browse/KAFKA-8200
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Michael Drogalis
>Assignee: Patrik Kleindl
>Priority: Minor
>  Labels: needs-kip
>
> When using the TopologyTestDriver, one examines the output on a topic with 
> the readOutput method. This method returns one record at a time, until no 
> more records can be found, at which point it returns null.
> Many times, the usage pattern around readOutput will involve writing a loop 
> to extract a number of records from the topic, building up a list of records, 
> until it returns null.
> It would be helpful to offer an iterable signature of readOutput, which 
> returns either an iterator or list over the records that are currently 
> available in the topic. This would effectively remove the loop that a user 
> needs to write for him/herself each time.
> Such a signature might look like:
> {code:java}
> public Iterable<ProducerRecord<byte[], byte[]>> readOutput(java.lang.String 
> topic);
> {code}





[jira] [Resolved] (KAFKA-8200) TopologyTestDriver should offer an iterable signature of readOutput

2019-04-26 Thread Patrik Kleindl (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Patrik Kleindl resolved KAFKA-8200.
---
Resolution: Won't Do

Discarded in favor of 
[https://cwiki.apache.org/confluence/display/KAFKA/KIP-456%3A+Helper+classes+to+make+it+simpler+to+write+test+logic+with+TopologyTestDriver]

> TopologyTestDriver should offer an iterable signature of readOutput
> ---
>
> Key: KAFKA-8200
> URL: https://issues.apache.org/jira/browse/KAFKA-8200
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Michael Drogalis
>Assignee: Patrik Kleindl
>Priority: Minor
>  Labels: needs-kip
>
> When using the TopologyTestDriver, one examines the output on a topic with 
> the readOutput method. This method returns one record at a time, until no 
> more records can be found, at which point it returns null.
> Many times, the usage pattern around readOutput will involve writing a loop 
> to extract a number of records from the topic, building up a list of records, 
> until it returns null.
> It would be helpful to offer an iterable signature of readOutput, which 
> returns either an iterator or list over the records that are currently 
> available in the topic. This would effectively remove the loop that a user 
> needs to write for him/herself each time.
> Such a signature might look like:
> {code:java}
> public Iterable<ProducerRecord<byte[], byte[]>> readOutput(java.lang.String 
> topic);
> {code}


