[jira] [Commented] (KAFKA-3101) Optimize Aggregation Outputs

2016-07-06 Thread Bill Bejeck (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364919#comment-15364919
 ] 

Bill Bejeck commented on KAFKA-3101:


Is this available now? If so, I'd like to pick this up if possible.

> Optimize Aggregation Outputs
> 
>
> Key: KAFKA-3101
> URL: https://issues.apache.org/jira/browse/KAFKA-3101
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Guozhang Wang
>  Labels: architecture
> Fix For: 0.10.1.0
>
>
> Today we emit one output record for each incoming message for Table / 
> Windowed Stream Aggregations. For example, say we have a sequence of 
> aggregate outputs computed from the input stream (assuming there is no agg 
> value for this key before):
> V1, V2, V3, V4, V5
> Then the aggregator will output the following sequence of Change<newValue, oldValue>:
> <V1, null>, <V2, V1>, <V3, V2>, <V4, V3>, <V5, V4>
> which could cost a lot of CPU overhead computing the intermediate results. 
> Instead, if we can let the underlying state store "remember" the last 
> emitted old value, we can reduce the number of emits based on some configs. 
> More specifically, we can add one more field in the KV store engine storing 
> the last emitted old value, which only gets updated when we emit to the 
> downstream processor. For example:
> At Beginning: 
> Store: key => empty (no agg values yet)
> V1 computed: 
> Update Both in Store: key => (V1, V1), Emit <V1, null>
> V2 computed: 
> Update NewValue in Store: key => (V2, V1), No Emit
> V3 computed: 
> Update NewValue in Store: key => (V3, V1), No Emit
> V4 computed: 
> Update Both in Store: key => (V4, V4), Emit <V4, V1>
> V5 computed: 
> Update NewValue in Store: key => (V5, V4), No Emit
> One more thing to consider is that we need a "closing" time control on the 
> not-yet-emitted keys; when some time has elapsed (or the window is to be 
> closed), we need to check, for any key, whether their current materialized pair 
> has not been emitted (for example <V5, V4> in the above example). 
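A minimal sketch of the proposed bookkeeping, assuming a hypothetical store entry
that pairs the current aggregate with the last emitted old value (illustrative
names only, not the actual Kafka Streams API):

    // Hypothetical illustration of the proposal; not Kafka Streams code.
    import java.util.HashMap;
    import java.util.Map;
    import java.util.Objects;
    import java.util.function.BiConsumer;

    class DedupAggregateStore<K, V> {
        // Each key maps to (current aggregate, last emitted old value).
        private static final class Entry<U> { U newValue; U lastEmitted; }
        private final Map<K, Entry<V>> store = new HashMap<>();

        // Called for each newly computed aggregate; emitNow stands in for the
        // config-driven decision of when to forward downstream.
        void update(K key, V computed, boolean emitNow, BiConsumer<V, V> emit) {
            Entry<V> e = store.computeIfAbsent(key, k -> new Entry<>());
            if (emitNow) {
                emit.accept(computed, e.lastEmitted); // Change<newValue, oldValue>
                e.lastEmitted = computed;             // "Update Both in Store"
            }
            e.newValue = computed;                    // "Update NewValue in Store"
        }

        // The "closing" time control: flush a pending pair such as <V5, V4>.
        void close(K key, BiConsumer<V, V> emit) {
            Entry<V> e = store.get(key);
            if (e != null && !Objects.equals(e.newValue, e.lastEmitted)) {
                emit.accept(e.newValue, e.lastEmitted);
                e.lastEmitted = e.newValue;
            }
        }
    }

Running the V1..V5 sequence through update, with emitNow true for V1 and V4,
reproduces the store states above and the <V1, null>, <V4, V1>, and flushed
<V5, V4> emits.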





[jira] [Commented] (KAFKA-3101) Optimize Aggregation Outputs

2016-07-07 Thread Bill Bejeck (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15366148#comment-15366148
 ] 

Bill Bejeck commented on KAFKA-3101:


[~enothereska] OK, I get it now: KAFKA-3101 is being replaced/superseded by 
KAFKA-3776. Thanks for the heads up. 



[jira] [Commented] (KAFKA-3101) Optimize Aggregation Outputs

2016-07-07 Thread Bill Bejeck (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15366591#comment-15366591
 ] 

Bill Bejeck commented on KAFKA-3101:


[~guozhang] ok will do.  



[jira] [Commented] (KAFKA-3101) Optimize Aggregation Outputs

2016-07-08 Thread Bill Bejeck (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15367923#comment-15367923
 ] 

Bill Bejeck commented on KAFKA-3101:


[~guozhang] [~enothereska] 

Would adding flatbuffers (https://google.github.io/flatbuffers/) be beyond the 
scope of this performance comparison?



[jira] [Commented] (KAFKA-3101) Optimize Aggregation Outputs

2016-07-09 Thread Bill Bejeck (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15369257#comment-15369257
 ] 

Bill Bejeck commented on KAFKA-3101:


[~enothereska] none actually, a misunderstanding on my part.



[jira] [Comment Edited] (KAFKA-3101) Optimize Aggregation Outputs

2016-07-09 Thread Bill Bejeck (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15369257#comment-15369257
 ] 

Bill Bejeck edited comment on KAFKA-3101 at 7/9/16 7:26 PM:


[~enothereska] none actually, a misunderstanding on my part. 

Thanks,
-Bill





[jira] [Commented] (KAFKA-3101) Optimize Aggregation Outputs

2016-07-12 Thread Bill Bejeck (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15374085#comment-15374085
 ] 

Bill Bejeck commented on KAFKA-3101:


[~enothereska] [~guozhang]

With regards to the performance comparison, here is my plan:

1. Create a simple streams process with a KTableAggregate, using Objects 
(records) first, then bytes.
2. Track the memory usage via the jamm library referenced above.
3. Track message throughput for both types (records vs. bytes).
4. Profile how much CPU time is spent in serialization/deserialization.

Is this reasonable? Any additional thoughts or comments?



[jira] [Work started] (KAFKA-3973) Investigate feasibility of caching bytes vs. records

2016-07-22 Thread Bill Bejeck (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-3973 started by Bill Bejeck.
--
> Investigate feasibility of caching bytes vs. records
> 
>
> Key: KAFKA-3973
> URL: https://issues.apache.org/jira/browse/KAFKA-3973
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Eno Thereska
>Assignee: Bill Bejeck
> Fix For: 0.10.1.0
>
>
> Currently the cache stores and accounts for records, not bytes or objects. 
> This investigation would be around measuring any performance overheads that 
> come from storing bytes or objects. As an outcome we should know whether 1) 
> we should store bytes or 2) we should store objects. 
> If we store objects, the cache still needs to know their size (so that it can 
> know if the object fits in the allocated cache space, e.g., if the cache is 
> 100MB and the object is 10MB, we'd have space for 10 such objects). The 
> investigation needs to figure out how to find out the size of the object 
> efficiently in Java.
> If we store bytes, then we are serialising an object into bytes before 
> caching it, i.e., we take a serialisation cost. The investigation needs to 
> measure how bad this cost can be, especially for the case when all objects fit 
> in cache (and thus any extra serialisation cost would show).
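To make the serialisation cost concrete, here is a self-contained sketch
(editor's illustration using plain JDK serialization rather than the Streams
serdes) of the round trip a bytes-based cache pays on every put/get:

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;

    public class SerdeCostSketch {
        static byte[] serialize(Object o) throws IOException {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(o);
            }
            return bos.toByteArray();
        }

        static Object deserialize(byte[] b) throws IOException, ClassNotFoundException {
            try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(b))) {
                return ois.readObject();
            }
        }

        public static void main(String[] args) throws Exception {
            long start = System.currentTimeMillis();
            for (int i = 0; i < 500_000; i++) {
                // A bytes cache pays this on every put, and the reverse on every get.
                Object roundTripped = deserialize(serialize("value-" + i));
            }
            System.out.println("500K round trips: " + (System.currentTimeMillis() - start) + " ms");
        }
    }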





[jira] [Updated] (KAFKA-3973) Investigate feasibility of caching bytes vs. records

2016-07-24 Thread Bill Bejeck (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-3973:
---
Attachment: MemoryLRUCache.java
CachingPerformanceBenchmarks.java

Benchmark test and modified MemoryLRUCache for reference.



[jira] [Commented] (KAFKA-3973) Investigate feasibility of caching bytes vs. records

2016-07-24 Thread Bill Bejeck (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15391229#comment-15391229
 ] 

Bill Bejeck commented on KAFKA-3973:


The results for the LRU investigation are below. There were three types of 
measurements taken:

1. The current cache, tracking max size by entry count (Control).
2. The cache tracking size by max memory (Object). Both keys and values were 
counted toward total memory. The max size for the cache was calculated by 
multiplying the memory of a key/value pair (taken using the MemoryMeter class 
from the jamm library, https://github.com/jbellis/jamm) by the max size 
specified in the Control/Bytes cache.
3. Storing bytes in the cache (Bytes). The max size of the cache in this case 
was tracked by entry count. Both keys and values are serialized/deserialized.

I have attached the benchmarking class and the modified MemoryLRUCache class 
for reference.


While complete accuracy in Java benchmarking can be difficult to achieve, these 
results are sufficient for comparing how the different approaches stack up 
against each other.

The cache was set to a max size of 500,000 entries (or, in the memory-based 
cache, 500,000 * key/value memory size). Two rounds of 25 iterations each were 
run.  In the first round 500,000 put/get combinations were performed to measure 
behaviour when all records could fit in the cache.  The second round used 
1,000,000 put/get combinations to measure performance with evictions.  Some 
benchmarks for raw serialization and memory tracking are included as well.

As expected, the Control group had the best performance.  The Object (memory 
tracking) approach was better than serialization only if the MemoryMeter.measure 
method was used.  However, MemoryMeter.measure only captures the amount of 
memory taken by the object itself; it does not take into account any other 
objects in the object graph.  For example, here is a debug statement showing the 
memory for the string "Lorem ipsum dolor sit amet, consectetur adipiscing elit. 
Donec a porttitor felis. In vel dolor."


MemoryMeter.measure :
24

MemoryMeter.measureDeep :
root [java.lang.String] 232 bytes (24 bytes)
  |
  +--value [char[]] 208 bytes (208 bytes)

232

MemoryMeter.measure totally ignores the char array hanging off String objects.  
With this in mind we would be forced to use MemoryMeter.measureDeep to get an 
accurate measure of objects being placed in the cache.  From the results below, 
the MemoryMeter.measureDeep method had the slowest performance.
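For reference, a minimal sketch of the two jamm calls being compared (it assumes 
the jamm jar is on the classpath and the JVM was started with -javaagent 
pointing at it):

    import org.github.jamm.MemoryMeter;

    public class JammSketch {
        public static void main(String[] args) {
            MemoryMeter meter = new MemoryMeter();
            String s = "Lorem ipsum dolor sit amet, consectetur adipiscing elit. "
                     + "Donec a porttitor felis. In vel dolor.";
            // Shallow: only the String object's own header and fields (24 bytes above).
            System.out.println("measure:     " + meter.measure(s));
            // Deep: walks the object graph, including the backing char[] (232 bytes above).
            System.out.println("measureDeep: " + meter.measureDeep(s));
        }
    }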

With these results in mind, it looks to me like storing bytes in the cache is 
best going forward.  

Final notes:
1. Another tool, Java Object Layout 
(http://openjdk.java.net/projects/code-tools/jol/), shows promise, but needs 
evaluation.
2. These benchmarks should be re-written with JMH 
(http://openjdk.java.net/projects/code-tools/jmh/).  Using JMH requires a 
separate module at a minimum, but the JMH Gradle plugin 
(https://github.com/melix/jmh-gradle-plugin) looks interesting as it gives the 
ability to integrate JMH benchmarking tests into an existing project.  Having a 
place to write/run JMH benchmarks could be beneficial to the project as a whole. 
 If this seems worthwhile, I will create a Jira ticket and look into adding the 
JMH plugin, or creating a separate benchmarking module.
3. We should probably add a benchmarking test utilizing the MemoryLRUCache as 
well.

Investigation Results

Tests for 500,000 inserts (max cache size: 500K count / 500K * memory)

Control       500K cache put/get results, 25 iterations, avg time (millis)    53.24
Object        500K cache put/get results, 25 iterations, avg time (millis)   250.88
Object(Deep)  500K cache put/get results, 25 iterations, avg time (millis)  1720.08
Bytes         500K cache put/get results, 25 iterations, avg time (millis)   288.92

Tests for 1,000,000 inserts (max cache size: 500K count / 500K * memory)

Control       1M cache put/get results, 25 iterations, avg time (millis)   227.48
Object        1M cache put/get results, 25 iterations, avg time (millis)   488.2
Object(Deep)  1M cache put/get results, 25 iterations, avg time (millis)  2575.04
Bytes         1M cache put/get results, 25 iterations, avg time (millis)   852.04

Raw timing of tracking memory (deep) for 500K Strings
Took [567] millis to track memory

Raw timing of tracking memory for 500K Strings
Took [92] millis to track memory

Raw timing of tracking memory (deep) for 500K ComplexObjects
Took [2813] millis to track memory

Raw timing of tracking memory for 500K ComplexObjects
Took [148] millis to track memory

Raw timing of serialization for 500K Strings
Took [133] millis to serialize

Raw timing of serialization for 500K ComplexObjects
Took [525] millis to serialize







[jira] [Commented] (KAFKA-3973) Investigate feasibility of caching bytes vs. records

2016-07-25 Thread Bill Bejeck (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15391766#comment-15391766
 ] 

Bill Bejeck commented on KAFKA-3973:


Yes, I ran the tests using instrumentation (-javaagent:); sorry, I forgot 
to put that in my original comments.



[jira] [Commented] (KAFKA-3973) Investigate feasibility of caching bytes vs. records

2016-07-25 Thread Bill Bejeck (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15393079#comment-15393079
 ] 

Bill Bejeck commented on KAFKA-3973:


[~enothereska]  Will do, just need to finish up a few details.



[jira] [Created] (KAFKA-3989) Add JMH module for Benchmarks

2016-07-25 Thread Bill Bejeck (JIRA)
Bill Bejeck created KAFKA-3989:
--

 Summary: Add JMH module for Benchmarks
 Key: KAFKA-3989
 URL: https://issues.apache.org/jira/browse/KAFKA-3989
 Project: Kafka
  Issue Type: Improvement
Reporter: Bill Bejeck
Assignee: Bill Bejeck


JMH is a Java harness for building, running, and analyzing benchmarks written 
in Java or JVM languages.  To run properly, JMH needs to be in its own module.  
 This task will also investigate using the jmh-gradle plugin 
[https://github.com/melix/jmh-gradle-plugin], which enables the use of JMH from 
gradle.





[jira] [Updated] (KAFKA-3989) Add JMH module for Benchmarks

2016-07-25 Thread Bill Bejeck (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-3989:
---
Description: JMH is a Java harness for building, running, and analyzing 
benchmarks written in Java or JVM languages.  To run properly, JMH needs to be 
in its own module.   This task will also investigate using the jmh-gradle 
plugin [https://github.com/melix/jmh-gradle-plugin], which enables the use of 
JMH from gradle.  This is related to 
[https://issues.apache.org/jira/browse/KAFKA-3973]  (was: JMH is a Java harness 
for building, running, and analyzing benchmarks written in Java or JVM 
languages.  To run properly, JMH needs to be in its own module.   This task 
will also investigate using the jmh-gradle plugin 
[https://github.com/melix/jmh-gradle-plugin], which enables the use of JMH from 
gradle)



[jira] [Commented] (KAFKA-3973) Investigate feasibility of caching bytes vs. records

2016-07-26 Thread Bill Bejeck (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15394761#comment-15394761
 ] 

Bill Bejeck commented on KAFKA-3973:


[~ijuma]  

I re-ran the tests with no instrumentation using the FALLBACK_UNSAFE enum; the 
results were the same, if not slower.  The benchmark can now be run with no 
instrumentation.
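For context, the no-instrumentation run presumably configured the meter along 
these lines (a sketch against the jamm 0.3.x API):

    import org.github.jamm.MemoryMeter;

    public class NoAgentSketch {
        public static void main(String[] args) {
            // FALLBACK_UNSAFE: use the -javaagent instrumentation if present,
            // otherwise estimate object sizes via sun.misc.Unsafe.
            MemoryMeter meter = new MemoryMeter().withGuessing(MemoryMeter.Guess.FALLBACK_UNSAFE);
            System.out.println(meter.measureDeep("no agent needed"));
        }
    }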



[jira] [Updated] (KAFKA-3973) Investigate feasibility of caching bytes vs. records

2016-07-26 Thread Bill Bejeck (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-3973:
---
Status: Patch Available  (was: In Progress)



[jira] [Work started] (KAFKA-3989) Add JMH module for Benchmarks

2016-08-02 Thread Bill Bejeck (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-3989 started by Bill Bejeck.
--


[jira] [Commented] (KAFKA-3989) Add JMH module for Benchmarks

2016-08-03 Thread Bill Bejeck (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15407054#comment-15407054
 ] 

Bill Bejeck commented on KAFKA-3989:


I opted to try the gradle shadow plugin first 
(https://github.com/johnrengelman/shadow/).

I've been able to get JMH working as a sub-module, but needed to make the 
following changes to the build.gradle script:

    dependencies {
        // ...
        // Added this entry for the gradle shadow plugin
        classpath 'com.github.jengelman.gradle.plugins:shadow:1.2.3'
    }

Then added this to apply the plugin in the jmh-benchmarks module:

    project(':jmh-benchmarks') {
        apply plugin: 'com.github.johnrengelman.shadow'
    }

Is this acceptable? 








[jira] [Updated] (KAFKA-3989) Add JMH module for Benchmarks

2016-08-04 Thread Bill Bejeck (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-3989:
---
Fix Version/s: 0.10.1.0



[jira] [Commented] (KAFKA-3989) Add JMH module for Benchmarks

2016-08-04 Thread Bill Bejeck (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15408276#comment-15408276
 ] 

Bill Bejeck commented on KAFKA-3989:


Yes, the classpath entry is in the 'buildscript' section.

To me, the advantage of the shadow plugin is simplicity, as the key to working 
with JMH is just building the jar file to run the benchmarks.  It seemed to me 
like some of the features in jmh-gradle aren't really needed. 

I am adding a task so you can run a single benchmark from the command line (an 
approach I'm borrowing from the JMH examples), as that's how I envision most 
people using the module.  Maybe that's an incorrect assumption on my part.

Having said that, I'm not committed to a particular solution, so I can put in 
jmh-gradle if desired.



[jira] [Commented] (KAFKA-3973) Investigate feasibility of caching bytes vs. records

2016-08-08 Thread Bill Bejeck (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15411821#comment-15411821
 ] 

Bill Bejeck commented on KAFKA-3973:


I used JMH to benchmark the performance of caching bytes vs. objects (tracking 
by memory size using jamm); here are the results:


Result "testCacheBySizeBytes":
 2157013.372 ±(99.9%) 198793.816 ops/s [Average]
 (min, avg, max) = (687952.309, 2157013.372, 2485954.624), stdev = 353355.834
 CI (99.9%): [1958219.556, 2355807.189] (assumes normal distribution)


# Run complete. Total time: 00:02:41

Benchmark                                        Mode  Cnt        Score        Error  Units
MemoryBytesCacheBenchmark.testCacheByMemory     thrpt   40   290142.181 ±   3001.345  ops/s
MemoryBytesCacheBenchmark.testCacheBySizeBytes  thrpt   40  2157013.372 ±  198793.816  ops/s

Using JMH, it still appears that serialization has the advantage.  
The test used for benchmarking will be included in the PR for KAFKA-3989 
(coming soon).
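For readers unfamiliar with JMH, the benchmark has roughly this shape (a 
stripped-down editor's sketch with hypothetical cache plumbing, not the actual 
test from the PR):

    import java.util.LinkedHashMap;
    import java.util.Map;
    import org.openjdk.jmh.annotations.Benchmark;
    import org.openjdk.jmh.annotations.Scope;
    import org.openjdk.jmh.annotations.Setup;
    import org.openjdk.jmh.annotations.State;

    @State(Scope.Thread)
    public class MemoryBytesCacheBenchmarkSketch {
        private Map<String, Object> objectCache;
        private Map<String, byte[]> bytesCache;
        private int i;

        @Setup
        public void setup() {
            // Access-ordered LinkedHashMaps standing in for the LRU caches under test.
            objectCache = new LinkedHashMap<String, Object>(16, 0.75f, true) {
                protected boolean removeEldestEntry(Map.Entry<String, Object> e) {
                    return size() > 500_000;
                }
            };
            bytesCache = new LinkedHashMap<String, byte[]>(16, 0.75f, true) {
                protected boolean removeEldestEntry(Map.Entry<String, byte[]> e) {
                    return size() > 500_000;
                }
            };
        }

        // JMH reports how many of these calls complete per second (thrpt mode).
        @Benchmark
        public Object testCacheByMemory() {
            String key = "key-" + (i++);
            objectCache.put(key, key);           // store the object (jamm sizing omitted here)
            return objectCache.get(key);
        }

        @Benchmark
        public byte[] testCacheBySizeBytes() {
            String key = "key-" + (i++);
            bytesCache.put(key, key.getBytes()); // pay the serialisation cost up front
            return bytesCache.get(key);
        }
    }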



[jira] [Comment Edited] (KAFKA-3973) Investigate feasibility of caching bytes vs. records

2016-08-08 Thread Bill Bejeck (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15411821#comment-15411821
 ] 

Bill Bejeck edited comment on KAFKA-3973 at 8/8/16 3:34 PM:


I used JMH to benchmark the performance of caching bytes vs. objects (tracking 
by memory size using jamm); here are the results:

EDIT: New results from updated test

# Run complete. Total time: 00:02:41

Benchmark                                        Mode  Cnt        Score       Error  Units
MemoryBytesCacheBenchmark.testCacheByMemory     thrpt   40   536694.504 ±   4177.019  ops/s
MemoryBytesCacheBenchmark.testCacheBySizeBytes  thrpt   40  4713360.286 ±  60874.723  ops/s


Using JMH, it still appears that serialization has the advantage.  
The test used for benchmarking will be included in the PR for KAFKA-3989 
(coming soon).





[jira] [Comment Edited] (KAFKA-3973) Investigate feasibility of caching bytes vs. records

2016-08-08 Thread Bill Bejeck (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15411821#comment-15411821
 ] 

Bill Bejeck edited comment on KAFKA-3973 at 8/8/16 4:45 PM:


I used JMH to benchmark the performance of caching bytes vs. objects (tracking 
by memory size using jamm); here are the results:

EDIT: Needed to refactor tests, and use Bytes to wrap the byte array for keys in 
the cache

Run complete. Total time: 00:02:42

Benchmark                                        Mode  Cnt        Score       Error  Units
MemoryBytesCacheBenchmark.testCacheByMemory     thrpt   40   251002.444 ±  20683.129  ops/s
MemoryBytesCacheBenchmark.testCacheBySizeBytes  thrpt   40  1477170.674 ±  12772.196  ops/s


After refactoring the JMH test, the gap between tracking by memory and 
serialization has closed, but it still appears that serialization has the 
advantage.  
The test used for benchmarking will be included in the PR for KAFKA-3989 
(coming soon).





[jira] [Comment Edited] (KAFKA-3973) Investigate feasibility of caching bytes vs. records

2016-08-08 Thread Bill Bejeck (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15411821#comment-15411821
 ] 

Bill Bejeck edited comment on KAFKA-3973 at 8/8/16 4:55 PM:


I used JMH to benchmark the performance of caching bytes vs. objects (tracking 
by memory size using jamm); here are the results:

EDIT: Needed to refactor tests, and use Bytes to wrap the byte array for keys in 
the cache

Run complete. Total time: 00:02:42

Benchmark                                        Mode  Cnt        Score       Error  Units
MemoryBytesCacheBenchmark.testCacheByMemory     thrpt   40   251002.444 ±  20683.129  ops/s
MemoryBytesCacheBenchmark.testCacheBySizeBytes  thrpt   40  1477170.674 ±  12772.196  ops/s


After refactoring the JMH test, the gap between tracking by memory and 
serialization has closed some, but serialization still has the advantage.  
The test used for benchmarking will be included in the PR for KAFKA-3989 
(coming soon).





[jira] [Updated] (KAFKA-3989) Add JMH module for Benchmarks

2016-08-08 Thread Bill Bejeck (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-3989:
---
Status: Patch Available  (was: In Progress)



[jira] [Commented] (KAFKA-3989) Add JMH module for Benchmarks

2016-08-08 Thread Bill Bejeck (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412231#comment-15412231
 ] 

Bill Bejeck commented on KAFKA-3989:


[~ijuma]

After your comment, I tried to implement the jmh-gradle-plugin, but I got this 
error:

Error:Could not find me.champeau.gradle:jmh-gradle-plugin:0.3.0.
Searched in the following locations:

https://repo1.maven.org/maven2/me/champeau/gradle/jmh-gradle-plugin/0.3.0/jmh-gradle-plugin-0.3.0.pom

https://repo1.maven.org/maven2/me/champeau/gradle/jmh-gradle-plugin/0.3.0/jmh-gradle-plugin-0.3.0.jar

https://jcenter.bintray.com/me/champeau/gradle/jmh-gradle-plugin/0.3.0/jmh-gradle-plugin-0.3.0.pom

https://jcenter.bintray.com/me/champeau/gradle/jmh-gradle-plugin/0.3.0/jmh-gradle-plugin-0.3.0.jar

http://dl.bintray.com/content/netflixoss/external-gradle-plugins/me/champeau/gradle/jmh-gradle-plugin/0.3.0/jmh-gradle-plugin-0.3.0.pom

http://dl.bintray.com/content/netflixoss/external-gradle-plugins/me/champeau/gradle/jmh-gradle-plugin/0.3.0/jmh-gradle-plugin-0.3.0.jar

I'm sure this could be a simple configuration error, but I didn't spend any 
time tracking it down.  

FWIW, I have somewhat limited experience with gradle, and I took it as an 
opportunity to learn a little more by continuing to use the gradle shadow plugin.



[jira] [Updated] (KAFKA-3973) Investigate feasibility of caching bytes vs. records

2016-08-13 Thread Bill Bejeck (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-3973:
---
Attachment: (was: CachingPerformanceBenchmarks.java)

> Investigate feasibility of caching bytes vs. records
> 
>
> Key: KAFKA-3973
> URL: https://issues.apache.org/jira/browse/KAFKA-3973
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Eno Thereska
>Assignee: Bill Bejeck
> Fix For: 0.10.1.0
>
>
> Currently the cache stores and accounts for records, not bytes or objects. 
> This investigation would be around measuring any performance overheads that 
> come from storing bytes or objects. As an outcome we should know whether 1) 
> we should store bytes or 2) we should store objects. 
> If we store objects, the cache still needs to know their size (so that it can 
> know if the object fits in the allocated cache space, e.g., if the cache is 
> 100MB and the object is 10MB, we'd have space for 10 such objects). The 
> investigation needs to figure out how to find out the size of the object 
> efficiently in Java.
> If we store bytes, then we are serialising an object into bytes before 
> caching it, i.e., we take a serialisation cost. The investigation needs to 
> measure how bad this cost can be, especially for the case when all objects 
> fit in cache (and thus any extra serialisation cost would show).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3973) Investigate feasibility of caching bytes vs. records

2016-08-13 Thread Bill Bejeck (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-3973:
---
Attachment: (was: MemoryLRUCache.java)

> Investigate feasibility of caching bytes vs. records
> 
>
> Key: KAFKA-3973
> URL: https://issues.apache.org/jira/browse/KAFKA-3973
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Eno Thereska
>Assignee: Bill Bejeck
> Fix For: 0.10.1.0
>
>
> Currently the cache stores and accounts for records, not bytes or objects. 
> This investigation would be around measuring any performance overheads that 
> come from storing bytes or objects. As an outcome we should know whether 1) 
> we should store bytes or 2) we should store objects. 
> If we store objects, the cache still needs to know their size (so that it can 
> know if the object fits in the allocated cache space, e.g., if the cache is 
> 100MB and the object is 10MB, we'd have space for 10 such objects). The 
> investigation needs to figure out how to find out the size of the object 
> efficiently in Java.
> If we store bytes, then we are serialising an object into bytes before 
> caching it, i.e., we take a serialisation cost. The investigation needs to 
> measure how bad this cost can be, especially for the case when all objects 
> fit in cache (and thus any extra serialisation cost would show).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3973) Investigate feasibility of caching bytes vs. records

2016-08-13 Thread Bill Bejeck (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck updated KAFKA-3973:
---
Attachment: MemBytesBenchmark.txt

> Investigate feasibility of caching bytes vs. records
> 
>
> Key: KAFKA-3973
> URL: https://issues.apache.org/jira/browse/KAFKA-3973
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Eno Thereska
>Assignee: Bill Bejeck
> Fix For: 0.10.1.0
>
> Attachments: MemBytesBenchmark.txt
>
>
> Currently the cache stores and accounts for records, not bytes or objects. 
> This investigation would be around measuring any performance overheads that 
> come from storing bytes or objects. As an outcome we should know whether 1) 
> we should store bytes or 2) we should store objects. 
> If we store objects, the cache still needs to know their size (so that it can 
> know if the object fits in the allocated cache space, e.g., if the cache is 
> 100MB and the object is 10MB, we'd have space for 10 such objects). The 
> investigation needs to figure out how to find out the size of the object 
> efficiently in Java.
> If we store bytes, then we are serialising an object into bytes before 
> caching it, i.e., we take a serialisation cost. The investigation needs to 
> measure how bad this cost can be, especially for the case when all objects 
> fit in cache (and thus any extra serialisation cost would show).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3973) Investigate feasibility of caching bytes vs. records

2016-08-13 Thread Bill Bejeck (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15420061#comment-15420061
 ] 

Bill Bejeck commented on KAFKA-3973:


Attaching JMH benchmark results, removing the previous hand-rolled benchmarking 
code.  The JMH benchmark code is not attached, as it is part of PR #1712 

> Investigate feasibility of caching bytes vs. records
> 
>
> Key: KAFKA-3973
> URL: https://issues.apache.org/jira/browse/KAFKA-3973
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Eno Thereska
>Assignee: Bill Bejeck
> Fix For: 0.10.1.0
>
> Attachments: MemBytesBenchmark.txt
>
>
> Currently the cache stores and accounts for records, not bytes or objects. 
> This investigation would be around measuring any performance overheads that 
> come from storing bytes or objects. As an outcome we should know whether 1) 
> we should store bytes or 2) we should store objects. 
> If we store objects, the cache still needs to know their size (so that it can 
> know if the object fits in the allocated cache space, e.g., if the cache is 
> 100MB and the object is 10MB, we'd have space for 10 such objects). The 
> investigation needs to figure out how to find out the size of the object 
> efficiently in Java.
> If we store bytes, then we are serialising an object into bytes before 
> caching it, i.e., we take a serialisation cost. The investigation needs to 
> measure how bad this cost can be, especially for the case when all objects 
> fit in cache (and thus any extra serialisation cost would show).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-4023) Add thread id as prefix in Kafka Streams thread logging

2016-08-16 Thread Bill Bejeck (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck reassigned KAFKA-4023:
--

Assignee: Bill Bejeck

> Add thread id as prefix in Kafka Streams thread logging
> ---
>
> Key: KAFKA-4023
> URL: https://issues.apache.org/jira/browse/KAFKA-4023
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Bill Bejeck
>  Labels: newbie++
>
> A single Kafka Streams instance can include multiple stream threads, and 
> hence without a logging prefix it is difficult to determine which thread is 
> producing which log entries.
> We should
> 1) add the thread id as the log prefix in the StreamThread logger, as well as 
> in its contained StreamPartitionAssignor.
> 2) add the task id as the log prefix in StreamTask / StandbyTask, as well as 
> in their contained RecordCollector and ProcessorStateManager.
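> 
> The prefixing idea, sketched with plain SLF4J (the class and field names 
> below are illustrative, not the actual StreamThread code):
> {code:java}
> import org.slf4j.Logger;
> import org.slf4j.LoggerFactory;
> 
> public class PrefixedLoggingSketch {
> 
>     private static final Logger log = LoggerFactory.getLogger(PrefixedLoggingSketch.class);
> 
>     private final String logPrefix;
> 
>     public PrefixedLoggingSketch(final String threadId) {
>         // Every log line from this instance starts with its thread id.
>         this.logPrefix = String.format("stream-thread [%s] ", threadId);
>     }
> 
>     public void onPartitionsAssigned(final int count) {
>         log.info("{}Assigned {} partitions", logPrefix, count);
>     }
> }
> {code}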



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4023) Add thread id as prefix in Kafka Streams thread logging

2016-08-16 Thread Bill Bejeck (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15422890#comment-15422890
 ] 

Bill Bejeck commented on KAFKA-4023:


Picking this one up.  Just let me know if someone is currently working on this 
and I'll unassign myself.

> Add thread id as prefix in Kafka Streams thread logging
> ---
>
> Key: KAFKA-4023
> URL: https://issues.apache.org/jira/browse/KAFKA-4023
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Bill Bejeck
>  Labels: newbie++
>
> A single Kafka Streams instance can include multiple stream threads, and 
> hence without a logging prefix it is difficult to determine which thread is 
> producing which log entries.
> We should
> 1) add the thread id as the log prefix in the StreamThread logger, as well as 
> in its contained StreamPartitionAssignor.
> 2) add the task id as the log prefix in StreamTask / StandbyTask, as well as 
> in their contained RecordCollector and ProcessorStateManager.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3478) Finer Stream Flow Control

2016-08-16 Thread Bill Bejeck (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15422907#comment-15422907
 ] 

Bill Bejeck commented on KAFKA-3478:


Is this task still available, and is the feature still in the current 
plan/desired to get done?

> Finer Stream Flow Control
> -
>
> Key: KAFKA-3478
> URL: https://issues.apache.org/jira/browse/KAFKA-3478
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Guozhang Wang
>  Labels: user-experience
> Fix For: 0.10.1.0
>
>
> Today we have an event-time based flow control mechanism in order to 
> synchronize multiple input streams in a best-effort manner:
> http://docs.confluent.io/3.0.0/streams/architecture.html#flow-control-with-timestamps
> However, there are some use cases where users would like to have finer 
> control of the input streams, for example, with two input streams, one of 
> them always reading from offset 0 upon (re)-starting, and the other reading 
> from the log end offset.
> Today we only have one consumer config "auto.offset.reset" to control that 
> behavior, which means all streams are read either from "earliest" or "latest".
> We should consider how to improve these settings to allow users to have finer 
> control over this behavior.
> =
> A finer flow control could also be used to allow for populating a {{KTable}} 
> (with an "initial" state) before starting the actual processing (this feature 
> was asked for on the mailing list multiple times already). Even if it is 
> quite hard to define *when* the initial populating phase should end, this 
> might still be useful. There would be the following possibilities:
>  1) an initial fixed time period for populating
>(it might be hard for a user to estimate the correct value)
>  2) an "idle" period, ie, if no update to a KTable for a certain time is
> done, we consider it as populated
>  3) a timestamp cut off point, ie, all records with an older timestamp
> belong to the initial populating phase
>  4) a throughput threshold, ie, if the populating frequency falls below
> the threshold, the KTable is considered "finished"
>  5) maybe something else ??
> The API might look something like this
> {noformat}
> KTable table = builder.table("topic", 1000); // populate the table without 
> reading any other topics until we see one record with timestamp 1000.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-8331) Add system test for enabling static membership on KStream

2019-06-07 Thread Bill Bejeck (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8331.

Resolution: Fixed

> Add system test for enabling static membership on KStream
> -
>
> Key: KAFKA-8331
> URL: https://issues.apache.org/jira/browse/KAFKA-8331
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Affects Versions: 2.4.0
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Major
> Fix For: 2.4.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-8501) Remove key and value from exception message

2019-06-11 Thread Bill Bejeck (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8501.

Resolution: Fixed

> Remove key and value from exception message
> ---
>
> Key: KAFKA-8501
> URL: https://issues.apache.org/jira/browse/KAFKA-8501
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Badai Aqrandista
>Assignee: Carlos Manuel Duclos Vergara
>Priority: Major
>  Labels: easy-fix, newbie
> Fix For: 2.4.0
>
>
> KAFKA-7510 moves the WARN messages that contain key and value to TRACE level. 
> But the exceptions still contain key and value. These are the two in 
> RecordCollectorImpl:
>  
> [https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/processor/internals/RecordCollectorImpl.java#L185-L196]
>  
> [https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/processor/internals/RecordCollectorImpl.java#L243-L254]
>  
> Can these be modified as well to remove the key and value from the error 
> message, since we don't know at what log level it will be printed?
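> 
> A sketch of the requested change (names are illustrative, not the actual 
> RecordCollectorImpl code): keep the key and value at TRACE only, and leave 
> them out of the exception message entirely.
> {code:java}
> import org.apache.kafka.streams.errors.StreamsException;
> import org.slf4j.Logger;
> import org.slf4j.LoggerFactory;
> 
> public class SanitizedErrorSketch {
> 
>     private static final Logger log = LoggerFactory.getLogger(SanitizedErrorSketch.class);
> 
>     static void recordSendError(final String topic, final Object key,
>                                 final Object value, final Exception cause) {
>         if (log.isTraceEnabled()) {
>             // Sensitive fields appear only at TRACE, matching what
>             // KAFKA-7510 did for the WARN messages.
>             log.trace("Failed to send record to topic {}: key={} value={}", topic, key, value);
>         }
>         throw new StreamsException(
>             String.format("Error sending record to topic %s (key and value omitted)", topic),
>             cause);
>     }
> }
> {code}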



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6874) Add Configuration Allowing for Optional Topology Optimization

2019-06-14 Thread Bill Bejeck (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-6874.

   Resolution: Duplicate
Fix Version/s: 2.0.0

Duplicate of https://issues.apache.org/jira/browse/KAFKA-6935

Resolved with PR [https://github.com/apache/kafka/pull/5071]

> Add Configuration Allowing for Optional Topology Optimization 
> --
>
> Key: KAFKA-6874
> URL: https://issues.apache.org/jira/browse/KAFKA-6874
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Bill Bejeck
>Assignee: Bill Bejeck
>Priority: Major
> Fix For: 2.0.0
>
>
> With the release of 2.0, Streams will introduce topology optimization.  We 
> should provide a config, with a default value of false, allowing users to 
> enable/disable optimization.
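> 
> For illustration, opting in with the config that eventually shipped looks 
> like this (constant names as in the released StreamsConfig; the application 
> id and bootstrap address are placeholders):
> {code:java}
> import java.util.Properties;
> import org.apache.kafka.streams.StreamsConfig;
> 
> final Properties props = new Properties();
> props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");
> props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
> // The default is StreamsConfig.NO_OPTIMIZATION, i.e. optimization disabled.
> props.put(StreamsConfig.TOPOLOGY_OPTIMIZATION, StreamsConfig.OPTIMIZE);
> {code}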



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8558) KIP-479 - Add Materialized Overload to KStream#Join

2019-06-18 Thread Bill Bejeck (JIRA)
Bill Bejeck created KAFKA-8558:
--

 Summary: KIP-479 - Add Materialized Overload to KStream#Join 
 Key: KAFKA-8558
 URL: https://issues.apache.org/jira/browse/KAFKA-8558
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Bill Bejeck
Assignee: Bill Bejeck
 Fix For: 2.4.0


To prevent a topology incompatibility with the release of 2.4 and the naming of 
join operations, we'll add an overloaded KStream#join method accepting a 
Materialized parameter. The overloads will apply to all flavors of KStream#join 
(inner, left, and outer). 

Additionally, new methods withQueryingEnabled and withQueryingDisabled are 
going to be added to Materialized.
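
A usage sketch of the proposed overload (hypothetical as written here: the 
exact signature is what KIP-479 is to settle, and the types and store name 
below are illustrative only):
{code:java}
// Hypothetical: KStream#join overload taking a Materialized parameter, per
// this ticket's proposal. Naming the store keeps the generated state store
// topics stable across topology changes.
KStream<String, Long> joined = left.join(
    right,
    (v1, v2) -> v1 + v2,
    JoinWindows.of(Duration.ofMinutes(5)),
    Materialized.as("sum-join-store"));   // proposed parameter
{code}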



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-8615) Change to track partition time breaks TimestampExtractor

2019-06-28 Thread Bill Bejeck (JIRA)
Bill Bejeck created KAFKA-8615:
--

 Summary: Change to track partition time breaks TimestampExtractor
 Key: KAFKA-8615
 URL: https://issues.apache.org/jira/browse/KAFKA-8615
 Project: Kafka
  Issue Type: Bug
Affects Versions: 2.3.0
Reporter: Bill Bejeck
Assignee: Bill Bejeck


From the users mailing list:
{noformat}
I am testing the new version 2.3 for Kafka Streams specifically. I have
noticed that now, the implementation of the method extract from the
interface org.apache.kafka.streams.processor.TimestampExtractor

public long extract(ConsumerRecord<Object, Object> record, long previousTimestamp)


is always returning -1 as value.


Previous version 2.2.1 was returning the correct value for the record
partition.
{noformat}
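
For context, a typical custom extractor of the kind affected looks like the 
following sketch (class name illustrative); it uses the second argument as a 
fallback, which is why a constant -1 breaks it:
{code:java}
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.processor.TimestampExtractor;

public class PayloadTimestampExtractor implements TimestampExtractor {

    @Override
    public long extract(final ConsumerRecord<Object, Object> record,
                        final long previousTimestamp) {
        final long ts = record.timestamp();
        if (ts >= 0) {
            return ts;
        }
        // Fall back to the partition time; if that is always -1, the record
        // is rejected or processing fails, depending on configuration.
        return previousTimestamp;
    }
}
{code}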



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-5998) /.checkpoint.tmp Not found exception

2019-07-12 Thread Bill Bejeck (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-5998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-5998.

Resolution: Fixed

> /.checkpoint.tmp Not found exception
> 
>
> Key: KAFKA-5998
> URL: https://issues.apache.org/jira/browse/KAFKA-5998
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.11.0.0, 0.11.0.1, 2.1.1
>Reporter: Yogesh BG
>Assignee: John Roesler
>Priority: Critical
> Attachments: 5998.v1.txt, 5998.v2.txt, Kafka5998.zip, Topology.txt, 
> exc.txt, props.txt, streams.txt
>
>
> I have one kafka broker and one kafka stream running... I am running its 
> since two days under load of around 2500 msgs per second.. On third day am 
> getting below exception for some of the partitions, I have 16 partitions only 
> 0_0 and 0_1 gives this error
> {{09:43:25.955 [ks_0_inst-StreamThread-6] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_1/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:267)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:201)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:260)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask.commit(StreamTask.java:254)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks$1.apply(AssignedTasks.java:322)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.applyToRunningTasks(AssignedTasks.java:415)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.AssignedTasks.commit(AssignedTasks.java:314)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.commitAll(StreamThread.java:700)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.maybeCommit(StreamThread.java:683)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:523)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:480)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:457)
>  [rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> 09:43:25.974 [ks_0_inst-StreamThread-15] WARN  
> o.a.k.s.p.i.ProcessorStateManager - Failed to write checkpoint file to 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint:
> java.io.FileNotFoundException: 
> /data/kstreams/rtp-kafkastreams/0_0/.checkpoint.tmp (No such file or 
> directory)
> at java.io.FileOutputStream.open(Native Method) ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:221) 
> ~[na:1.7.0_111]
> at java.io.FileOutputStream.<init>(FileOutputStream.java:171) 
> ~[na:1.7.0_111]
> at 
> org.apache.kafka.streams.state.internals.OffsetCheckpoint.write(OffsetCheckpoint.java:73)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.checkpoint(ProcessorStateManager.java:324)
>  ~[rtp-kafkastreams-1.0-SNAPSHOT-jar-with-dependencies.jar:na]
> at 
> org.apache.kafka.streams.processor.internals.StreamTask$1.run(StreamTask.java:267)
>  [rtp-kafkastreams-1.

[jira] [Resolved] (KAFKA-8198) KStreams testing docs use non-existent method "pipe"

2019-07-12 Thread Bill Bejeck (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8198.

Resolution: Fixed

> KStreams testing docs use non-existent method "pipe"
> 
>
> Key: KAFKA-8198
> URL: https://issues.apache.org/jira/browse/KAFKA-8198
> Project: Kafka
>  Issue Type: Bug
>  Components: documentation, streams
>Affects Versions: 1.1.1, 2.0.1, 2.2.0, 2.1.1
>Reporter: Michael Drogalis
>Assignee: Slim Ouertani
>Priority: Minor
>  Labels: documentation, newbie
>
> In [the testing docs for 
> KStreams|https://kafka.apache.org/20/documentation/streams/developer-guide/testing.html],
>  we use the following code snippet:
> {code:java}
> ConsumerRecordFactory factory = new 
> ConsumerRecordFactory<>("input-topic", new StringSerializer(), new 
> IntegerSerializer());
> testDriver.pipe(factory.create("key", 42L));
> {code}
> We should correct the docs to use the pipeInput method.
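> The corrected form (assuming the same testDriver set up earlier in the docs; 
> note the value must also be an Integer to match the IntegerSerializer) would 
> be:
> {code:java}
> ConsumerRecordFactory<String, Integer> factory = new 
> ConsumerRecordFactory<>("input-topic", new StringSerializer(), new 
> IntegerSerializer());
> testDriver.pipeInput(factory.create("key", 42));
> {code}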



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (KAFKA-8666) Improve Documentation on usage of Materialized config object

2019-07-15 Thread Bill Bejeck (JIRA)
Bill Bejeck created KAFKA-8666:
--

 Summary: Improve Documentation on usage of Materialized config 
object
 Key: KAFKA-8666
 URL: https://issues.apache.org/jira/browse/KAFKA-8666
 Project: Kafka
  Issue Type: Improvement
  Components: documentation, streams
Reporter: Bill Bejeck


When using the Materialized object if the user wants to name the statestore with
{code:java}
Materialized.as("MyStoreName"){code}
then subsequently provide the key and value serde the calls to do so must take 
the form of 
{code:java}
Materialized.as("MyStoreName").withKeySerde(keySerde).withValueSerde(valueSerde)
{code}
If users do the following 
{code:java}
Materialized.as("MyStoreName").with(keySerde, valueSerde)
{code}
the Materialized instance created by the "as(storeName)" call is replaced by a 
new Materialized instance resulting from the "with(...)" call and any 
configuration on the first Materialized instance is lost.
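
To make the difference concrete (the store name and serdes here are just 
examples):
{code:java}
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;

// Correct: chain the serde setters on the named instance, keeping the name.
Materialized<String, Long, KeyValueStore<Bytes, byte[]>> named =
    Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("MyStoreName")
        .withKeySerde(Serdes.String())
        .withValueSerde(Serdes.Long());

// Lossy: with(...) is a static factory, so this builds a brand-new instance
// and the store name configured by as("MyStoreName") is silently discarded.
Materialized<String, Long, KeyValueStore<Bytes, byte[]>> unnamed =
    Materialized.with(Serdes.String(), Serdes.Long());
{code}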



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Resolved] (KAFKA-8602) StreamThread Dies Because Restore Consumer is not Subscribed to Any Topic

2019-07-15 Thread Bill Bejeck (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8602.

Resolution: Fixed

> StreamThread Dies Because Restore Consumer is not Subscribed to Any Topic
> -
>
> Key: KAFKA-8602
> URL: https://issues.apache.org/jira/browse/KAFKA-8602
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.1.1
>Reporter: Bruno Cadonna
>Assignee: Bruno Cadonna
>Priority: Critical
>
> StreamThread dies with the following exception:
> {code:java}
> java.lang.IllegalStateException: Consumer is not subscribed to any topics or 
> assigned any partitions
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1206)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1199)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.maybeUpdateStandbyTasks(StreamThread.java:1126)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:923)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:805)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:774)
> {code}
> The reason is that the restore consumer is not subscribed to any topic. This 
> happens when a StreamThread gets assigned standby tasks for sub-topologies 
> whose only state stores have logging disabled.
> To reproduce the bug start two applications with one StreamThread and one 
> standby replica each and the following topology. The input topic should have 
> two partitions:
> {code:java}
> final StreamsBuilder builder = new StreamsBuilder();
> final String stateStoreName = "myTransformState";
> final StoreBuilder<KeyValueStore<Integer, Integer>> keyValueStoreBuilder =
>     Stores.keyValueStoreBuilder(Stores.persistentKeyValueStore(stateStoreName),
>             Serdes.Integer(),
>             Serdes.Integer())
>         .withLoggingDisabled();
> builder.addStateStore(keyValueStoreBuilder);
> builder.stream(INPUT_TOPIC, Consumed.with(Serdes.Integer(), Serdes.Integer()))
>     .transform(() -> new Transformer<Integer, Integer, KeyValue<Integer, Integer>>() {
>         private KeyValueStore<Integer, Integer> state;
>
>         @SuppressWarnings("unchecked")
>         @Override
>         public void init(final ProcessorContext context) {
>             state = (KeyValueStore<Integer, Integer>) context.getStateStore(stateStoreName);
>         }
>
>         @Override
>         public KeyValue<Integer, Integer> transform(final Integer key, final Integer value) {
>             return new KeyValue<>(key, value);
>         }
>
>         @Override
>         public void close() {}
>     }, stateStoreName)
>     .to(OUTPUT_TOPIC);
> {code}
> Both StreamThreads should die with the above exception.
> The root cause is that standby tasks are created although all state stores of 
> the sub-topology have logging disabled.  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Resolved] (KAFKA-8689) Cannot Name Join State Store Topics

2019-07-19 Thread Bill Bejeck (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8689.

Resolution: Duplicate

Duplicate of https://issues.apache.org/jira/browse/KAFKA-8558

> Cannot Name Join State Store Topics
> ---
>
> Key: KAFKA-8689
> URL: https://issues.apache.org/jira/browse/KAFKA-8689
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.3.0
>Reporter: Simon Dean
>Priority: Major
>
> Performing a join on two KStreams produces two state store topics.  
> Currently the names of the state store topics are auto-generated and cannot 
> be overridden. 
> Example code:
>  
> {code:java}
> import org.apache.kafka.clients.producer.KafkaProducer;
> import org.apache.kafka.clients.producer.ProducerConfig;
> import org.apache.kafka.clients.producer.ProducerRecord;
> import org.apache.kafka.common.serialization.LongSerializer;
> import org.apache.kafka.common.serialization.Serde;
> import org.apache.kafka.common.serialization.Serdes;
> import org.apache.kafka.common.serialization.StringSerializer;
> import org.apache.kafka.streams.KafkaStreams;
> import org.apache.kafka.streams.StreamsBuilder;
> import org.apache.kafka.streams.StreamsConfig;
> import org.apache.kafka.streams.kstream.Consumed;
> import org.apache.kafka.streams.kstream.JoinWindows;
> import org.apache.kafka.streams.kstream.Joined;
> import org.apache.kafka.streams.kstream.KStream;
> import java.time.Duration;
> import java.util.HashMap;
> import java.util.Map;
> import java.util.Properties;
> import java.util.concurrent.TimeUnit;
>
> public class JoinTopicNamesExample {
>
>     public static void main(final String[] args) throws InterruptedException {
>         new Thread(() -> {
>             produce(args);
>         }).run();
>         new Thread(() -> {
>             try {
>                 streams(args);
>             } catch (InterruptedException e) {
>                 e.printStackTrace();
>             }
>         }).run();
>     }
>
>     private static void produce(String[] args) {
>         Map<String, Object> props = new HashMap<>();
>         props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
>         props.put(ProducerConfig.RETRIES_CONFIG, 0);
>         props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
>         props.put(ProducerConfig.LINGER_MS_CONFIG, 1);
>         props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
>         props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
>         props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, LongSerializer.class);
>         KafkaProducer<String, Long> producer = new KafkaProducer<>(props);
>         for (long i = 0; i < 10; i++) {
>             producer.send(new ProducerRecord<>("left", Long.toString(i), i));
>         }
>         for (long i = 0; i < 10; i++) {
>             producer.send(new ProducerRecord<>("right", Long.toString(i), i));
>         }
>     }
>
>     private static void streams(String[] args) throws InterruptedException {
>         final String bootstrapServers = args.length > 0 ? args[0] : "localhost:9092";
>         final Properties streamsConfiguration = new Properties();
>         // Give the Streams application a unique name.  The name must be unique
>         // in the Kafka cluster against which the application is run.
>         streamsConfiguration.put(StreamsConfig.APPLICATION_ID_CONFIG, "join-topic-names-example");
>         streamsConfiguration.put(StreamsConfig.CLIENT_ID_CONFIG, "user-region-lambda-example-client");
>         // Where to find Kafka broker(s).
>         streamsConfiguration.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
>         // Specify default (de)serializers for record keys and for record values.
>         streamsConfiguration.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG,
>             Serdes.String().getClass().getName());
>         streamsConfiguration.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG,
>             Serdes.Long().getClass().getName());
>         // Records should be flushed every 10 seconds. This is less than the default
>         // in order to keep this example interactive.
>         streamsConfiguration.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 10 * 1000);
>         final Serde<String> stringSerde = Serdes.String();
>         final Serde<Long> longSerde = Serdes.Long();
>         final StreamsBuilder builder = new StreamsBuilder();
>         final KStream<String, Long> left = builder.stream("left", Consumed.with(stringSerde, longSerde));
>         final KStream<String, Long> right = builder.stream("right", Consumed.with(stringSerde, longSerde));
>         left.join(
>             right,
>             (value1, value2) -> value1 + value2,
>             JoinWindows.of(Duration.ofHours(1)),
>             Joined.as("sum"));
> 

[jira] [Created] (KAFKA-8692) Transient failure in kafka.api.SaslScramSslEndToEndAuthorizationTest.testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl

2019-07-19 Thread Bill Bejeck (JIRA)
Bill Bejeck created KAFKA-8692:
--

 Summary: Transient failure in 
kafka.api.SaslScramSslEndToEndAuthorizationTest.testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl
 Key: KAFKA-8692
 URL: https://issues.apache.org/jira/browse/KAFKA-8692
 Project: Kafka
  Issue Type: Bug
  Components: core, unit tests
Reporter: Bill Bejeck


Failed in build [https://builds.apache.org/job/kafka-pr-jdk11-scala2.13/420/]

 
{noformat}
Error Message
org.scalatest.exceptions.TestFailedException: Consumed 0 records before timeout 
instead of the expected 1 records


Stacktrace
org.scalatest.exceptions.TestFailedException: Consumed 0 records before timeout 
instead of the expected 1 records
at 
org.scalatest.Assertions.newAssertionFailedException(Assertions.scala:530)
at 
org.scalatest.Assertions.newAssertionFailedException$(Assertions.scala:529)
at 
org.scalatest.Assertions$.newAssertionFailedException(Assertions.scala:1389)
at org.scalatest.Assertions.fail(Assertions.scala:1091)
at org.scalatest.Assertions.fail$(Assertions.scala:1087)
at org.scalatest.Assertions$.fail(Assertions.scala:1389)
at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:822)
at kafka.utils.TestUtils$.pollRecordsUntilTrue(TestUtils.scala:781)
at 
kafka.utils.TestUtils$.pollUntilAtLeastNumRecords(TestUtils.scala:1309)
at kafka.utils.TestUtils$.consumeRecords(TestUtils.scala:1317)
at 
kafka.api.EndToEndAuthorizationTest.consumeRecords(EndToEndAuthorizationTest.scala:522)
at 
kafka.api.EndToEndAuthorizationTest.testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl(EndToEndAuthorizationTest.scala:361)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:365)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:330)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:78)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:328)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:65)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:292)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
at org.junit.runners.ParentRunner.run(ParentRunner.java:412)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
at 
org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at jdk.internal.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.

[jira] [Created] (KAFKA-8744) Add Support to Scala API for KIP-307

2019-08-01 Thread Bill Bejeck (JIRA)
Bill Bejeck created KAFKA-8744:
--

 Summary: Add Support to Scala API for KIP-307
 Key: KAFKA-8744
 URL: https://issues.apache.org/jira/browse/KAFKA-8744
 Project: Kafka
  Issue Type: Task
  Components: streams
Affects Versions: 2.4.0
Reporter: Bill Bejeck
Assignee: Florian Hussonnois
 Fix For: 2.4.0


With the ability to provide names for all operators in a Kafka Streams topology 
([https://cwiki.apache.org/confluence/display/KAFKA/KIP-307%3A+Allow+to+define+custom+processor+names+with+KStreams+DSL])
 coming in the 2.4 release, we also need to add this new feature to the Streams 
Scala API.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Resolved] (KAFKA-8878) Flaky Test AssignedStreamsTasksTest#shouldCloseCleanlyWithSuspendedTaskAndEOS

2019-09-10 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8878.

Resolution: Fixed

> Flaky Test AssignedStreamsTasksTest#shouldCloseCleanlyWithSuspendedTaskAndEOS
> -
>
> Key: KAFKA-8878
> URL: https://issues.apache.org/jira/browse/KAFKA-8878
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, unit tests
>Affects Versions: 2.4.0
>Reporter: Matthias J. Sax
>Assignee: Chris Pettitt
>Priority: Major
>  Labels: flaky-test
> Fix For: 2.4.0
>
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3887/tests]
> {quote}java.lang.AssertionError: Expected no ERROR message while closing 
> assignedTasks, but got 1. First error: [AdminClient clientId=adminclient-67] 
> Connection to node -1 (localhost/127.0.0.1:8080) could not be established. 
> Broker may not be available.. Cause: N/A
> at org.junit.Assert.fail(Assert.java:89)
> at 
> org.apache.kafka.streams.processor.internals.AssignedStreamsTasksTest.shouldCloseCleanlyWithSuspendedTaskAndEOS(AssignedStreamsTasksTest.java:555){quote}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Resolved] (KAFKA-8859) Refactor Cache-level Streams Metrics

2019-09-23 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8859?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8859.

Resolution: Fixed

> Refactor Cache-level Streams Metrics
> 
>
> Key: KAFKA-8859
> URL: https://issues.apache.org/jira/browse/KAFKA-8859
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 2.4.0
>Reporter: Bruno Cadonna
>Assignee: Bruno Cadonna
>Priority: Major
> Fix For: 2.4.0
>
>
> Refactoring of cache-level Streams metrics according to KIP-444.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-6958) Allow to define custom processor names with KStreams DSL

2019-09-24 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-6958.

Resolution: Fixed

Thanks again [~fhussonnois] for your hard work and persistence in seeing this 
valuable contribution through to completion!

> Allow to define custom processor names with KStreams DSL
> 
>
> Key: KAFKA-6958
> URL: https://issues.apache.org/jira/browse/KAFKA-6958
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 1.1.0
>Reporter: Florian Hussonnois
>Assignee: Florian Hussonnois
>Priority: Minor
>  Labels: kip
> Fix For: 2.4.0
>
>
> Currently, while building a new Topology through the KStreams DSL, the 
> processors are automatically named.
> The generated names are prefixed depending on the operation (i.e. 
> KSTREAM-SOURCE, KSTREAM-FILTER, KSTREAM-MAP, etc.).
> To debug/understand a topology it is possible to display the processor 
> lineage with the method Topology#describe(). However, a complex topology with 
> dozens of operations can be hard to understand if the processor names are not 
> relevant.
> It would be useful to be able to set more meaningful names. For example, a 
> processor name could describe the business rule performed by a map() 
> operation.
> [KIP-307|https://cwiki.apache.org/confluence/display/KAFKA/KIP-307%3A+Allow+to+define+custom+processor+names+with+KStreams+DSL]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (KAFKA-8807) Flaky Test GlobalThreadShutDownOrderTest.shouldFinishGlobalStoreOperationOnShutDown

2019-09-28 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck reopened KAFKA-8807:


Reopening this as I am going to refactor the test to check that close is only 
called once during shutdown.

> Flaky Test 
> GlobalThreadShutDownOrderTest.shouldFinishGlobalStoreOperationOnShutDown
> ---
>
> Key: KAFKA-8807
> URL: https://issues.apache.org/jira/browse/KAFKA-8807
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Sophie Blee-Goldman
>Assignee: Bill Bejeck
>Priority: Major
>  Labels: flaky-test
>
> [https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/24229/testReport/junit/org.apache.kafka.streams.integration/GlobalThreadShutDownOrderTest/shouldFinishGlobalStoreOperationOnShutDown/]
>  
> h3. Error Message
> java.lang.AssertionError: expected:<[1, 2, 3, 4]> but was:<[1, 2, 3, 4, 1, 2, 
> 3, 4]>
> h3. Stacktrace
> java.lang.AssertionError: expected:<[1, 2, 3, 4]> but was:<[1, 2, 3, 4, 1, 2, 
> 3, 4]> at org.junit.Assert.fail(Assert.java:89) at 
> org.junit.Assert.failNotEquals(Assert.java:835) at 
> org.junit.Assert.assertEquals(Assert.java:120) at 
> org.junit.Assert.assertEquals(Assert.java:146) at 
> org.apache.kafka.streams.integration.GlobalThreadShutDownOrderTest.shouldFinishGlobalStoreOperationOnShutDown(GlobalThreadShutDownOrderTest.java:138)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305) at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:365) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>  at org.junit.runners.ParentRunner$4.run(ParentRunner.java:330) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:78) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:328) at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:65) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:292) at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54) at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20) at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:412) at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
>  at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
>  at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
>  at 
> org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
>  at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>  at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>  at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>  at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>  at com.sun.proxy.$Proxy2.processTestClass(Unknown Source) at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:118)
>  at sun.reflect.NativeMethodAccessorImpl.invok

[jira] [Resolved] (KAFKA-3705) Support non-key joining in KTable

2019-10-03 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-3705.

Resolution: Fixed

> Support non-key joining in KTable
> -
>
> Key: KAFKA-3705
> URL: https://issues.apache.org/jira/browse/KAFKA-3705
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Adam Bellemare
>Priority: Major
>  Labels: api, kip
> Fix For: 2.4.0
>
>
> KIP-213: 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-213+Support+non-key+joining+in+KTable]
> Today in Kafka Streams DSL, KTable joins are only based on keys. If users 
> want to join a KTable A by key {{a}} with another KTable B by key {{b}} but 
> with a "foreign key" {{a}}, and assuming they are read from two topics which 
> are partitioned on {{a}} and {{b}} respectively, they need to do the 
> following pattern:
> {code:java}
> tableB' = tableB.groupBy(/* select on field "a" */).agg(...); // now tableB' 
> is partitioned on "a"
> tableA.join(tableB', joiner);
> {code}
> Even if these two tables are read from two topics which are already 
> partitioned on {{a}}, users still need to do the pre-aggregation in order to 
> make the two joining streams be on the same key. This is a drawback in 
> programmability, and we should fix it.
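> 
> With KIP-213 the pre-aggregation is no longer needed; the foreign key is 
> extracted directly in the join call. A usage sketch (the Order/Customer 
> types and topic names are illustrative):
> {code:java}
> KTable<String, Order> orders = builder.table("orders");
> KTable<String, Customer> customers = builder.table("customers");
> 
> KTable<String, EnrichedOrder> enriched = orders.join(
>     customers,
>     order -> order.customerId,                        // foreign-key extractor
>     (order, customer) -> new EnrichedOrder(order, customer));
> {code}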



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9002) org.apache.kafka.streams.integration.RegexSourceIntegrationTest.testRegexMatchesTopicsAWhenCreated

2019-10-08 Thread Bill Bejeck (Jira)
Bill Bejeck created KAFKA-9002:
--

 Summary: 
org.apache.kafka.streams.integration.RegexSourceIntegrationTest.testRegexMatchesTopicsAWhenCreated
 Key: KAFKA-9002
 URL: https://issues.apache.org/jira/browse/KAFKA-9002
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Bill Bejeck


[https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/25603/testReport/junit/org.apache.kafka.streams.integration/RegexSourceIntegrationTest/testRegexMatchesTopicsAWhenCreated/]
{noformat}
Error Message
java.lang.AssertionError: Condition not met within timeout 15000. Stream tasks 
not updated

Stacktrace
java.lang.AssertionError: Condition not met within timeout 15000. Stream tasks 
not updated
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:376)
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:336)
at 
org.apache.kafka.streams.integration.RegexSourceIntegrationTest.testRegexMatchesTopicsAWhenCreated(RegexSourceIntegrationTest.java:175)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:365)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:330)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:78)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:328)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:65)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:292)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
at org.junit.runners.ParentRunner.run(ParentRunner.java:412)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
at 
org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:118)
at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
at 

[jira] [Resolved] (KAFKA-8944) Compiler Warning

2019-10-08 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8944.

Resolution: Fixed

> Compiler Warning
> 
>
> Key: KAFKA-8944
> URL: https://issues.apache.org/jira/browse/KAFKA-8944
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: huxihx
>Priority: Minor
>  Labels: scala
>
> When building Kafka Streams, we get the following compiler warning that we 
> should fix:
> {code:java}
> scala/src/main/scala/org/apache/kafka/streams/scala/kstream/KTable.scala:24: 
> imported `Suppressed' is permanently hidden by definition of object 
> Suppressed in package kstream
> import org.apache.kafka.streams.kstream.{Suppressed, 
> ValueTransformerWithKeySupplier, KTable => KTableJ}
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9006) Flaky Test KTableKTableForeignKeyJoinIntegrationTest.doLeftJoinFilterOutRapidlyChangingForeignKeyValues

2019-10-09 Thread Bill Bejeck (Jira)
Bill Bejeck created KAFKA-9006:
--

 Summary: Flaky Test 
KTableKTableForeignKeyJoinIntegrationTest.doLeftJoinFilterOutRapidlyChangingForeignKeyValues
 Key: KAFKA-9006
 URL: https://issues.apache.org/jira/browse/KAFKA-9006
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Bill Bejeck


{noformat}
Error Message
array lengths differed, expected.length=2 actual.length=1; arrays first 
differed at element [0]; expected: but 
was:

Stacktrace
array lengths differed, expected.length=2 actual.length=1; arrays first 
differed at element [0]; expected: but 
was: at 
org.junit.internal.ComparisonCriteria.arrayEquals(ComparisonCriteria.java:78) 
at 
org.junit.internal.ComparisonCriteria.arrayEquals(ComparisonCriteria.java:28) 
at org.junit.Assert.internalArrayEquals(Assert.java:534) at 
org.junit.Assert.assertArrayEquals(Assert.java:285) at 
org.junit.Assert.assertArrayEquals(Assert.java:300) at 
org.apache.kafka.streams.integration.KTableKTableForeignKeyJoinIntegrationTest.doLeftJoinFilterOutRapidlyChangingForeignKeyValues(KTableKTableForeignKeyJoinIntegrationTest.java:585)
 at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method) at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:365)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:330)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:78)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:328)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:65)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:292)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
at org.junit.runners.ParentRunner.run(ParentRunner.java:412)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:27)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:330)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:78)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:328)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:65)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:292)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
at org.junit.runners.ParentRunner.run(ParentRunner.java:412)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
at org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at jdk.internal.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java

[jira] [Reopened] (KAFKA-8460) Flaky Test PlaintextConsumerTest#testLowMaxFetchSizeForRequestAndPartition

2019-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck reopened KAFKA-8460:


Test failed again in 
[https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/25644/testReport/junit/kafka.api/PlaintextConsumerTest/testLowMaxFetchSizeForRequestAndPartition/]
{noformat}
Error Message
org.scalatest.exceptions.TestFailedException: Timed out before consuming expected 2700 records. The number consumed was 1560.

Stacktrace
org.scalatest.exceptions.TestFailedException: Timed out before consuming expected 2700 records. The number consumed was 1560.
at org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:530)
at org.scalatest.Assertions$.newAssertionFailedException(Assertions.scala:1389)
at org.scalatest.Assertions$class.fail(Assertions.scala:1091)
at org.scalatest.Assertions$.fail(Assertions.scala:1389)
at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:841)
at kafka.utils.TestUtils$.pollRecordsUntilTrue(TestUtils.scala:792)
at kafka.api.AbstractConsumerTest.consumeRecords(AbstractConsumerTest.scala:158)
at kafka.api.PlaintextConsumerTest.testLowMaxFetchSizeForRequestAndPartition(PlaintextConsumerTest.scala:802)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:365)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:330)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:78)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:328)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:65)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:292)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
at org.junit.runners.ParentRunner.run(ParentRunner.java:412)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
at org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:118)
at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.l

[jira] [Created] (KAFKA-9007) Flaky Test kafka.api.SaslPlaintextConsumerTest.testCoordinatorFailover

2019-10-09 Thread Bill Bejeck (Jira)
Bill Bejeck created KAFKA-9007:
--

 Summary: Flaky Test 
kafka.api.SaslPlaintextConsumerTest.testCoordinatorFailover
 Key: KAFKA-9007
 URL: https://issues.apache.org/jira/browse/KAFKA-9007
 Project: Kafka
  Issue Type: Test
  Components: core
Reporter: Bill Bejeck


Failed in 
[https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/25644/testReport/junit/kafka.api/SaslPlaintextConsumerTest/testCoordinatorFailover/]

 
{noformat}
Error Message
java.lang.AssertionError: expected: but was:

Stacktrace
java.lang.AssertionError: expected: but was:
at org.junit.Assert.fail(Assert.java:89)
at org.junit.Assert.failNotEquals(Assert.java:835)
at org.junit.Assert.assertEquals(Assert.java:120)
at org.junit.Assert.assertEquals(Assert.java:146)
at kafka.api.AbstractConsumerTest.sendAndAwaitAsyncCommit(AbstractConsumerTest.scala:195)
at kafka.api.AbstractConsumerTest.ensureNoRebalance(AbstractConsumerTest.scala:302)
at kafka.api.BaseConsumerTest.testCoordinatorFailover(BaseConsumerTest.scala:76)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:365)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:330)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:78)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:328)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:65)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:292)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
at org.junit.runners.ParentRunner.run(ParentRunner.java:412)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
at org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:118)
at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
at org.gradle

[jira] [Created] (KAFKA-9008) Flaky Test kafka.api.SaslSslAdminClientIntegrationTest.testDescribeConfigsForTopic

2019-10-09 Thread Bill Bejeck (Jira)
Bill Bejeck created KAFKA-9008:
--

 Summary: Flaky Test 
kafka.api.SaslSslAdminClientIntegrationTest.testDescribeConfigsForTopic
 Key: KAFKA-9008
 URL: https://issues.apache.org/jira/browse/KAFKA-9008
 Project: Kafka
  Issue Type: Test
  Components: core
Reporter: Bill Bejeck


Failure seen in 
[https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/25644/testReport/junit/kafka.api/SaslSslAdminClientIntegrationTest/testDescribeConfigsForTopic/]

 
{noformat}
Error Message
org.junit.runners.model.TestTimedOutException: test timed out after 12 milliseconds

Stacktrace
org.junit.runners.model.TestTimedOutException: test timed out after 12 milliseconds
at java.io.FileDescriptor.sync(Native Method)
at jdbm.recman.TransactionManager.sync(TransactionManager.java:385)
at jdbm.recman.TransactionManager.commit(TransactionManager.java:368)
at jdbm.recman.RecordFile.commit(RecordFile.java:320)
at jdbm.recman.PageManager.commit(PageManager.java:289)
at jdbm.recman.BaseRecordManager.commit(BaseRecordManager.java:419)
at jdbm.recman.CacheRecordManager.commit(CacheRecordManager.java:350)
at org.apache.directory.server.core.partition.impl.btree.jdbm.JdbmTable.sync(JdbmTable.java:976)
at org.apache.directory.server.core.partition.impl.btree.jdbm.JdbmTable.close(JdbmTable.java:961)
at org.apache.directory.server.core.partition.impl.btree.jdbm.JdbmIndex.close(JdbmIndex.java:571)
at org.apache.directory.server.core.partition.impl.btree.AbstractBTreePartition.doDestroy(AbstractBTreePartition.java:524)
at org.apache.directory.server.core.partition.impl.btree.jdbm.JdbmPartition.doDestroy(JdbmPartition.java:744)
at org.apache.directory.server.core.api.partition.AbstractPartition.destroy(AbstractPartition.java:153)
at org.apache.directory.server.core.shared.partition.DefaultPartitionNexus.removeContextPartition(DefaultPartitionNexus.java:886)
at org.apache.directory.server.core.shared.partition.DefaultPartitionNexus.doDestroy(DefaultPartitionNexus.java:287)
at org.apache.directory.server.core.api.partition.AbstractPartition.destroy(AbstractPartition.java:153)
at org.apache.directory.server.core.DefaultDirectoryService.shutdown(DefaultDirectoryService.java:1313)
at kafka.security.minikdc.MiniKdc.stop(MiniKdc.scala:278)
at kafka.api.SaslSetup$class.closeSasl(SaslSetup.scala:120)
at kafka.api.SaslSslAdminClientIntegrationTest.closeSasl(SaslSslAdminClientIntegrationTest.scala:38)
at kafka.api.SaslSslAdminClientIntegrationTest.tearDown(SaslSslAdminClientIntegrationTest.scala:100)
at sun.reflect.GeneratedMethodAccessor29.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:288)
at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:282)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)

Standard Output
Debug is  true storeKey true useTicketCache false useKeyTab true doNotPrompt false ticketCache is null isInitiator true KeyTab is /tmp/kafka5680437396247072679.tmp refreshKrb5Config is false principal is kafka/localh...@example.com tryFirstPass is false useFirstPass is false storePass is false clearPass is false
principal is kafka/localh...@example.com
Will use keytab
Commit Succeeded 

Debug is  true storeKey true useTicketCache false useKeyTab true doNotPrompt false ticketCache is null isInitiator true KeyTab is /tmp/kafka308994997839802486.tmp refreshKrb5Config is false principal is clie...@example.com tryFirstPass is false useFirstPass is false storePass is false clearPass is false
principal is clie...@example.com
Will use keytab
Commit Succeeded 

Debug is  true storeKey true useTicketCache false useKeyTab true doNotPrompt false ticketCache is null isInitiator true KeyTab is /tmp/kafka4996638752651080021.tmp refreshKrb5Config is false principal is kafka/localh...@example.com tryFirstPass is false useFirstPass is false storePass is false clearPass is false
principal is kafka/localh...@example.com
Will use keytab
Comm

[jira] [Created] (KAFKA-9009) Flaky Test kafka.integration.MetricsDuringTopicCreationDeletionTest.testMetricsDuringTopicCreateDelete

2019-10-09 Thread Bill Bejeck (Jira)
Bill Bejeck created KAFKA-9009:
--

 Summary: Flaky Test 
kafka.integration.MetricsDuringTopicCreationDeletionTest.testMetricsDuringTopicCreateDelete
 Key: KAFKA-9009
 URL: https://issues.apache.org/jira/browse/KAFKA-9009
 Project: Kafka
  Issue Type: Test
  Components: core
Reporter: Bill Bejeck


Failure seen in 
[https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/25644/testReport/junit/kafka.integration/MetricsDuringTopicCreationDeletionTest/testMetricsDuringTopicCreateDelete/]

 
{noformat}
Error Message
java.lang.AssertionError: assertion failed: UnderReplicatedPartitionCount not 0: 1

Stacktrace
java.lang.AssertionError: assertion failed: UnderReplicatedPartitionCount not 0: 1
at scala.Predef$.assert(Predef.scala:170)
at kafka.integration.MetricsDuringTopicCreationDeletionTest.testMetricsDuringTopicCreateDelete(MetricsDuringTopicCreationDeletionTest.scala:123)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:365)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:330)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:78)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:328)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:65)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:292)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
at org.junit.runners.ParentRunner.run(ParentRunner.java:412)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
at org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)

[jira] [Created] (KAFKA-9019) Flaky Test kafka.api.SslAdminClientIntegrationTest.testCreateTopicsResponseMetadataAndConfig

2019-10-10 Thread Bill Bejeck (Jira)
Bill Bejeck created KAFKA-9019:
--

 Summary: Flaky Test 
kafka.api.SslAdminClientIntegrationTest.testCreateTopicsResponseMetadataAndConfig
 Key: KAFKA-9019
 URL: https://issues.apache.org/jira/browse/KAFKA-9019
 Project: Kafka
  Issue Type: Test
  Components: core
Reporter: Bill Bejeck


Seen in 
[https://builds.apache.org/job/kafka-pr-jdk11-scala2.13/2492/testReport/junit/kafka.api/SslAdminClientIntegrationTest/testCreateTopicsResponseMetadataAndConfig/]
{noformat}
Error Message
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.

Stacktrace
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
at kafka.api.SaslSslAdminClientIntegrationTest.testCreateTopicsResponseMetadataAndConfig(SaslSslAdminClientIntegrationTest.scala:452)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:288)
at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:282)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.

Standard Output
[2019-10-10 05:41:46,134] ERROR [KafkaApi-2] Error when handling request: clientId=adminclient-123, correlationId=4, api=CREATE_ACLS, version=1, body={creations=[{resource_type=2,resource_name=foobar,resource_pattern_type=3,principal=User:ANONYMOUS,host=*,operation=3,permission_type=3},{resource_type=5,resource_name=transactional_id,resource_pattern_type=3,principal=User:ANONYMOUS,host=*,operation=4,permission_type=3}]} (kafka.server.KafkaApis:76)
org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
Request(processor=1, connectionId=127.0.0.1:34475-127.0.0.1:42444-0, 
session=Session(User:ANONYMOUS,/127.0.0.1), listenerName=ListenerName(SSL), 
securityProtocol=SSL, buffer=null) is not authorized.
[2019-10-10 05:41:46,136] ERROR [KafkaApi-2] Error when handling request: 
clientId=adminclient-123, correlationId=5, api=DELETE_ACLS, version=1, 
body={filters=[{resource_type=2,resource_name=foobar,resource_pattern_type_filter=3,principal=User:ANONYMOUS,host=*,operation=3,permission_type=3},{resource_type=5,resource_name=transactional_id,resource_pattern_type_filter=3,principal=User:ANONYMOUS,host=*,operation=4,permission_type=3}]}
 (kafka.server.KafkaApis:76)
org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
Request(processor=1, connectionId=127.0.0.1:34475-127.0.0.1:42444-0, 
session=Session(User:ANONYMOUS,/127.0.0.1), listenerName=ListenerName(SSL), 
securityProtocol=SSL, buffer=null) is not authorized.
[2019-10-10 05:41:46,241] ERROR [KafkaApi-2] Error when handling request: 
clientId=adminclient-123, correlationId=7, api=DESCRIBE_ACLS, version=1, 
body={resource_type=2,resource_name=*,resource_pattern_type_filter=3,principal=User:*,host=*,operation=2,permission_type=3}
 (kafka.server.KafkaApis:76)
org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
Request(processor=1, connectionId=127.0.0.1:34475-127.0.0.1:42444-0, 
session=Session(User:ANONYMOUS,/127.0.0.1), listenerName=ListenerName(SSL), 
securityProtocol=SSL, buffer=null) is not authorized.
[2019-10-10 05:41:46,243] ERROR [KafkaApi-2] Error 

[jira] [Resolved] (KAFKA-9053) AssignmentInfo#encode hardcodes the LATEST_SUPPORTED_VERSION

2019-10-17 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-9053.

Resolution: Fixed

> AssignmentInfo#encode hardcodes the LATEST_SUPPORTED_VERSION 
> -
>
> Key: KAFKA-9053
> URL: https://issues.apache.org/jira/browse/KAFKA-9053
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Sophie Blee-Goldman
>Assignee: Sophie Blee-Goldman
>Priority: Blocker
> Fix For: 2.1.2, 2.2.2, 2.4.0, 2.3.2
>
>
> We should instead encode the commonlySupportedVersion field. This affects 
> version probing with a subscription change.
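
For context, here is a minimal illustrative sketch of the bug pattern. The class, field, and method names below are hypothetical stand-ins, not the actual {{AssignmentInfo}} code; the point is only that the encoder should write the negotiated {{commonlySupportedVersion}} rather than the {{LATEST_SUPPORTED_VERSION}} constant.

{code:java}
import java.nio.ByteBuffer;

// Hypothetical stand-in for Streams' assignment-info encoding, for illustration only.
public class AssignmentInfoSketch {
    static final int LATEST_SUPPORTED_VERSION = 5;

    final int usedVersion;               // version this assignment is encoded with
    final int commonlySupportedVersion;  // highest version every group member supports

    AssignmentInfoSketch(int usedVersion, int commonlySupportedVersion) {
        this.usedVersion = usedVersion;
        this.commonlySupportedVersion = commonlySupportedVersion;
    }

    ByteBuffer encode() {
        ByteBuffer buf = ByteBuffer.allocate(2 * Integer.BYTES);
        buf.putInt(usedVersion);
        // Bug pattern: writing LATEST_SUPPORTED_VERSION here would ignore what
        // the group negotiated during version probing; write the field instead.
        buf.putInt(commonlySupportedVersion);
        buf.flip();
        return buf;
    }
}
{code}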



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9072) Add Section to Streams Developer Guide for Topology Naming (KIP-307)

2019-10-21 Thread Bill Bejeck (Jira)
Bill Bejeck created KAFKA-9072:
--

 Summary: Add Section to Streams Developer Guide for Topology 
Naming (KIP-307)
 Key: KAFKA-9072
 URL: https://issues.apache.org/jira/browse/KAFKA-9072
 Project: Kafka
  Issue Type: Task
Affects Versions: 2.4.0
Reporter: Bill Bejeck
Assignee: Bill Bejeck
 Fix For: 2.4.0


With KIP-307, users can name operators in a topology. Naming is important because it 
pins state store, changelog topic, and repartition topic names, keeping the 
topology robust when operators are added to or removed from a Kafka Streams DSL 
topology. We should add a section to the developer guide to explain why this is 
important (see the sketch below).
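
As a rough illustration of what the new section could show (a minimal sketch; the topic, store, and processor names here are invented for the example), the DSL accepts names through {{Named}}, {{Grouped}}, and {{Materialized}}:

{code:java}
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Named;
import org.apache.kafka.streams.state.KeyValueStore;

public class NamedTopologyExample {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> input = builder.stream("input-topic");

        input.filter((key, value) -> value != null, Named.as("drop-null-values"))
             .groupByKey(Grouped.as("group-by-key"))  // would name the repartition topic if rekeying required one
             .count(Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("count-store"))
             .toStream()
             .to("output-topic");

        // Named operators keep stable state store, changelog, and repartition
        // topic names even when other operators are added or removed later.
        System.out.println(builder.build().describe());
    }
}
{code}

Printing {{builder.build().describe()}} makes it easy to verify which generated names the explicit names replaced.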



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9098) Name Repartition Filter, Source, and Sink Processors

2019-10-24 Thread Bill Bejeck (Jira)
Bill Bejeck created KAFKA-9098:
--

 Summary: Name Repartition Filter, Source, and Sink Processors
 Key: KAFKA-9098
 URL: https://issues.apache.org/jira/browse/KAFKA-9098
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.3.0, 2.2.0
Reporter: Bill Bejeck
Assignee: Bill Bejeck


When users provide a name for a repartition topic, we should use the same name as 
the base for the repartition topic's filter, source, and sink operators. While 
this does not break a topology, users who provide names for all processors in a 
DSL topology may find the generated names for those filter, source, and sink 
operators inconsistent with their naming approach (see the sketch below).
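
For illustration, a minimal sketch of how to observe this (the topic and operator names are invented, and the exact generated numbering varies by topology): name the repartition topic through {{Grouped}} and inspect the topology description.

{code:java}
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Grouped;

public class RepartitionNamesExample {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        builder.<String, String>stream("purchases")
               // Selecting a new key forces a repartition; Grouped names the topic...
               .selectKey((key, value) -> value)
               .groupByKey(Grouped.as("purchases-by-value"))
               .count()
               .toStream()
               .to("purchase-counts");

        // ...but on the affected versions the repartition topic's internal
        // filter/source/sink processors still show up with generated names
        // such as KSTREAM-FILTER-0000000002 rather than names derived from
        // "purchases-by-value".
        System.out.println(builder.build().describe());
    }
}
{code}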



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8968) Refactor Task-level Metrics

2019-10-25 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8968.

Resolution: Fixed

> Refactor Task-level Metrics
> ---
>
> Key: KAFKA-8968
> URL: https://issues.apache.org/jira/browse/KAFKA-8968
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 2.5.0
>Reporter: Bruno Cadonna
>Assignee: Bruno Cadonna
>Priority: Major
> Fix For: 2.5.0
>
>
> Refactor task-level metrics as proposed in KIP-444.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9077) System Test Failure: StreamsSimpleBenchmarkTest

2019-10-29 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-9077.

Resolution: Fixed

> System Test Failure: StreamsSimpleBenchmarkTest
> ---
>
> Key: KAFKA-9077
> URL: https://issues.apache.org/jira/browse/KAFKA-9077
> Project: Kafka
>  Issue Type: Bug
>  Components: streams, system tests
>Affects Versions: 2.4.0
>Reporter: Manikumar
>Assignee: Bruno Cadonna
>Priority: Minor
> Fix For: 2.5.0
>
>
> StreamsSimpleBenchmarkTest tests are failing on 2.4 and trunk.
> http://confluent-kafka-2-4-system-test-results.s3-us-west-2.amazonaws.com/2019-10-21--001.1571716233--confluentinc--2.4--cb4944f/report.html



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8980) Refactor State-Store-level Metrics

2019-10-30 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8980.

Resolution: Fixed

> Refactor State-Store-level Metrics
> --
>
> Key: KAFKA-8980
> URL: https://issues.apache.org/jira/browse/KAFKA-8980
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 2.5.0
>Reporter: Bruno Cadonna
>Assignee: Bruno Cadonna
>Priority: Major
> Fix For: 2.5.0
>
>
> Refactor state-store-level metrics as proposed in KIP-444.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (KAFKA-9011) Add KStream#flatTransform and KStream#flatTransformValues to Scala API

2019-11-12 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck reopened KAFKA-9011:


There's still something left to address on this PR: 
[https://github.com/apache/kafka/pull/7520#discussion_r345374820], so I'm 
reopening the ticket.

> Add KStream#flatTransform and KStream#flatTransformValues to Scala API
> --
>
> Key: KAFKA-9011
> URL: https://issues.apache.org/jira/browse/KAFKA-9011
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 2.3.0
>Reporter: Alex Kokachev
>Assignee: Alex Kokachev
>Priority: Major
>  Labels: scala, streams
> Fix For: 2.5.0
>
>
> Part of KIP-313: 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-313%3A+Add+KStream.flatTransform+and+KStream.flatTransformValues]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9072) Add Section to Streams Developer Guide for Topology Naming (KIP-307)

2019-11-13 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-9072.

Resolution: Fixed

> Add Section to Streams Developer Guide for Topology Naming (KIP-307)
> 
>
> Key: KAFKA-9072
> URL: https://issues.apache.org/jira/browse/KAFKA-9072
> Project: Kafka
>  Issue Type: Task
>  Components: streams
>Affects Versions: 2.4.0
>Reporter: Bill Bejeck
>Assignee: Bill Bejeck
>Priority: Major
>  Labels: docs
> Fix For: 2.5.0
>
>
> With KIP-307, users can name operators in a topology.  Naming is important as 
> it can help with pinning state store, changelog topic, and repartition topic 
> names keeping the topology robust in the face of adding/removing operators in 
> a Kafka Streams DSL. We should add a section to the developer guide to 
> explain why this is important.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9181) Flaky test kafka.api.SaslGssapiSslEndToEndAuthorizationTest.testNoConsumeWithoutDescribeAclViaSubscribe

2019-11-13 Thread Bill Bejeck (Jira)
Bill Bejeck created KAFKA-9181:
--

 Summary: Flaky test 
kafka.api.SaslGssapiSslEndToEndAuthorizationTest.testNoConsumeWithoutDescribeAclViaSubscribe
 Key: KAFKA-9181
 URL: https://issues.apache.org/jira/browse/KAFKA-9181
 Project: Kafka
  Issue Type: Test
  Components: core
Reporter: Bill Bejeck


Failed in 
[https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/26571/testReport/junit/kafka.api/SaslGssapiSslEndToEndAuthorizationTest/testNoConsumeWithoutDescribeAclViaSubscribe/]

 
{noformat}
Error Message
org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [topic2]

Stacktrace
org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [topic2]

Standard Output
Adding ACLs for resource `ResourcePattern(resourceType=CLUSTER, name=kafka-cluster, patternType=LITERAL)`: 
	(principal=User:kafka, host=*, operation=CLUSTER_ACTION, permissionType=ALLOW) 

Current ACLs for resource `Cluster:LITERAL:kafka-cluster`: 
User:kafka has Allow permission for operations: ClusterAction from 
hosts: * 

Adding ACLs for resource `ResourcePattern(resourceType=TOPIC, name=*, 
patternType=LITERAL)`: 
(principal=User:kafka, host=*, operation=READ, permissionType=ALLOW) 

Current ACLs for resource `Topic:LITERAL:*`: 
User:kafka has Allow permission for operations: Read from hosts: * 

Debug is  true storeKey true useTicketCache false useKeyTab true doNotPrompt 
false ticketCache is null isInitiator true KeyTab is 
/tmp/kafka6494439724844851846.tmp refreshKrb5Config is false principal is 
kafka/localh...@example.com tryFirstPass is false useFirstPass is false 
storePass is false clearPass is false
principal is kafka/localh...@example.com
Will use keytab
Commit Succeeded 

[2019-11-13 04:43:16,187] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-11-13 04:43:16,191] ERROR [ReplicaFetcher replicaId=2, leaderId=0, 
fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-11-13 04:43:16,384] ERROR [ReplicaFetcher replicaId=1, leaderId=2, 
fetcherId=0] Error for partition e2etopic-0 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-11-13 04:43:16,384] ERROR [ReplicaFetcher replicaId=0, leaderId=2, 
fetcherId=0] Error for partition e2etopic-0 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
Adding ACLs for resource `ResourcePattern(resourceType=TOPIC, name=e2etopic, 
patternType=LITERAL)`: 
(principal=User:client, host=*, operation=WRITE, permissionType=ALLOW)
(principal=User:client, host=*, operation=DESCRIBE, 
permissionType=ALLOW)
(principal=User:client, host=*, operation=CREATE, permissionType=ALLOW) 

Current ACLs for resource `Topic:LITERAL:e2etopic`: 
User:client has Allow permission for operations: Describe from hosts: *
User:client has Allow permission for operations: Write from hosts: *
User:client has Allow permission for operations: Create from hosts: * 

Adding ACLs for resource `ResourcePattern(resourceType=TOPIC, name=e2etopic, 
patternType=LITERAL)`: 
(principal=User:client, host=*, operation=READ, permissionType=ALLOW)
(principal=User:client, host=*, operation=DESCRIBE, 
permissionType=ALLOW) 

Adding ACLs for resource `ResourcePattern(resourceType=GROUP, name=group, 
patternType=LITERAL)`: 
(principal=User:client, host=*, operation=READ, permissionType=ALLOW) 

Current ACLs for resource `Topic:LITERAL:e2etopic`: 
User:client has Allow permission for operations: Read from hosts: *
User:client has Allow permission for operations: Describe from hosts: *
User:client has Allow permission for operations: Write from hosts: *
User:client has Allow permission for operations: Create from hosts: * 

Current ACLs for resource `Group:LITERAL:group`: 
User:client has Allow permission for operations: Read from hosts: * 

Debug is  true storeKey true useTicketCache false useKeyTab true doNotPrompt 
false ticketCache is null isInitiator true KeyTab is 
/tmp/kafka3083328529571706878.tmp refreshKrb5Config is false principal is 
cli...@example.com tryFirstPass is false useFirstPass is false storePass is 
false clearPass is false
principal is cli...@example.com
Will use keytab
Commit Succeeded

[jira] [Created] (KAFKA-9182) Flaky Test org.apache.kafka.streams.integration.KTableSourceTopicRestartIntegrationTest.shouldRestoreAndProgressWhenTopicWrittenToDuringRestorationWithEosEnabled

2019-11-13 Thread Bill Bejeck (Jira)
Bill Bejeck created KAFKA-9182:
--

 Summary: Flaky Test 
org.apache.kafka.streams.integration.KTableSourceTopicRestartIntegrationTest.shouldRestoreAndProgressWhenTopicWrittenToDuringRestorationWithEosEnabled
 Key: KAFKA-9182
 URL: https://issues.apache.org/jira/browse/KAFKA-9182
 Project: Kafka
  Issue Type: Test
  Components: streams
Reporter: Bill Bejeck


Failed in 
[https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/26571/testReport/junit/org.apache.kafka.streams.integration/KTableSourceTopicRestartIntegrationTest/shouldRestoreAndProgressWhenTopicWrittenToDuringRestorationWithEosEnabled/]

 
{noformat}
Error Message
java.lang.AssertionError: Condition not met within timeout 3. Table did not read all values

Stacktrace
java.lang.AssertionError: Condition not met within timeout 3. Table did not read all values
at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:24)
at org.apache.kafka.test.TestUtils.lambda$waitForCondition$4(TestUtils.java:369)
at org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:417)
at org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:385)
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:368)
at org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:356)
at org.apache.kafka.streams.integration.KTableSourceTopicRestartIntegrationTest.assertNumberValuesRead(KTableSourceTopicRestartIntegrationTest.java:187)
at org.apache.kafka.streams.integration.KTableSourceTopicRestartIntegrationTest.shouldRestoreAndProgressWhenTopicWrittenToDuringRestorationWithEosEnabled(KTableSourceTopicRestartIntegrationTest.java:141)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:365)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:330)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:78)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:328)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:65)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:292)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
at org.junit.rules.RunRules.evaluate(RunRules.java:20)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
at org.junit.runners.ParentRunner.run(ParentRunner.java:412)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
at org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
at org.gradle.internal.dispatch.ReflectionD

[jira] [Created] (KAFKA-9187) kafka.api.SaslGssapiSslEndToEndAuthorizationTest.testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl

2019-11-14 Thread Bill Bejeck (Jira)
Bill Bejeck created KAFKA-9187:
--

 Summary: 
kafka.api.SaslGssapiSslEndToEndAuthorizationTest.testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl
 Key: KAFKA-9187
 URL: https://issues.apache.org/jira/browse/KAFKA-9187
 Project: Kafka
  Issue Type: Test
  Components: core
Reporter: Bill Bejeck


Failed in [https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/26593/]

 
{noformat}
Error Message
org.scalatest.exceptions.TestFailedException: Consumed 0 records before timeout instead of the expected 1 records

Stacktrace
org.scalatest.exceptions.TestFailedException: Consumed 0 records before timeout instead of the expected 1 records
at org.scalatest.Assertions$class.newAssertionFailedException(Assertions.scala:530)
at org.scalatest.Assertions$.newAssertionFailedException(Assertions.scala:1389)
at org.scalatest.Assertions$class.fail(Assertions.scala:1091)
at org.scalatest.Assertions$.fail(Assertions.scala:1389)
at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:842)
at kafka.utils.TestUtils$.pollRecordsUntilTrue(TestUtils.scala:793)
at kafka.utils.TestUtils$.pollUntilAtLeastNumRecords(TestUtils.scala:1334)
at kafka.utils.TestUtils$.consumeRecords(TestUtils.scala:1343)
at kafka.api.EndToEndAuthorizationTest.consumeRecords(EndToEndAuthorizationTest.scala:530)
at kafka.api.EndToEndAuthorizationTest.testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl(EndToEndAuthorizationTest.scala:369)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:365)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:330)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:78)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:328)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:65)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:292)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:305)
at org.junit.runners.ParentRunner.run(ParentRunner.java:412)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
at org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at 

[jira] [Created] (KAFKA-9188) Flaky Test SslAdminClientIntegrationTest.testSynchronousAuthorizerAclUpdatesBlockRequestThreads

2019-11-14 Thread Bill Bejeck (Jira)
Bill Bejeck created KAFKA-9188:
--

 Summary: Flaky Test 
SslAdminClientIntegrationTest.testSynchronousAuthorizerAclUpdatesBlockRequestThreads
 Key: KAFKA-9188
 URL: https://issues.apache.org/jira/browse/KAFKA-9188
 Project: Kafka
  Issue Type: Test
  Components: core
Reporter: Bill Bejeck


Failed in 
[https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/9373/testReport/junit/kafka.api/SslAdminClientIntegrationTest/testSynchronousAuthorizerAclUpdatesBlockRequestThreads/]

 
{noformat}
Error Message
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Aborted due to timeout.

Stacktrace
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Aborted due to timeout.
at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
at kafka.api.SslAdminClientIntegrationTest.$anonfun$testSynchronousAuthorizerAclUpdatesBlockRequestThreads$1(SslAdminClientIntegrationTest.scala:201)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
at kafka.api.SslAdminClientIntegrationTest.testSynchronousAuthorizerAclUpdatesBlockRequestThreads(SslAdminClientIntegrationTest.scala:201)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:288)
at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:282)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.apache.kafka.common.errors.TimeoutException: Aborted due to timeout.

Standard Output
[2019-11-14 15:13:51,489] ERROR [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Error for partition mytopic1-1 at offset 0 (kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-11-14 15:13:51,490] ERROR [ReplicaFetcher replicaId=1, leaderId=0, 
fetcherId=0] Error for partition mytopic1-0 at offset 0 
(kafka.server.ReplicaFetcherThread:76)
org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
does not host this topic-partition.
[2019-11-14 15:14:04,686] ERROR [KafkaApi-2] Error when handling request: 
clientId=adminclient-644, correlationId=4, api=CREATE_ACLS, version=1, 
body={creations=[{resource_type=2,resource_name=foobar,resource_pattern_type=3,principal=User:ANONYMOUS,host=*,operation=3,permission_type=3},{resource_type=5,resource_name=transactional_id,resource_pattern_type=3,principal=User:ANONYMOUS,host=*,operation=4,permission_type=3}]}
 (kafka.server.KafkaApis:76)
org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
Request(processor=1, connectionId=127.0.0.1:41993-127.0.0.1:34770-0, 
session=Session(User:ANONYMOUS,/127.0.0.1), listenerName=ListenerName(SSL), 
securityProtocol=SSL, buffer=null) is not authorized.
[2019-11-14 15:14:04,689] ERROR [KafkaApi-2] Error when handling request: 
clientId=adminclient-644, correlationId=5, api=DELETE_ACLS, version=1, 
body={filters=[{resource_type=2,resource_name=foobar,resource_pattern_type_filter=3,principal=User:ANONYMOUS,host=*,operation=3,permission_type=3},{resource_type=5,resource_name=transactional_id,resource_pattern_type_filter=3,principal=User:ANONYMOUS,host=*,operation=4,permission_type=3}]}
 (kafka.server.KafkaApis:76)
org.apache.kafka.

[jira] [Resolved] (KAFKA-9086) Refactor Processor Node Streams Metrics

2019-11-19 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-9086.

Resolution: Fixed

> Refactor Processor Node Streams Metrics
> ---
>
> Key: KAFKA-9086
> URL: https://issues.apache.org/jira/browse/KAFKA-9086
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Bruno Cadonna
>Assignee: Bruno Cadonna
>Priority: Major
> Fix For: 2.5.0
>
>
> Refactor processor node metrics as described in KIP-444. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9273) Refactor AbstractJoinIntegrationTest and Sub-classes

2019-12-05 Thread Bill Bejeck (Jira)
Bill Bejeck created KAFKA-9273:
--

 Summary: Refactor AbstractJoinIntegrationTest and Sub-classes
 Key: KAFKA-9273
 URL: https://issues.apache.org/jira/browse/KAFKA-9273
 Project: Kafka
  Issue Type: Test
  Components: streams
Reporter: Bill Bejeck
Assignee: Bill Bejeck


The AbstractJoinIntegrationTest uses an embedded broker, but not all of its 
sub-classes require an embedded broker anymore.  Additionally, two remaining 
tests require an embedded broker but don't perform joins; they validate other 
conditions, so ideally those tests should move into a separate test class.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9294) Enhance DSL Naming Guide to Include All Naming Rules

2019-12-12 Thread Bill Bejeck (Jira)
Bill Bejeck created KAFKA-9294:
--

 Summary: Enhance DSL Naming Guide to Include All Naming Rules
 Key: KAFKA-9294
 URL: https://issues.apache.org/jira/browse/KAFKA-9294
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Bill Bejeck
Assignee: Bill Bejeck


We already have a naming guide in the docs, but we should expand it to cover 
how all components of the DSL get named.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9398) Kafka Streams main thread may not exit even after close timeout has passed

2020-01-10 Thread Bill Bejeck (Jira)
Bill Bejeck created KAFKA-9398:
--

 Summary: Kafka Streams main thread may not exit even after close 
timeout has passed
 Key: KAFKA-9398
 URL: https://issues.apache.org/jira/browse/KAFKA-9398
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Bill Bejeck
Assignee: Bill Bejeck
 Fix For: 2.5.0


Kafka Streams offers the {{KafkaStreams.close()}} method when shutting down a 
Kafka Streams application.  There are two overloads to this method, one that 
takes no parameters and another taking a {{Duration}} specifying how long the 
{{close()}} method should block waiting for streams shutdown operations to 
complete.  The no-arg version of {{close()}} sets the timeout to 
{{Long.MAX_VALUE}}.

 

The issue is that if a {{StreamThread}} is somehow hung, or one of the 
{{Consumer}} or {{Producer}} clients is in a hung state, the Kafka Streams 
application won't exit even after the specified timeout has expired.

For example, consider this scenario:
 # A sink topic gets deleted by accident
 # The {{Producer}} {{max.block.ms}} config is set to a high value

In this case the {{Producer}} will issue a {{WARN}} logging statement and will 
continue to make metadata requests looking for the expected topic, up until 
{{max.block.ms}} expires.  If this value is high enough, calling {{close()}} 
with a timeout won't help: when the timeout expires, the Kafka Streams 
application main thread still won't exit (see the sketch below).

To prevent this type of issue, we should call {{Thread.interrupt()}} on all 
{{StreamThread}} instances once the {{close()}} timeout has expired. 
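
For reference, a minimal sketch of the close-timeout behavior from the application side (the topology, topic names, and config values are invented for this example): {{close(Duration)}} returns {{false}} when shutdown does not complete in time, but on the affected versions the JVM can still be kept alive by a hung {{StreamThread}}.

{code:java}
import java.time.Duration;
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class CloseTimeoutExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "close-timeout-demo");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        // A large max.block.ms keeps a Producer stuck on a missing sink topic
        // retrying metadata requests far longer than any close() timeout.
        props.put(StreamsConfig.producerPrefix(ProducerConfig.MAX_BLOCK_MS_CONFIG), 600_000);

        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input-topic").to("sink-topic"); // sink topic may have been deleted

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();

        // close(Duration) blocks for at most the timeout and reports whether
        // shutdown completed; on the affected versions a false return can
        // still leave hung, non-daemon StreamThreads keeping the JVM alive.
        if (!streams.close(Duration.ofSeconds(30))) {
            System.err.println("Kafka Streams did not shut down within the timeout");
        }
    }
}
{code}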



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-10404) Flaky Test kafka.api.SaslSslConsumerTest.testCoordinatorFailover

2020-08-14 Thread Bill Bejeck (Jira)
Bill Bejeck created KAFKA-10404:
---

 Summary: Flaky Test 
kafka.api.SaslSslConsumerTest.testCoordinatorFailover
 Key: KAFKA-10404
 URL: https://issues.apache.org/jira/browse/KAFKA-10404
 Project: Kafka
  Issue Type: Test
  Components: core, unit tests
Reporter: Bill Bejeck


From build [https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/3829/]

 
{noformat}
kafka.api.SaslSslConsumerTest > testCoordinatorFailover FAILED
11:27:15 java.lang.AssertionError: expected: but was:
11:27:15 at org.junit.Assert.fail(Assert.java:89)
11:27:15 at org.junit.Assert.failNotEquals(Assert.java:835)
11:27:15 at org.junit.Assert.assertEquals(Assert.java:120)
11:27:15 at org.junit.Assert.assertEquals(Assert.java:146)
11:27:15 at kafka.api.AbstractConsumerTest.sendAndAwaitAsyncCommit(AbstractConsumerTest.scala:195)
11:27:15 at kafka.api.AbstractConsumerTest.ensureNoRebalance(AbstractConsumerTest.scala:302)
11:27:15 at kafka.api.BaseConsumerTest.testCoordinatorFailover(BaseConsumerTest.scala:76)
11:27:15 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
11:27:15 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
11:27:15 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
11:27:15 at java.lang.reflect.Method.invoke(Method.java:498)
11:27:15 at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
11:27:15 at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
11:27:15 at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
11:27:15 at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
11:27:15 at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
11:27:15 at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
11:27:15 at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
11:27:15 at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
11:27:15 at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
11:27:15 at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
11:27:15 at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
11:27:15 at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
11:27:15 at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
11:27:15 at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
11:27:15 at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
11:27:15 at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
11:27:15 at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
11:27:15 at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
11:27:15 at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
11:27:15 at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
11:27:15 at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
11:27:15 at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
11:27:15 at org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
11:27:15 at org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
11:27:15 at org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
11:27:15 at sun.reflect.GeneratedMethodAccessor26.invoke(Unknown Source)
11:27:15 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
11:27:15 at java.lang.reflect.Method.invoke(Method.java:498)
11:27:15 at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
11:27:15 at org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
11:27:15 at org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
11:27:15 at org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
11:27:15 at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
11:27:15 at org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:1

[jira] [Created] (KAFKA-10405) Flaky Test org.apache.kafka.streams.integration.PurgeRepartitionTopicIntegrationTest.shouldRestoreState

2020-08-14 Thread Bill Bejeck (Jira)
Bill Bejeck created KAFKA-10405:
---

 Summary: Flaky Test 
org.apache.kafka.streams.integration.PurgeRepartitionTopicIntegrationTest.shouldRestoreState
 Key: KAFKA-10405
 URL: https://issues.apache.org/jira/browse/KAFKA-10405
 Project: Kafka
  Issue Type: Test
  Components: streams
Reporter: Bill Bejeck


From build [https://builds.apache.org/job/kafka-pr-jdk14-scala2.13/1979/]

 
{noformat}
org.apache.kafka.streams.integration.PurgeRepartitionTopicIntegrationTest > 
shouldRestoreState FAILED
14:25:19 java.lang.AssertionError: Condition not met within timeout 6. 
Repartition topic 
restore-test-KSTREAM-AGGREGATE-STATE-STORE-02-repartition not purged 
data after 6 ms.
14:25:19 at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:26)
14:25:19 at 
org.apache.kafka.test.TestUtils.lambda$waitForCondition$6(TestUtils.java:401)
14:25:19 at 
org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:449)
14:25:19 at 
org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:417)
14:25:19 at 
org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:398)
14:25:19 at 
org.apache.kafka.test.TestUtils.waitForCondition(TestUtils.java:388)
14:25:19 at 
org.apache.kafka.streams.integration.PurgeRepartitionTopicIntegrationTest.shouldRestoreState(PurgeRepartitionTopicIntegrationTest.java:206){noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9273) Refactor AbstractJoinIntegrationTest and Sub-classes

2020-08-15 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-9273.

Resolution: Fixed

> Refactor AbstractJoinIntegrationTest and Sub-classes
> 
>
> Key: KAFKA-9273
> URL: https://issues.apache.org/jira/browse/KAFKA-9273
> Project: Kafka
>  Issue Type: Test
>  Components: streams
>Affects Versions: 2.7.0
>Reporter: Bill Bejeck
>Assignee: Albert Lowis
>Priority: Major
>  Labels: newbie
> Fix For: 2.7.0
>
>
> The AbstractJoinIntegrationTest uses an embedded broker, but not all of the 
> sub-classes require an embedded broker anymore.  Additionally, two of the 
> remaining tests require an embedded broker but don't perform joins; they 
> validate other conditions, so ideally those tests should move into a 
> separate test class.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8821) Avoid pattern subscription to allow for stricter ACL settings

2020-01-24 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8821.

Resolution: Fixed

Resolved via [https://github.com/apache/kafka/pull/7969]

> Avoid pattern subscription to allow for stricter ACL settings
> -
>
> Key: KAFKA-8821
> URL: https://issues.apache.org/jira/browse/KAFKA-8821
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Sophie Blee-Goldman
>Priority: Minor
> Fix For: 2.5.0
>
>
> To avoid triggering auto topic creation (if `auto.create.topic.enable=true` 
> on the brokers), Kafka Streams uses consumer pattern subscription. For this 
> case, the consumer requests all metadata from the brokers and does client 
> side filtering.
> However, if users want to set ACLs to restrict a Kafka Streams application, 
> this may result in broker-side ERROR logs stating that some metadata cannot 
> be provided. The only way to avoid those broker-side ERROR logs is to grant 
> the corresponding permissions.
> As of the 2.3 release it's possible to disable auto topic creation client 
> side (via https://issues.apache.org/jira/browse/KAFKA-7320). Kafka Streams 
> should use this new feature (note that broker version 0.11 is required) to 
> allow users to set strict ACLs without getting flooded with ERROR logs on 
> the broker.
> The proposal is that by default Kafka Streams disables auto topic creation 
> client side (optimistically) and uses regular subscription (not pattern 
> subscription). If an older broker is used, users need to explicitly enable 
> `allow.auto.create.topics` client side. If we detect this setting, we switch 
> back to pattern-based subscription.
> If users don't enable auto topic creation client side and run with an older 
> broker, we would just rethrow the exception to the user, adding some context 
> information on how to fix the issue.
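> For illustration, a minimal sketch of the client-side setting involved, shown 
> here on a plain consumer (topic and group names are made up; Kafka Streams 
> would apply this to its internal consumers):
> {code:java}
> import java.util.Collections;
> import java.util.Properties;
> import org.apache.kafka.clients.consumer.ConsumerConfig;
> import org.apache.kafka.clients.consumer.KafkaConsumer;
> import org.apache.kafka.common.serialization.StringDeserializer;
>
> public class NoAutoCreateConsumer {
>     public static void main(final String[] args) {
>         final Properties props = new Properties();
>         props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
>         props.put(ConsumerConfig.GROUP_ID_CONFIG, "strict-acl-app");
>         props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
>         props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
>         // Opt out of broker-side auto topic creation (broker 0.11+ required).
>         props.put(ConsumerConfig.ALLOW_AUTO_CREATE_TOPICS_CONFIG, false);
>
>         try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
>             // Plain collection subscription: metadata is requested only for
>             // this topic, so strict ACLs no longer flood the broker log.
>             consumer.subscribe(Collections.singletonList("input-topic"));
>         }
>     }
> }
> {code}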



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-7317) Use collections subscription for main consumer to reduce metadata

2020-01-24 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-7317.

Resolution: Fixed

> Use collections subscription for main consumer to reduce metadata
> -
>
> Key: KAFKA-7317
> URL: https://issues.apache.org/jira/browse/KAFKA-7317
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Sophie Blee-Goldman
>Priority: Major
> Fix For: 2.5.0
>
>
> In KAFKA-4633 we switched from "collection subscription" to "pattern 
> subscription" for `Consumer#subscribe()` to avoid triggering auto topic 
> creating on the broker. In KAFKA-5291, the metadata request was extended to 
> overwrite the broker config within the request itself. However, this feature 
> is only used in `KafkaAdminClient`. KAFKA-7320 adds this feature for the 
> consumer client, too.
> This ticket proposes to use the new feature within Kafka Streams to allow 
> the usage of collection-based subscription in the consumer and admin clients 
> to reduce the metadata response size, which can be very large for a large 
> number of partitions in the cluster.
> Note that Streams needs to be able to detect when it connects to older 
> brokers that do not support the new metadata request, and still use pattern 
> subscription in that case.
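> For context, the two subscription styles on the plain consumer API (topic 
> names are made up, and a configured consumer instance is assumed):
> {code:java}
> import java.util.Arrays;
> import java.util.regex.Pattern;
>
> // Assumed to exist:
> // KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
>
> // Pattern subscription (what Streams uses today): the consumer fetches
> // metadata for all topics and filters on the client.
> consumer.subscribe(Pattern.compile("input-.*"));
>
> // Collection subscription (proposed): metadata is requested only for the
> // named topics, keeping the response small.
> consumer.subscribe(Arrays.asList("input-a", "input-b"));
> {code}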



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-9152) Improve Sensor Retrieval

2020-01-24 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-9152.

Resolution: Fixed

> Improve Sensor Retrieval 
> -
>
> Key: KAFKA-9152
> URL: https://issues.apache.org/jira/browse/KAFKA-9152
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Bruno Cadonna
>Assignee: highluck
>Priority: Minor
>  Labels: newbie, tech-debt
> Fix For: 2.5.0
>
>
> This ticket shall improve two aspects of the retrieval of sensors:
> 1. Currently, when a sensor is retrieved with {{*Metrics.*Sensor()}} (e.g. 
> {{ThreadMetrics.createTaskSensor()}}) after it was created with the same 
> method {{*Metrics.*Sensor()}}, the sensor is added again to the corresponding 
> queue in {{*Sensors}} (e.g. {{threadLevelSensors}}) in 
> {{StreamsMetricsImpl}}. Those queues are used to remove the sensors when 
> {{removeAll*LevelSensors()}} is called. Having the same sensor multiple 
> times in those queues is not an issue from a correctness point of view; 
> however, storing each sensor only once would reduce the footprint.
> 2. When a sensor is retrieved, the current code attempts to create a new 
> sensor and to add the corresponding metrics to it again. This could be 
> avoided.
>  
> Both aspects could be improved by checking whether a sensor already exists by 
> calling {{getSensor()}} on the {{Metrics}} object and checking the return 
> value.
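> A minimal sketch of that check (the helper name is made up):
> {code:java}
> import org.apache.kafka.common.metrics.Metrics;
> import org.apache.kafka.common.metrics.Sensor;
>
> // Return the existing sensor if present; only on a miss create it, add its
> // metrics, and enqueue it for later removal.
> static Sensor getOrCreateSensor(final Metrics metrics, final String name) {
>     Sensor sensor = metrics.getSensor(name);
>     if (sensor == null) {
>         sensor = metrics.sensor(name);
>         // ... add metrics and register the sensor in the level queue here ...
>     }
>     return sensor;
> }
> {code}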



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9530) Flaky Test kafka.admin.DescribeConsumerGroupTest.testDescribeGroupWithShortInitializationTimeout

2020-02-09 Thread Bill Bejeck (Jira)
Bill Bejeck created KAFKA-9530:
--

 Summary: Flaky Test 
kafka.admin.DescribeConsumerGroupTest.testDescribeGroupWithShortInitializationTimeout
 Key: KAFKA-9530
 URL: https://issues.apache.org/jira/browse/KAFKA-9530
 Project: Kafka
  Issue Type: Test
  Components: core
Reporter: Bill Bejeck


[https://builds.apache.org/job/kafka-pr-jdk11-scala2.13/4570/testReport/junit/kafka.admin/DescribeConsumerGroupTest/testDescribeGroupWithShortInitializationTimeout/]

 
{noformat}
Error Message: java.lang.AssertionError: assertion failed
Stacktrace: java.lang.AssertionError: assertion failed
at scala.Predef$.assert(Predef.scala:267)
at 
kafka.admin.DescribeConsumerGroupTest.testDescribeGroupWithShortInitializationTimeout(DescribeConsumerGroupTest.scala:585)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.runTestClass(JUnitTestClassExecutor.java:110)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:58)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecutor.execute(JUnitTestClassExecutor.java:38)
at 
org.gradle.api.internal.tasks.testing.junit.AbstractJUnitTestClassProcessor.processTestClass(AbstractJUnitTestClassProcessor.java:62)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
at jdk.internal.reflect.GeneratedMethodAccessor28.invoke(Unknown Source)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:33)
at 
org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:94)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:118)
at jdk.internal.reflect.GeneratedMethodAccessor27.invoke(Unknown Source)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:36)
at 
org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.internal.remote.internal.hub.MessageHubBack

[jira] [Resolved] (KAFKA-9533) ValueTransform forwards `null` values

2020-02-19 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-9533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-9533.

Resolution: Fixed

> ValueTransform forwards `null` values
> -
>
> Key: KAFKA-9533
> URL: https://issues.apache.org/jira/browse/KAFKA-9533
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.4.0
>Reporter: Michael Viamari
>Assignee: Michael Viamari
>Priority: Minor
>
> According to the documentation for `KStream#transformValues`, nulls returned 
> from `ValueTransformer#transform` are not forwarded (see 
> [KStream#transformValues|https://kafka.apache.org/24/javadoc/org/apache/kafka/streams/kstream/KStream.html#transformValues-org.apache.kafka.streams.kstream.ValueTransformerSupplier-java.lang.String...-]).
> However, this does not appear to be the case: in 
> `KStreamTransformValuesProcessor#process` the result of the transform is 
> forwarded directly.
> {code:java}
>  @Override
>  public void process(final K key, final V value) {
>  context.forward(key, valueTransformer.transform(key, value));
>  }
> {code}
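> For illustration only, what the documented contract would look like in code 
> (a sketch, not the actual patch):
> {code:java}
>  @Override
>  public void process(final K key, final V value) {
>      final V1 newValue = valueTransformer.transform(key, value); // V1: output type
>      if (newValue != null) { // only forward non-null results, per the javadoc
>          context.forward(key, newValue);
>      }
>  }
> {code}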



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9798) org.apache.kafka.streams.integration.QueryableStateIntegrationTest.shouldAllowConcurrentAccesses

2020-04-01 Thread Bill Bejeck (Jira)
Bill Bejeck created KAFKA-9798:
--

 Summary: 
org.apache.kafka.streams.integration.QueryableStateIntegrationTest.shouldAllowConcurrentAccesses
 Key: KAFKA-9798
 URL: https://issues.apache.org/jira/browse/KAFKA-9798
 Project: Kafka
  Issue Type: Test
  Components: streams
Reporter: Bill Bejeck






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-9976) Aggregates should reuse repartition nodes

2020-05-09 Thread Bill Bejeck (Jira)
Bill Bejeck created KAFKA-9976:
--

 Summary: Aggregates should reuse repartition nodes
 Key: KAFKA-9976
 URL: https://issues.apache.org/jira/browse/KAFKA-9976
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: Bill Bejeck
Assignee: Bill Bejeck


The `GroupedStreamAggregateBuilder` will re-use the repartition node if the 
user provides a name via `Grouped`; otherwise it will create a new repartition 
node.  The fix in KAFKA-9298 results in reusing the repartition node for 
KStream objects performing multiple joins, so `KGroupedStream` should follow 
the same pattern and reuse the repartition node when it needs repartitioning 
and performs multiple aggregates.
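
For illustration, a minimal sketch of the shape in question (topic and 
repartition names are made up):

{code:java}
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KGroupedStream;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;

final StreamsBuilder builder = new StreamsBuilder();
final KStream<String, String> input = builder.stream("input-topic");

// The key-changing operation forces a repartition before any aggregation.
final KGroupedStream<String, String> grouped = input
    .selectKey((k, v) -> v)
    .groupByKey(Grouped.as("grouping"));

// Two aggregates from the same KGroupedStream: with the proposed change both
// should read from one shared repartition topic instead of creating two.
final KTable<String, Long> counts = grouped.count();
final KTable<String, String> concatenated = grouped.reduce((v1, v2) -> v1 + v2);
{code}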



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (KAFKA-10017) Flaky Test EosBetaUpgradeIntegrationTest.shouldUpgradeFromEosAlphaToEosBeta

2020-10-01 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-10017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck reopened KAFKA-10017:
-

> Flaky Test EosBetaUpgradeIntegrationTest.shouldUpgradeFromEosAlphaToEosBeta
> ---
>
> Key: KAFKA-10017
> URL: https://issues.apache.org/jira/browse/KAFKA-10017
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.6.0
>Reporter: Sophie Blee-Goldman
>Assignee: Sophie Blee-Goldman
>Priority: Blocker
>  Labels: flaky-test, unit-test
> Fix For: 2.6.0
>
>
> Creating a new ticket for this since the root cause is different than 
> https://issues.apache.org/jira/browse/KAFKA-9966
> With injectError = true:
> h3. Stacktrace
> java.lang.AssertionError: Did not receive all 20 records from topic 
> multiPartitionOutputTopic within 6 ms Expected: is a value equal to or 
> greater than <20> but: <15> was less than <20> at 
> org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20) at 
> org.apache.kafka.streams.integration.utils.IntegrationTestUtils.lambda$waitUntilMinKeyValueRecordsReceived$1(IntegrationTestUtils.java:563)
>  at 
> org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:429)
>  at 
> org.apache.kafka.test.TestUtils.retryOnExceptionWithTimeout(TestUtils.java:397)
>  at 
> org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(IntegrationTestUtils.java:559)
>  at 
> org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitUntilMinKeyValueRecordsReceived(IntegrationTestUtils.java:530)
>  at 
> org.apache.kafka.streams.integration.EosBetaUpgradeIntegrationTest.readResult(EosBetaUpgradeIntegrationTest.java:973)
>  at 
> org.apache.kafka.streams.integration.EosBetaUpgradeIntegrationTest.verifyCommitted(EosBetaUpgradeIntegrationTest.java:961)
>  at 
> org.apache.kafka.streams.integration.EosBetaUpgradeIntegrationTest.shouldUpgradeFromEosAlphaToEosBeta(EosBetaUpgradeIntegrationTest.java:427)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8269) Flaky Test TopicCommandWithAdminClientTest#testDescribeUnderMinIsrPartitionsMixed

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8269.

Resolution: Duplicate

> Flaky Test 
> TopicCommandWithAdminClientTest#testDescribeUnderMinIsrPartitionsMixed
> -
>
> Key: KAFKA-8269
> URL: https://issues.apache.org/jira/browse/KAFKA-8269
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3573/tests]
> {quote}java.lang.AssertionError
> at org.junit.Assert.fail(Assert.java:87)
> at org.junit.Assert.assertTrue(Assert.java:42)
> at org.junit.Assert.assertTrue(Assert.java:53)
> at 
> kafka.admin.TopicCommandWithAdminClientTest.testDescribeUnderMinIsrPartitionsMixed(TopicCommandWithAdminClientTest.scala:659){quote}
> It's a long LOG. This might be interesting:
> {quote}[2019-04-20 21:30:37,936] ERROR [ReplicaFetcher replicaId=4, 
> leaderId=5, fetcherId=0] Error for partition 
> testCreateWithReplicaAssignment-0cpsXnG35w-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-20 21:30:48,600] WARN Unable to read additional data from client 
> sessionid 0x10510a59d3c0004, likely client has closed socket 
> (org.apache.zookeeper.server.NIOServerCnxn:376)
> [2019-04-20 21:30:48,908] WARN Unable to read additional data from client 
> sessionid 0x10510a59d3c0003, likely client has closed socket 
> (org.apache.zookeeper.server.NIOServerCnxn:376)
> [2019-04-20 21:30:48,919] ERROR [RequestSendThread controllerId=0] Controller 
> 0 fails to send a request to broker localhost:43520 (id: 5 rack: rack3) 
> (kafka.controller.RequestSendThread:76)
> java.lang.InterruptedException
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1326)
> at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
> at kafka.utils.ShutdownableThread.pause(ShutdownableThread.scala:75)
> at 
> kafka.controller.RequestSendThread.backoff$1(ControllerChannelManager.scala:224)
> at 
> kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:252)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:89)
> [2019-04-20 21:30:48,920] ERROR [RequestSendThread controllerId=0] Controller 
> 0 fails to send a request to broker localhost:33570 (id: 4 rack: rack3) 
> (kafka.controller.RequestSendThread:76)
> java.lang.InterruptedException
> at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1326)
> at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
> at kafka.utils.ShutdownableThread.pause(ShutdownableThread.scala:75)
> at 
> kafka.controller.RequestSendThread.backoff$1(ControllerChannelManager.scala:224)
> at 
> kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:252)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:89)
> [2019-04-20 21:31:28,942] ERROR [ReplicaFetcher replicaId=3, leaderId=1, 
> fetcherId=0] Error for partition under-min-isr-topic-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-04-20 21:31:28,973] ERROR [ReplicaFetcher replicaId=0, leaderId=1, 
> fetcherId=0] Error for partition under-min-isr-topic-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76){quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8141) Flaky Test FetchRequestDownConversionConfigTest#testV1FetchWithDownConversionDisabled

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8141.

Resolution: Fixed

> Flaky Test 
> FetchRequestDownConversionConfigTest#testV1FetchWithDownConversionDisabled
> -
>
> Key: KAFKA-8141
> URL: https://issues.apache.org/jira/browse/KAFKA-8141
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
> Fix For: 2.7.0, 2.6.1
>
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/80/testReport/junit/kafka.server/FetchRequestDownConversionConfigTest/testV1FetchWithDownConversionDisabled/]
> {quote}java.lang.AssertionError: Partition [__consumer_offsets,0] metadata 
> not propagated after 15000 ms at 
> kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.utils.TestUtils$.createOffsetsTopic(TestUtils.scala:375) at 
> kafka.api.IntegrationTestHarness.doSetup(IntegrationTestHarness.scala:95) at 
> kafka.api.IntegrationTestHarness.setUp(IntegrationTestHarness.scala:73){quote}
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8138) Flaky Test PlaintextConsumerTest#testFetchRecordLargerThanFetchMaxBytes

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8138.

Resolution: Fixed

> Flaky Test PlaintextConsumerTest#testFetchRecordLargerThanFetchMaxBytes
> ---
>
> Key: KAFKA-8138
> URL: https://issues.apache.org/jira/browse/KAFKA-8138
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/80/testReport/junit/kafka.api/PlaintextConsumerTest/testFetchRecordLargerThanFetchMaxBytes/]
> {quote}java.lang.AssertionError: Partition [topic,0] metadata not propagated 
> after 15000 ms at kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.integration.KafkaServerTestHarness.createTopic(KafkaServerTestHarness.scala:125)
>  at kafka.api.BaseConsumerTest.setUp(BaseConsumerTest.scala:69){quote}
> STDOUT (truncated)
> {quote}[2019-03-20 16:10:19,759] ERROR [ReplicaFetcher replicaId=2, 
> leaderId=0, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:10:19,760] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:10:19,963] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:10:19,964] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Error for partition 
> topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-20 16:10:19,975] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.{quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8075) Flaky Test GroupAuthorizerIntegrationTest#testTransactionalProducerTopicAuthorizationExceptionInCommit

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8075.

Resolution: Fixed

> Flaky Test 
> GroupAuthorizerIntegrationTest#testTransactionalProducerTopicAuthorizationExceptionInCommit
> --
>
> Key: KAFKA-8075
> URL: https://issues.apache.org/jira/browse/KAFKA-8075
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/56/testReport/junit/kafka.api/GroupAuthorizerIntegrationTest/testTransactionalProducerTopicAuthorizationExceptionInCommit/]
> {quote}org.apache.kafka.common.errors.TimeoutException: Timeout expired while 
> initializing transactional state in 3000ms.{quote}
> STDOUT
> {quote}[2019-03-08 01:48:45,226] ERROR [Consumer clientId=consumer-99, 
> groupId=my-group] Offset commit failed on partition topic-0 at offset 5: Not 
> authorized to access topics: [Topic authorization failed.] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:812) 
> [2019-03-08 01:48:45,227] ERROR [Consumer clientId=consumer-99, 
> groupId=my-group] Not authorized to commit to topics [topic] 
> (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:850) 
> [2019-03-08 01:48:57,870] ERROR [KafkaApi-0] Error when handling request: 
> clientId=0, correlationId=0, api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=43610,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:43610-127.0.0.1:44870-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-08 01:49:14,858] ERROR [KafkaApi-0] 
> Error when handling request: clientId=0, correlationId=0, 
> api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=44107,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:44107-127.0.0.1:38156-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-08 01:49:21,984] ERROR [KafkaApi-0] 
> Error when handling request: clientId=0, correlationId=0, 
> api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=39025,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:39025-127.0.0.1:41474-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. [2019-03-08 01:49:39,438] ERROR [KafkaApi-0] 
> Error when handling request: clientId=0, correlationId=0, 
> api=UPDATE_METADATA, 
> body=\{controller_id=0,controller_epoch=1,broker_epoch=25,topic_states=[],live_brokers=[{id=0,end_points=[{port=44798,host=localhost,listener_name=PLAINTEXT,security_protocol_type=0}],rack=null}]}
>  (kafka.server.KafkaApis:76) 
> org.apache.kafka.common.errors.ClusterAuthorizationException: Request 
> Request(processor=0, connectionId=127.0.0.1:44798-127.0.0.1:58496-0, 
> session=Session(Group:testGroup,/127.0.0.1), 
> listenerName=ListenerName(PLAINTEXT), securityProtocol=PLAINTEXT, 
> buffer=null) is not authorized. Error: Consumer group 'my-group' does not 
> exist. [2019-03-08 01:49:55,502] WARN Ignoring unexpected runtime exception 
> (org.apache.zookeeper.server.NIOServerCnxnFactory:236) 
> java.nio.channels.CancelledKeyException at 
> sun.nio.ch.SelectionKeyImpl.ensureValid(SelectionKeyImpl.java:73) at 
> sun.nio.ch.SelectionKeyImpl.readyOps(SelectionKeyImpl.java:87) at 
> org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:205)
>  at java.lang.Thread.run(Thread.java:748) [2019-03-08 01:50:02,720] WARN 
> Unable to read additional data from client sessionid 0x1007131d81c0001, 
> likely client has closed socket 
> (org.apache.zookeeper.server.NIOServerCnxn:376) [2019-03-08 01:50:03,855] 
> ERROR [KafkaApi-0] Error when handling request: clientId=

[jira] [Resolved] (KAFKA-8087) Flaky Test PlaintextConsumerTest#testConsumingWithNullGroupId

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8087.

Resolution: Fixed

> Flaky Test PlaintextConsumerTest#testConsumingWithNullGroupId
> -
>
> Key: KAFKA-8087
> URL: https://issues.apache.org/jira/browse/KAFKA-8087
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/62/testReport/junit/kafka.api/PlaintextConsumerTest/testConsumingWithNullGroupId/]
> {quote}java.lang.AssertionError: Partition [topic,0] metadata not propagated 
> after 15000 ms at kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.integration.KafkaServerTestHarness.createTopic(KafkaServerTestHarness.scala:125)
>  at kafka.api.BaseConsumerTest.setUp(BaseConsumerTest.scala:69){quote}
> STDOUT
> {quote}[2019-03-09 08:39:02,022] ERROR [ReplicaFetcher replicaId=1, 
> leaderId=2, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,022] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,202] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,204] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Error for partition 
> topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,236] ERROR 
> [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition 
> topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,236] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
> topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,511] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topicWithNewMessageFormat-1 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:02,512] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
> topicWithNewMessageFormat-1 at offset 0 
> (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:06,568] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition 
> topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:09,582] ERROR 
> [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Error for partition 
> __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76) 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition. [2019-03-09 08:39:09,787] ERROR 
> [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition 
> topic-0 at offset 0

[jira] [Resolved] (KAFKA-8077) Flaky Test AdminClientIntegrationTest#testConsumeAfterDeleteRecords

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8077.

Resolution: Fixed

> Flaky Test AdminClientIntegrationTest#testConsumeAfterDeleteRecords
> ---
>
> Key: KAFKA-8077
> URL: https://issues.apache.org/jira/browse/KAFKA-8077
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.0.1
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.0-jdk8/detail/kafka-2.0-jdk8/237/tests]
> {quote}java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:94)
> at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:64)
> at 
> org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:29)
> at 
> kafka.api.AdminClientIntegrationTest$$anonfun$sendRecords$1.apply(AdminClientIntegrationTest.scala:994)
> at 
> kafka.api.AdminClientIntegrationTest$$anonfun$sendRecords$1.apply(AdminClientIntegrationTest.scala:994)
> at scala.collection.Iterator$class.foreach(Iterator.scala:891)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at 
> kafka.api.AdminClientIntegrationTest.sendRecords(AdminClientIntegrationTest.scala:994)
> at 
> kafka.api.AdminClientIntegrationTest.testConsumeAfterDeleteRecords(AdminClientIntegrationTest.scala:909)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.kafka.common.errors.UnknownTopicOrPartitionException: 
> This server does not host this topic-partition.{quote}
> STDERR
> {quote}Exception in thread "Thread-1638" 
> org.apache.kafka.common.errors.InterruptException: 
> java.lang.InterruptedException
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.maybeThrowInterruptException(ConsumerNetworkClient.java:504)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:287)
> at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:242)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1247)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1187)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1115)
> at 
> kafka.api.AdminClientIntegrationTest$$anon$1.run(AdminClientIntegrationTest.scala:1132)
> Caused by: java.lang.InterruptedException
> ... 7 more{quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8113) Flaky Test ListOffsetsRequestTest#testResponseIncludesLeaderEpoch

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8113?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8113.

Resolution: Fixed

> Flaky Test ListOffsetsRequestTest#testResponseIncludesLeaderEpoch
> -
>
> Key: KAFKA-8113
> URL: https://issues.apache.org/jira/browse/KAFKA-8113
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3468/tests]
> {quote}java.lang.AssertionError
> at org.junit.Assert.fail(Assert.java:87)
> at org.junit.Assert.assertTrue(Assert.java:42)
> at org.junit.Assert.assertTrue(Assert.java:53)
> at 
> kafka.server.ListOffsetsRequestTest.fetchOffsetAndEpoch$1(ListOffsetsRequestTest.scala:136)
> at 
> kafka.server.ListOffsetsRequestTest.testResponseIncludesLeaderEpoch(ListOffsetsRequestTest.scala:151){quote}
> STDOUT
> {quote}[2019-03-15 17:16:13,029] ERROR [ReplicaFetcher replicaId=2, 
> leaderId=1, fetcherId=0] Error for partition topic-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-15 17:16:13,231] ERROR [KafkaApi-0] Error while responding to offset 
> request (kafka.server.KafkaApis:76)
> org.apache.kafka.common.errors.ReplicaNotAvailableException: Partition 
> topic-0 is not available{quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8079) Flaky Test EpochDrivenReplicationProtocolAcceptanceTest#shouldSurviveFastLeaderChange

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8079.

Resolution: Fixed

> Flaky Test 
> EpochDrivenReplicationProtocolAcceptanceTest#shouldSurviveFastLeaderChange
> -
>
> Key: KAFKA-8079
> URL: https://issues.apache.org/jira/browse/KAFKA-8079
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://builds.apache.org/blue/organizations/jenkins/kafka-trunk-jdk8/detail/kafka-trunk-jdk8/3445/tests]
> {quote}java.lang.AssertionError
> at org.junit.Assert.fail(Assert.java:87)
> at org.junit.Assert.assertTrue(Assert.java:42)
> at org.junit.Assert.assertTrue(Assert.java:53)
> at 
> kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest.$anonfun$shouldSurviveFastLeaderChange$2(EpochDrivenReplicationProtocolAcceptanceTest.scala:294)
> at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:158)
> at 
> kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest.shouldSurviveFastLeaderChange(EpochDrivenReplicationProtocolAcceptanceTest.scala:273){quote}
> STDOUT
> {quote}[2019-03-08 01:16:02,452] ERROR [ReplicaFetcher replicaId=101, 
> leaderId=100, fetcherId=0] Error for partition topic1-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-08 01:16:23,677] ERROR [ReplicaFetcher replicaId=101, leaderId=100, 
> fetcherId=0] Error for partition topic1-0 at offset 0 
> (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server 
> does not host this topic-partition.
> [2019-03-08 01:16:35,779] ERROR [Controller id=100] Error completing 
> preferred replica leader election for partition topic1-0 
> (kafka.controller.KafkaController:76)
> kafka.common.StateChangeFailedException: Failed to elect leader for partition 
> topic1-0 under strategy PreferredReplicaPartitionLeaderElectionStrategy
> at 
> kafka.controller.PartitionStateMachine.$anonfun$doElectLeaderForPartitions$9(PartitionStateMachine.scala:390)
> at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
> at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
> at 
> kafka.controller.PartitionStateMachine.doElectLeaderForPartitions(PartitionStateMachine.scala:388)
> at 
> kafka.controller.PartitionStateMachine.electLeaderForPartitions(PartitionStateMachine.scala:315)
> at 
> kafka.controller.PartitionStateMachine.doHandleStateChanges(PartitionStateMachine.scala:225)
> at 
> kafka.controller.PartitionStateMachine.handleStateChanges(PartitionStateMachine.scala:141)
> at 
> kafka.controller.KafkaController.kafka$controller$KafkaController$$onPreferredReplicaElection(KafkaController.scala:649)
> at 
> kafka.controller.KafkaController.$anonfun$checkAndTriggerAutoLeaderRebalance$6(KafkaController.scala:1008)
> at scala.collection.immutable.Map$Map1.foreach(Map.scala:128)
> at 
> kafka.controller.KafkaController.kafka$controller$KafkaController$$checkAndTriggerAutoLeaderRebalance(KafkaController.scala:989)
> at 
> kafka.controller.KafkaController$AutoPreferredReplicaLeaderElection$.process(KafkaController.scala:1020)
> at 
> kafka.controller.ControllerEventManager$ControllerEventThread.$anonfun$doWork$1(ControllerEventManager.scala:95)
> at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
> at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:31)
> at 
> kafka.controller.ControllerEventManager$ControllerEventThread.doWork(ControllerEventManager.scala:95)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:89)
> Dumping /tmp/kafka-2158669830092629415/topic1-0/.log
> Starting offset: 0
> baseOffset: 0 lastOffset: 0 count: 1 baseSequence: -1 lastSequence: -1 
> producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: 
> false isControl: false position: 0 CreateTime: 1552007783877 size: 141 magic: 
> 2 compresscodec: SNAPPY crc: 2264724941 isvalid: true
> baseOffset: 1 lastOffset: 1 count: 1 baseSequence: -1 lastSequence: -1 
> producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: 
> false isControl: false position: 141 CreateTime: 1552007784731 size: 141 
> magic: 2 compresscodec: SNAPPY crc: 14988968 isvalid: true
> baseOffset: 2 lastOffset: 2 count: 1 baseSequence: -1 lastSequence: -1 
> producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: 
> false isControl: false position: 282 CreateTime: 1552007784734 si

[jira] [Resolved] (KAFKA-7988) Flaky Test DynamicBrokerReconfigurationTest#testThreadPoolResize

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-7988.

Resolution: Fixed

> Flaky Test DynamicBrokerReconfigurationTest#testThreadPoolResize
> 
>
> Key: KAFKA-7988
> URL: https://issues.apache.org/jira/browse/KAFKA-7988
> Project: Kafka
>  Issue Type: Bug
>  Components: core, unit tests
>Affects Versions: 2.2.0, 2.3.0
>Reporter: Matthias J. Sax
>Assignee: Rajini Sivaram
>Priority: Critical
>  Labels: flaky-test
>
> To get stable nightly builds for `2.2` release, I create tickets for all 
> observed test failures.
> [https://builds.apache.org/blue/organizations/jenkins/kafka-2.2-jdk8/detail/kafka-2.2-jdk8/30/]
> {quote}kafka.server.DynamicBrokerReconfigurationTest > testThreadPoolResize 
> FAILED java.lang.AssertionError: Invalid threads: expected 6, got 5: 
> List(ReplicaFetcherThread-0-0, ReplicaFetcherThread-0-1, 
> ReplicaFetcherThread-0-0, ReplicaFetcherThread-0-2, ReplicaFetcherThread-0-1) 
> at org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.assertTrue(Assert.java:41) at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyThreads(DynamicBrokerReconfigurationTest.scala:1260)
>  at 
> kafka.server.DynamicBrokerReconfigurationTest.maybeVerifyThreadPoolSize$1(DynamicBrokerReconfigurationTest.scala:531)
>  at 
> kafka.server.DynamicBrokerReconfigurationTest.resizeThreadPool$1(DynamicBrokerReconfigurationTest.scala:550)
>  at 
> kafka.server.DynamicBrokerReconfigurationTest.reducePoolSize$1(DynamicBrokerReconfigurationTest.scala:536)
>  at 
> kafka.server.DynamicBrokerReconfigurationTest.$anonfun$testThreadPoolResize$3(DynamicBrokerReconfigurationTest.scala:559)
>  at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:158) at 
> kafka.server.DynamicBrokerReconfigurationTest.verifyThreadPoolResize$1(DynamicBrokerReconfigurationTest.scala:558)
>  at 
> kafka.server.DynamicBrokerReconfigurationTest.testThreadPoolResize(DynamicBrokerReconfigurationTest.scala:572){quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8303) Flaky Test SaslSslAdminClientIntegrationTest#testLogStartOffsetCheckpoint

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8303.

Resolution: Fixed

> Flaky Test SaslSslAdminClientIntegrationTest#testLogStartOffsetCheckpoint
> -
>
> Key: KAFKA-8303
> URL: https://issues.apache.org/jira/browse/KAFKA-8303
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, security, unit tests
>Affects Versions: 2.3.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/21274/testReport/junit/kafka.api/SaslSslAdminClientIntegrationTest/testLogStartOffsetCheckpoint/]
> {quote}java.util.concurrent.ExecutionException: 
> org.apache.kafka.common.errors.TimeoutException: Aborted due to timeout. at 
> org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
>  at 
> kafka.api.AdminClientIntegrationTest$$anonfun$testLogStartOffsetCheckpoint$2.apply$mcZ$sp(AdminClientIntegrationTest.scala:820)
>  at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:789) at 
> kafka.api.AdminClientIntegrationTest.testLogStartOffsetCheckpoint(AdminClientIntegrationTest.scala:813){quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8108) Flaky Test kafka.api.ClientIdQuotaTest.testThrottledProducerConsumer

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8108.

Resolution: Fixed

> Flaky Test kafka.api.ClientIdQuotaTest.testThrottledProducerConsumer
> 
>
> Key: KAFKA-8108
> URL: https://issues.apache.org/jira/browse/KAFKA-8108
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 2.3.0
>Reporter: Guozhang Wang
>Priority: Critical
>  Labels: flaky-test
>
> {code}
> java.lang.AssertionError: Client with id=QuotasTestProducer-!@#$%^&*() should 
> have been throttled
>   at org.junit.Assert.fail(Assert.java:89)
>   at org.junit.Assert.assertTrue(Assert.java:42)
>   at 
> kafka.api.QuotaTestClients.verifyThrottleTimeMetric(BaseQuotaTest.scala:229)
>   at 
> kafka.api.QuotaTestClients.verifyProduceThrottle(BaseQuotaTest.scala:215)
>   at 
> kafka.api.BaseQuotaTest.testThrottledProducerConsumer(BaseQuotaTest.scala:82)
> {code}
> https://builds.apache.org/job/kafka-pr-jdk11-scala2.12/3230/testReport/junit/kafka.api/ClientIdQuotaTest/testThrottledProducerConsumer/



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-8137) Flaky Test LegacyAdminClientTest#testOffsetsForTimesWhenOffsetNotFound

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8137.

Resolution: Fixed

> Flaky Test LegacyAdminClientTest#testOffsetsForTimesWhenOffsetNotFound
> --
>
> Key: KAFKA-8137
> URL: https://issues.apache.org/jira/browse/KAFKA-8137
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/80/testReport/junit/kafka.api/LegacyAdminClientTest/testOffsetsForTimesWhenOffsetNotFound/]
> {quote}java.lang.AssertionError: Partition [topic,0] metadata not propagated 
> after 15000 ms at kafka.utils.TestUtils$.fail(TestUtils.scala:381) at 
> kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791) at 
> kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318) at 
> kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317) at 
> scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237) at 
> scala.collection.immutable.Range.foreach(Range.scala:158) at 
> scala.collection.TraversableLike.map(TraversableLike.scala:237) at 
> scala.collection.TraversableLike.map$(TraversableLike.scala:230) at 
> scala.collection.AbstractTraversable.map(Traversable.scala:108) at 
> kafka.utils.TestUtils$.createTopic(TestUtils.scala:317) at 
> kafka.integration.KafkaServerTestHarness.createTopic(KafkaServerTestHarness.scala:125)
>  at 
> kafka.api.LegacyAdminClientTest.setUp(LegacyAdminClientTest.scala:73){quote}
> STDOUT
> {quote}[2019-03-20 16:28:10,089] ERROR [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
> [2019-03-20 16:28:10,093] ERROR [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
> [2019-03-20 16:28:10,303] ERROR [ReplicaFetcher replicaId=1, leaderId=0, fetcherId=0] Error for partition topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
> [2019-03-20 16:28:10,303] ERROR [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Error for partition topic-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
> [2019-03-20 16:28:14,493] ERROR [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
> [2019-03-20 16:28:14,724] ERROR [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition topic-1 at offset 0 (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
> [2019-03-20 16:28:21,388] ERROR [ReplicaFetcher replicaId=0, leaderId=2, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
> [2019-03-20 16:28:21,394] ERROR [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
> [2019-03-20 16:29:48,224] ERROR [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
> [2019-03-20 16:29:48,249] ERROR [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error for partition __consumer_offsets-0 at offset 0 (kafka.server.ReplicaFetcherThread:76)
> org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition.
> [2019-03-20 16:29:49,255] ERROR [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0
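> 
> Both the assertion failure and the UnknownTopicOrPartitionException noise in STDOUT trace back to metadata propagation: after a topic is created, each broker learns about it asynchronously, and replica fetchers that ask before the update arrives log these transient errors. The Scala sketch below is a rough, self-contained illustration of the wait loop behind TestUtils.waitUntilMetadataIsPropagated; {{brokerReportsLeader}} is a hypothetical stand-in for querying a broker's metadata cache, not Kafka's actual API.
> {code}
> // Sketch of the "wait until metadata is propagated" pattern: poll every
> // broker until all of them report a leader for the partition, failing
> // after a deadline. brokerReportsLeader is a hypothetical stand-in.
> object MetadataPropagationSketch {
>   final case class Broker(id: Int)
> 
>   // Hypothetical: the real harness would consult the broker's MetadataCache.
>   def brokerReportsLeader(b: Broker, topic: String, partition: Int): Boolean =
>     true  // stand-in so the sketch runs
> 
>   def waitUntilMetadataIsPropagated(brokers: Seq[Broker], topic: String,
>                                     partition: Int, timeoutMs: Long = 15000L): Unit = {
>     val deadline = System.currentTimeMillis() + timeoutMs
>     while (!brokers.forall(brokerReportsLeader(_, topic, partition))) {
>       if (System.currentTimeMillis() > deadline)
>         throw new AssertionError(
>           s"Partition [$topic,$partition] metadata not propagated after $timeoutMs ms")
>       Thread.sleep(100L)  // propagation is asynchronous; re-check shortly
>     }
>   }
> 
>   def main(args: Array[String]): Unit =
>     waitUntilMetadataIsPropagated(Seq(Broker(0), Broker(1), Broker(2)), "topic", 0)
> }
> {code}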

[jira] [Resolved] (KAFKA-8084) Flaky Test DescribeConsumerGroupTest#testDescribeMembersOfExistingGroupWithNoMembers

2020-10-09 Thread Bill Bejeck (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Bejeck resolved KAFKA-8084.

Resolution: Fixed

> Flaky Test 
> DescribeConsumerGroupTest#testDescribeMembersOfExistingGroupWithNoMembers
> 
>
> Key: KAFKA-8084
> URL: https://issues.apache.org/jira/browse/KAFKA-8084
> Project: Kafka
>  Issue Type: Bug
>  Components: admin, unit tests
>Affects Versions: 2.2.0
>Reporter: Matthias J. Sax
>Priority: Critical
>  Labels: flaky-test
>
> [https://jenkins.confluent.io/job/apache-kafka-test/job/2.2/62/testReport/junit/kafka.admin/DescribeConsumerGroupTest/testDescribeMembersOfExistingGroupWithNoMembers/]
> {quote}java.lang.AssertionError: Partition [__consumer_offsets,0] metadata not propagated after 15000 ms
>   at kafka.utils.TestUtils$.fail(TestUtils.scala:381)
>   at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:791)
>   at kafka.utils.TestUtils$.waitUntilMetadataIsPropagated(TestUtils.scala:880)
>   at kafka.utils.TestUtils$.$anonfun$createTopic$3(TestUtils.scala:318)
>   at kafka.utils.TestUtils$.$anonfun$createTopic$3$adapted(TestUtils.scala:317)
>   at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:237)
>   at scala.collection.immutable.Range.foreach(Range.scala:158)
>   at scala.collection.TraversableLike.map(TraversableLike.scala:237)
>   at scala.collection.TraversableLike.map$(TraversableLike.scala:230)
>   at scala.collection.AbstractTraversable.map(Traversable.scala:108)
>   at kafka.utils.TestUtils$.createTopic(TestUtils.scala:317)
>   at kafka.utils.TestUtils$.createOffsetsTopic(TestUtils.scala:375)
>   at kafka.admin.DescribeConsumerGroupTest.testDescribeMembersOfExistingGroupWithNoMembers(DescribeConsumerGroupTest.scala:283){quote}
> STDOUT
> {quote}
> TOPIC  PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  CONSUMER-ID  HOST  CLIENT-ID
> foo    0          0               0               0    -            -     -
>
> TOPIC  PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  CONSUMER-ID  HOST  CLIENT-ID
> foo    0          0               0               0    -            -     -
>
> COORDINATOR (ID)     ASSIGNMENT-STRATEGY  STATE  #MEMBERS
> localhost:45812 (0)                       Empty  0
> {quote}
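> 
> The describe output above shows the expected end state (an Empty group with 0 members); the failure happened earlier, while creating __consumer_offsets in setup. When raising the propagation timeout is not enough, a test can instead poll the externally visible result until it matches. A hedged Scala sketch of that approach, assuming a hypothetical {{fetchDescription}} helper rather than the real kafka.admin command:
> {code}
> // Poll the describe result until the group reaches the expected state.
> // GroupDescription and fetchDescription are hypothetical stand-ins, not
> // part of the kafka.admin API.
> object DescribeGroupPollSketch {
>   final case class GroupDescription(state: String, numMembers: Int)
> 
>   // Hypothetical stand-in for invoking the describe-groups command.
>   def fetchDescription(group: String): GroupDescription =
>     GroupDescription("Empty", 0)
> 
>   def main(args: Array[String]): Unit = {
>     val deadline = System.currentTimeMillis() + 15000L
>     var desc = fetchDescription("test.group")
>     while (!(desc.state == "Empty" && desc.numMembers == 0)) {
>       if (System.currentTimeMillis() > deadline)
>         throw new AssertionError(s"Group never became Empty; last seen: $desc")
>       Thread.sleep(100L)  // re-run describe until the state settles
>       desc = fetchDescription("test.group")
>     }
>   }
> }
> {code}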



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

