[ https://issues.apache.org/jira/browse/KAFKA-6400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16390270#comment-16390270 ]
Khaireddine Rezgui edited comment on KAFKA-6400 at 3/7/18 10:06 PM:
--------------------------------------------------------------------

I sometimes had the same experience when first using Kafka Streams; I understand the issue now. Does CACHE_MAX_BYTES_BUFFERING_CONFIG refer to the config mentioned in the description?

was (Author: khairy): I got sometimes the same experience in my first using of kafka stream, i understand now the issue. Does CACHE_MAX_BYTES_BUFFERING_CONFIG is the config mentioned in the description ?

> Consider setting default cache size to zero in Kafka Streams
> ------------------------------------------------------------
>
> Key: KAFKA-6400
> URL: https://issues.apache.org/jira/browse/KAFKA-6400
> Project: Kafka
> Issue Type: Improvement
> Components: streams
> Affects Versions: 1.0.0
> Reporter: Matthias J. Sax
> Priority: Minor
>
> Since the introduction of record caching in the Kafka Streams DSL, we regularly
> see reports/questions from first-time users that "Kafka Streams does not emit
> anything" or "Kafka Streams loses messages". Those reports are caused by
> record caching, not by bugs, and they indicate a bad user experience.
> We might consider setting the default cache size to zero to avoid those
> issues and improve the experience for first-time users. This holds especially
> for simple word-count demos. (Note: many people don't copy the example
> word-count but build their own first demo app.)
> Remark: before we had caching, many users were confused by our update
> semantics, i.e., that we emit an output record for each input record for a
> windowed aggregation ("please give me the 'final' result"). Thus, we need
> to consider this and judge with care so we don't go back and forth with the
> default user experience -- we have had fewer questions about this behavior lately.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
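For context, the record cache discussed in the issue is controlled by the `cache.max.bytes.buffering` setting, exposed as the `StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG` constant. A minimal sketch of disabling it so every aggregation update is forwarded downstream; plain string keys are used so the snippet stands alone without the kafka-streams dependency, and the broker address is an assumption:

```java
import java.util.Properties;

public class DisableCacheExample {
    public static void main(String[] args) {
        // In a real app these properties would be passed to
        // new KafkaStreams(topology, props).
        Properties props = new Properties();
        props.put("application.id", "wordcount-demo");    // StreamsConfig.APPLICATION_ID_CONFIG
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        // A zero-byte record cache disables buffering, so each intermediate
        // aggregation result is emitted downstream immediately instead of
        // being deduplicated in the cache.
        props.put("cache.max.bytes.buffering", "0");      // StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG
        System.out.println(props.getProperty("cache.max.bytes.buffering"));
    }
}
```

With the cache disabled, a windowed word count emits one update per input record rather than a single "final" result per window, which is exactly the trade-off the remark in the description warns about.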