[GitHub] [kafka] chia7712 commented on a change in pull request #11719: MINOR: remove redundant argument in logging

2022-01-29 Thread GitBox


chia7712 commented on a change in pull request #11719:
URL: https://github.com/apache/kafka/pull/11719#discussion_r795139659



##
File path: clients/src/main/java/org/apache/kafka/clients/consumer/internals/AbstractCoordinator.java
##
@@ -718,7 +718,7 @@ public void handle(JoinGroupResponse joinResponse, RequestFuture fut
 .setGenerationId(generation.generationId)
 .setAssignments(groupAssignmentList)
 );
-log.debug("Sending leader SyncGroup to coordinator {}: {}", this.coordinator, this.generation, requestBuilder);
+log.debug("Sending leader SyncGroup to coordinator {}: {}", this.coordinator, this.generation);

Review comment:
   oh, you are right
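   For context, the redundancy is mechanical: an SLF4J-style format string consumes one argument per `{}` placeholder, so with two placeholders a third trailing argument (unless it is a `Throwable`) is never rendered. A minimal sketch of that rule — a hypothetical helper, not SLF4J code:

```java
// Hypothetical helper (not part of SLF4J) illustrating why the third
// argument was redundant: the format string has two "{}" placeholders,
// so only two arguments are consumed when the message is rendered.
public class PlaceholderCheck {

    // Count "{}" placeholders in an SLF4J-style format string.
    // (A real implementation would also handle escaped "\\{}".)
    static int countPlaceholders(String format) {
        int count = 0;
        for (int i = 0; i + 1 < format.length(); i++) {
            if (format.charAt(i) == '{' && format.charAt(i + 1) == '}') {
                count++;
                i++; // skip the matching '}'
            }
        }
        return count;
    }

    public static void main(String[] args) {
        String format = "Sending leader SyncGroup to coordinator {}: {}";
        // Two placeholders, but the old call passed three arguments.
        System.out.println(countPlaceholders(format)); // prints 2
    }
}
```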




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [kafka] dajac commented on a change in pull request #11719: MINOR: remove redundant argument in logging

2022-01-29 Thread GitBox


dajac commented on a change in pull request #11719:
URL: https://github.com/apache/kafka/pull/11719#discussion_r795139019



##
File path: clients/src/main/java/org/apache/kafka/clients/consumer/internals/AbstractCoordinator.java
##
@@ -718,7 +718,7 @@ public void handle(JoinGroupResponse joinResponse, RequestFuture fut
 .setGenerationId(generation.generationId)
 .setAssignments(groupAssignmentList)
 );
-log.debug("Sending leader SyncGroup to coordinator {}: {}", this.coordinator, this.generation, requestBuilder);
+log.debug("Sending leader SyncGroup to coordinator {}: {}", this.coordinator, this.generation);

Review comment:
   I think the intent was to remove ‘generation’ in the original PR.








[jira] [Updated] (KAFKA-13152) Replace "buffered.records.per.partition" with "input.buffer.max.bytes"

2022-01-29 Thread Luke Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luke Chen updated KAFKA-13152:
--
Fix Version/s: 3.2.0

> Replace "buffered.records.per.partition" with "input.buffer.max.bytes" 
> ---
>
> Key: KAFKA-13152
> URL: https://issues.apache.org/jira/browse/KAFKA-13152
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Sagar Rao
>Priority: Major
>  Labels: needs-kip
> Fix For: 3.2.0
>
>
> The current config "buffered.records.per.partition" controls the maximum 
> number of records to buffer per partition; once it is exceeded, we pause 
> fetching from that partition. However, this config has two issues:
> * It's a per-partition config, so the total memory consumed depends on 
> the dynamic number of partitions assigned.
> * Record size can vary from case to case.
> Hence it's hard to bound the memory usage of this buffering. We should 
> consider deprecating that config in favor of a global one, e.g. 
> "input.buffer.max.bytes", which controls how many bytes in total are 
> allowed to be buffered. This is doable since we buffer the raw records in .
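The proposal can be sketched as follows. The class and method names are hypothetical (not the Kafka Streams API); the sketch only illustrates moving the pause decision from a per-partition record count to a global byte budget:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the proposed behavior (names hypothetical): pause fetching once
// the *total* buffered bytes across all partitions exceed a global bound,
// instead of counting records per partition.
public class ByteBoundedBuffer {
    private final long maxBytes;   // the global bound, e.g. input.buffer.max.bytes
    private long totalBytes = 0;
    private final Map<Integer, Long> bytesPerPartition = new HashMap<>();

    public ByteBoundedBuffer(long maxBytes) {
        this.maxBytes = maxBytes;
    }

    // Account for a buffered raw record's size on its partition.
    public void add(int partition, long recordBytes) {
        bytesPerPartition.merge(partition, recordBytes, Long::sum);
        totalBytes += recordBytes;
    }

    // Release bytes once a record has been processed.
    public void consume(int partition, long recordBytes) {
        bytesPerPartition.merge(partition, -recordBytes, Long::sum);
        totalBytes -= recordBytes;
    }

    // Fetching pauses when the global byte budget is exceeded, regardless
    // of how the bytes are spread across partitions.
    public boolean shouldPauseFetching() {
        return totalBytes > maxBytes;
    }

    public static void main(String[] args) {
        ByteBoundedBuffer buffer = new ByteBoundedBuffer(100L);
        buffer.add(0, 60L);
        buffer.add(1, 50L);  // total is now 110 bytes, over the 100-byte budget
        System.out.println(buffer.shouldPauseFetching()); // prints true
    }
}
```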



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [kafka] showuon commented on a change in pull request #11690: KAFKA-13599: Upgrade RocksDB to 6.27.3

2022-01-29 Thread GitBox


showuon commented on a change in pull request #11690:
URL: https://github.com/apache/kafka/pull/11690#discussion_r795136975



##
File path: streams/src/main/java/org/apache/kafka/streams/state/internals/RocksDBGenericOptionsToDbOptionsColumnFamilyOptionsAdapter.java
##
@@ -1662,16 +1673,95 @@ public ConcurrentTaskLimiter compactionThreadLimiter() {
 return columnFamilyOptions.compactionThreadLimiter();
 }
 
+@Override
 public Options setCompactionFilter(final AbstractCompactionFilter<? extends AbstractSlice<?>> compactionFilter) {
 columnFamilyOptions.setCompactionFilter(compactionFilter);
 return this;
 }
 
+@Override
 public Options setCompactionFilterFactory(final AbstractCompactionFilterFactory<? extends AbstractCompactionFilter<?>> compactionFilterFactory) {
 columnFamilyOptions.setCompactionFilterFactory(compactionFilterFactory);
 return this;
 }
 
+@Override
+public Options setEnableBlobFiles(final boolean enableBlobFiles) {

Review comment:
   Starting here, these are options for BlobDB. In the upstream 
`Options.java`, there are comments at the beginning and at the end of these 
options. Could we add something similar here?
   ```java
   //
   // BEGIN options for blobs (integrated BlobDB)
   //
   ```
   ```java
   //
   // END options for blobs (integrated BlobDB)
   //
   ```
   ref: 
https://github.com/facebook/rocksdb/blob/main/java/src/main/java/org/rocksdb/Options.java#L1996-L1998
   
   Thanks.








[GitHub] [kafka] dongjinleekr opened a new pull request #11720: KAFKA-13625: Fix inconsistency in dynamic application log levels

2022-01-29 Thread GitBox


dongjinleekr opened a new pull request #11720:
URL: https://github.com/apache/kafka/pull/11720


   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   






[GitHub] [kafka] chia7712 opened a new pull request #11719: MINOR: remove redundant argument in logging

2022-01-29 Thread GitBox


chia7712 opened a new pull request #11719:
URL: https://github.com/apache/kafka/pull/11719


   related to #11451
   
   ### Committer Checklist (excluded from commit message)
   - [ ] Verify design and implementation 
   - [ ] Verify test coverage and CI build status
   - [ ] Verify documentation (including upgrade notes)
   






[jira] [Updated] (KAFKA-13624) Add Metric for Store Cache Size

2022-01-29 Thread Sagar Rao (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sagar Rao updated KAFKA-13624:
--
External issue URL:   (was: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-818%3A+Introduce+cache-size-bytes-total+Task+Level+Metric)

> Add Metric for Store Cache Size
> ---
>
> Key: KAFKA-13624
> URL: https://issues.apache.org/jira/browse/KAFKA-13624
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Sagar Rao
>Assignee: Sagar Rao
>Priority: Major
>  Labels: needs-kip
>
> KIP-770 introduced a new metric called *input-buffer-bytes-total* to 
> track the total number of bytes accumulated by a task. While working through 
> its PR, it was suggested to add a similar metric, 
> *cache-size-bytes-total*, to track the cache size in bytes for a task. 





[jira] [Updated] (KAFKA-13624) Add Metric for Store Cache Size

2022-01-29 Thread Sagar Rao (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sagar Rao updated KAFKA-13624:
--
External issue URL: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-818%3A+Introduce+cache-size-bytes-total+Task+Level+Metric

> Add Metric for Store Cache Size
> ---
>
> Key: KAFKA-13624
> URL: https://issues.apache.org/jira/browse/KAFKA-13624
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Sagar Rao
>Assignee: Sagar Rao
>Priority: Major
>  Labels: needs-kip
>
> KIP-770 introduced a new metric called *input-buffer-bytes-total* to 
> track the total number of bytes accumulated by a task. While working through 
> its PR, it was suggested to add a similar metric, 
> *cache-size-bytes-total*, to track the cache size in bytes for a task. 





[jira] [Resolved] (KAFKA-13152) Replace "buffered.records.per.partition" with "input.buffer.max.bytes"

2022-01-29 Thread Sagar Rao (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sagar Rao resolved KAFKA-13152.
---
Resolution: Done

> Replace "buffered.records.per.partition" with "input.buffer.max.bytes" 
> ---
>
> Key: KAFKA-13152
> URL: https://issues.apache.org/jira/browse/KAFKA-13152
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Sagar Rao
>Priority: Major
>  Labels: needs-kip
>
> The current config "buffered.records.per.partition" controls the maximum 
> number of records to buffer per partition; once it is exceeded, we pause 
> fetching from that partition. However, this config has two issues:
> * It's a per-partition config, so the total memory consumed depends on 
> the dynamic number of partitions assigned.
> * Record size can vary from case to case.
> Hence it's hard to bound the memory usage of this buffering. We should 
> consider deprecating that config in favor of a global one, e.g. 
> "input.buffer.max.bytes", which controls how many bytes in total are 
> allowed to be buffered. This is doable since we buffer the raw records in .





[GitHub] [kafka] showuon commented on pull request #11691: KAFKA-13598: enable idempotence producer by default and validate the configs

2022-01-29 Thread GitBox


showuon commented on pull request #11691:
URL: https://github.com/apache/kafka/pull/11691#issuecomment-1024866088


   @hachikuji @kirktrue @mimaison , thanks for your comments. I've updated the 
PR. 
   @ijuma , I've added an entry about this change to the notable changes for 
3.2.0 in `upgrade.html`.
   
   Thank you.
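   For reference, these are the producer settings the change interacts with (the config names are real; the values shown are a sketch of what idempotence requires and, per this PR, what the client validates):

```properties
# Idempotence on by default; the settings below must be compatible with it.
enable.idempotence=true
# Idempotence requires acknowledgement from all in-sync replicas.
acks=all
# Retries must be enabled (default is Integer.MAX_VALUE).
retries=2147483647
# At most 5 in-flight requests per connection to preserve ordering guarantees.
max.in.flight.requests.per.connection=5
```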






[jira] [Assigned] (KAFKA-13602) Allow to broadcast a result record

2022-01-29 Thread Sagar Rao (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sagar Rao reassigned KAFKA-13602:
-

Assignee: Sagar Rao

> Allow to broadcast a result record
> --
>
> Key: KAFKA-13602
> URL: https://issues.apache.org/jira/browse/KAFKA-13602
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Sagar Rao
>Priority: Major
>  Labels: needs-kip, newbie++
>
> From time to time, users ask how they can send a record to more than one 
> partition in a sink topic. Currently, this is only possible by replicating the 
> message N times before the sink and using a custom partitioner to write the N 
> messages into the N different partitions.
> It might be worth making this easier by adding a new feature for it. There are 
> multiple options:
>  * extend `to()` / `addSink()` with a "broadcast" option/config
>  * add `toAllPartitions()` / `addBroadcastSink()` methods
>  * allow `StreamPartitioner` to return `-1` for "all partitions"
>  * extend `StreamPartitioner` to allow returning more than one partition (i.e. 
> a list/collection of integers instead of a single int)
> The first three options imply that only a "full broadcast" is supported, so 
> they are less flexible. On the other hand, they are easier to use (especially 
> the first two, as they do not require implementing a custom partitioner).
> The last option would be the most flexible and would also allow for a "partial 
> broadcast" (aka multi-cast) pattern. It might also be possible to combine two 
> options, or even to come up with a totally different idea.
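The last option could look roughly like this. The interface and method names are hypothetical, not the actual Kafka Streams `StreamPartitioner` API; the sketch only shows a partitioner returning a collection of target partitions:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the "return more than one partition" option (names hypothetical,
// not the Kafka Streams API): a partitioner that yields a list of target
// partitions, covering full broadcast as well as partial "multi-cast".
public class MultiCastExample {

    interface MultiCastPartitioner<K, V> {
        // Return every partition the record should be written to.
        List<Integer> partitions(String topic, K key, V value, int numPartitions);
    }

    // Full broadcast: target every partition of the sink topic.
    static List<Integer> broadcast(int numPartitions) {
        List<Integer> all = new ArrayList<>();
        for (int p = 0; p < numPartitions; p++) {
            all.add(p);
        }
        return all;
    }

    public static void main(String[] args) {
        MultiCastPartitioner<String, String> partitioner =
            (topic, key, value, n) -> broadcast(n);
        // A record sent through this partitioner lands in all three partitions.
        System.out.println(partitioner.partitions("sink", "key", "value", 3)); // [0, 1, 2]
    }
}
```

A partial multi-cast would simply return a subset of the partitions instead of all of them.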


