[ https://issues.apache.org/jira/browse/KAFKA-7509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17509095#comment-17509095 ]

Matthias J. Sax commented on KAFKA-7509:
----------------------------------------

{quote} # Breaking backwards compatibility for a lot of Kafka components 
(producer/consumer interceptors, Connect REST extensions, etc.){quote}
Why? Could we not feature flag it (disable by default to preserve backward 
compatibility; maybe log a WARN message if not enabled; have a path forward to 
enable by default in some future major release)?
{quote} # Complicating the configuration behavior for applications with this 
type of "nested" configuration surface{quote}
Yes, it's a trade-off between flexibility and complexity – personally, I think 
that in most cases deep nesting won't occur, and single-level nesting seems 
not too complicated and more flexible compared to a "config 
enable/disable warn-logging".
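To make the single-level nesting idea concrete: it could look like prefix-scoped keys, in the spirit of the existing `AbstractConfig.originalsWithPrefix()` helper. The sketch below is a standalone illustration, not Kafka's actual implementation; the key names and the `withPrefix` helper are made up for the example.

```java
import java.util.*;

public class PrefixedConfig {
    // Single-level nesting via key prefixes: keys like "producer.linger.ms"
    // are scoped to the inner component, and the prefix is stripped when the
    // inner component's config is extracted.
    static Map<String, String> withPrefix(Map<String, String> props, String prefix) {
        Map<String, String> scoped = new HashMap<>();
        for (Map.Entry<String, String> e : props.entrySet()) {
            if (e.getKey().startsWith(prefix)) {
                scoped.put(e.getKey().substring(prefix.length()), e.getValue());
            }
        }
        return scoped;
    }

    public static void main(String[] args) {
        Map<String, String> worker = Map.of(
                "bootstrap.servers", "localhost:9092",
                "producer.linger.ms", "5");
        // Only the producer-scoped key is handed to the producer, with the
        // prefix stripped -- no unknown keys, hence no spurious WARN.
        System.out.println(withPrefix(worker, "producer."));
    }
}
```

The trade-off discussed above: the prefix convention is a small amount of extra configuration surface, but it lets each inner component see only keys meant for it.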
{quote} # Makes it impossible to have sub-configured components that are aware 
of their parent's configuration (for example, you could no longer have a 
Connect REST extension that's aware of the entire Connect worker's config 
without duplicating those same properties into the sub-config for that REST 
extension){quote}
This is a use case I have never encountered before. From an encapsulation point of 
view, I am wondering why an inner component would need the config of the outer 
one? Can you elaborate?
{quote}Ultimately, this still seems a little too aggressive of a change for the 
problem that it's trying to solve. If we were redesigning these APIs from the 
ground up, it would certainly be beneficial, but considering how much has been 
built on top of these APIs already and how much work it'd take for users to 
adjust to the proposed changes, it doesn't seem like a friendly tradeoff. Plus, 
for some situations (like the example with REST extensions), it's unclear how 
we'd want to proceed.
{quote}
Yeah, that's always tricky. Personally, I tend to prefer "the right" solution 
even if it might be more complex to get there. But I don't feel too strongly 
about it either. I agree it's "just" about log messages, so maybe not worth it. 
My personal concern with disabling them (either completely via config or by 
moving them to DEBUG) is that actually useful warnings would be affected, too. 
It just seems too coarse grained – for this case, I would rather have spurious / 
annoying WARNs than allow disabling them.

But I guess details would need to get discussed on a KIP anyway... 

> Kafka Connect logs unnecessary warnings about unused configurations
> -------------------------------------------------------------------
>
>                 Key: KAFKA-7509
>                 URL: https://issues.apache.org/jira/browse/KAFKA-7509
>             Project: Kafka
>          Issue Type: Improvement
>          Components: clients, KafkaConnect
>    Affects Versions: 0.10.2.0
>            Reporter: Randall Hauch
>            Priority: Major
>
> When running Connect, the logs contain quite a few warnings about "The 
> configuration '{}' was supplied but isn't a known config." This occurs when 
> Connect creates producers, consumers, and admin clients, because the 
> AbstractConfig is logging unused configuration properties upon construction. 
> It's complicated by the fact that the Producer, Consumer, and AdminClient all 
> create their own AbstractConfig instances within the constructor, so we can't 
> even call its {{ignore(String key)}} method.
> See also KAFKA-6793 for a similar issue with Streams.
> There are no arguments in the Producer, Consumer, or AdminClient constructors 
> to control whether the configs log these warnings, so a simpler workaround 
> is to only pass those configuration properties to the Producer, Consumer, and 
> AdminClient that the ProducerConfig, ConsumerConfig, and AdminClientConfig 
> configdefs know about.
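The workaround described in the issue – passing each client only the keys its ConfigDef knows about – can be sketched as below. In real Connect code the known-key sets would come from the actual APIs (`ProducerConfig.configNames()`, etc.); here a hardcoded set stands in so the sketch is self-contained.

```java
import java.util.*;
import java.util.stream.*;

public class ConfigFilter {
    // Keep only the properties the target client's ConfigDef declares.
    // knownKeys would be e.g. ProducerConfig.configNames() in Connect itself.
    static <V> Map<String, V> filterKnown(Map<String, V> workerProps,
                                          Set<String> knownKeys) {
        return workerProps.entrySet().stream()
                .filter(e -> knownKeys.contains(e.getKey()))
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    }

    public static void main(String[] args) {
        Map<String, String> worker = Map.of(
                "bootstrap.servers", "localhost:9092",
                "rest.port", "8083"); // Connect-only key: would trigger the WARN
        Set<String> producerKnown = Set.of("bootstrap.servers", "acks", "linger.ms");
        // Only "bootstrap.servers" survives, so the producer's AbstractConfig
        // sees no unknown keys and logs no warning.
        System.out.println(filterKnown(worker, producerKnown));
    }
}
```

As the description notes, this filtering has to happen before construction, since the clients build their own AbstractConfig instances in their constructors.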



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
