[GitHub] [kafka] akatona84 commented on pull request #13733: KAFKA-13337: fix of possible java.nio.file.AccessDeniedException during Connect plugin directory scan
akatona84 commented on PR #13733: URL: https://github.com/apache/kafka/pull/13733#issuecomment-1558629671

> Is there any way to decide if a specific file/dir is meant to be a plugin?

Currently the code only checks whether the entry is a directory or has a zip, jar, or class extension. For an unreadable entry you can't decide anything, not even that much. The actual plugin scanning happens a bit later.

The issues can be:
- IO exceptions - reading problems
- URL/path problems - bad paths were given

(I don't see other possible cases.)

It would be a breaking change to prevent Connect from starting in case of errors like these.

-- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
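The check described above (directory, or a zip/jar/class extension, with readability as a separate question) can be sketched roughly as follows. This is a minimal illustration, not the actual Kafka code; `looksLikePlugin` and `isScannable` are hypothetical helper names:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Locale;

public class PluginPathFilter {
    // A path is considered a plugin candidate if it is a directory or
    // ends in .zip, .jar, or .class. For an unreadable entry this
    // classification cannot be trusted, because nothing about it can
    // actually be inspected.
    static boolean looksLikePlugin(Path path) {
        if (Files.isDirectory(path)) {
            return true;
        }
        String name = path.getFileName().toString().toLowerCase(Locale.ROOT);
        return name.endsWith(".zip") || name.endsWith(".jar") || name.endsWith(".class");
    }

    // Readability check that would let a scanner skip unreadable entries
    // up front instead of failing on them later.
    static boolean isScannable(Path path) {
        return Files.isReadable(path) && looksLikePlugin(path);
    }

    public static void main(String[] args) {
        System.out.println(looksLikePlugin(Path.of("connector.jar"))); // true
        System.out.println(looksLikePlugin(Path.of("readme.txt")));    // false
    }
}
```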
akatona84 commented on PR #13733: URL: https://github.com/apache/kafka/pull/13733#issuecomment-1556885401

I just wrote down my motivation to exclude unreadable entries from the plugin path. This specific case could easily be solved with better directory organization, but you made me think about what I really wanted. :)

IMO the issue is that ALL plugins (or at least the ones after the faulty entry) are skipped whenever there's a bad one, and startup doesn't fail, they are all just silently ignored. (In my case, MM2 relied on a config provider plugin, hence it failed to start.) With the current code, if an admin adds a plugin incorrectly, the Connect cluster can start up, but the connectors/config providers will fail. So either:
- we should fail the herder startup entirely, OR
- do the error handling properly, per plugin.

I'm rewriting the PR aiming for this behavior, and I'm curious what you think about it and my argument.
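The second option, per-plugin error handling, could look roughly like the sketch below: catch the failure for each plugin location individually and continue with the rest, instead of letting one bad entry abort the loop. This is a hypothetical illustration, not the actual registration code; `registerAll` and `register` are made-up names:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class PerPluginRegistration {
    // Per-plugin error handling: a failure on one plugin location is
    // logged and skipped, so the remaining plugins still get loaded.
    static List<Path> registerAll(List<Path> pluginLocations) {
        List<Path> registered = new ArrayList<>();
        for (Path location : pluginLocations) {
            try {
                register(location);           // may throw for a broken entry
                registered.add(location);
            } catch (IOException e) {
                // Log and move on instead of aborting the whole scan.
                System.err.println("Failed to register " + location + ": " + e.getMessage());
            }
        }
        return registered;
    }

    // Stand-in for the real registration step; throws for unreadable entries.
    static void register(Path location) throws IOException {
        if (!Files.isReadable(location)) {
            throw new IOException("not readable: " + location);
        }
    }

    public static void main(String[] args) throws IOException {
        Path good = Files.createTempFile("plugin", ".jar");
        Path bad = Path.of("/nonexistent/.pki");
        System.out.println(registerAll(List.of(bad, good)).size()); // 1
    }
}
```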
akatona84 commented on PR #13733: URL: https://github.com/apache/kafka/pull/13733#issuecomment-1554580666

Without filtering out the non-readable entries, the scan fails later and the failure gets ignored: it won't load any plugins, not just the problematic one. It happens around here: https://github.com/apache/kafka/blob/3109e9c843e33057dd5d823c50c41fb91dc1a8fc/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/isolation/DelegatingClassLoader.java#L269

So we should at least load the plugins we do have access to. This is the scenario:
- plugins are located in /var/lib/kafka
- this also happens to be the kafka user's home, yet it is world-readable
- a directory readable only by the kafka user was put there (.pki)
- MirrorMaker 2 uses /var/lib/kafka too, but MM2 is executed by a user other than kafka
- MM2 failed to load any plugins because of this unreadable .pki dir, and failed to start because its config has entries that would need the (currently not-loaded) config providers to resolve

This was the motivation for the ticket and the PR.
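The fix the PR argues for, filtering out unreadable entries like the .pki directory before the scan, can be sketched as a readability check per directory entry. A minimal sketch under stated assumptions; `scannableEntries` is a hypothetical helper, not the actual Connect scanner:

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class PluginDirScan {
    // Collect only the readable entries of a plugin directory, logging
    // and skipping the rest, so one unreadable entry (e.g. a .pki dir
    // owned by another user) doesn't abort the whole plugin scan.
    static List<Path> scannableEntries(Path pluginDir) throws IOException {
        List<Path> result = new ArrayList<>();
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(pluginDir)) {
            for (Path entry : stream) {
                if (Files.isReadable(entry)) {
                    result.add(entry);
                } else {
                    System.err.println("Skipping unreadable plugin location: " + entry);
                }
            }
        }
        return result;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("plugins");
        Files.createFile(dir.resolve("connector.jar"));
        System.out.println(scannableEntries(dir).size()); // 1
    }
}
```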