[GitHub] [kafka-site] guozhangwang merged pull request #418: Add atguigu (http://www.atguigu.com/) to the list of the "Powered By ❤"
guozhangwang merged PR #418: URL: https://github.com/apache/kafka-site/pull/418 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Created] (KAFKA-14029) Consumer response serialization could block other response handlers at scale
BugFinder created KAFKA-14029: Summary: Consumer response serialization could block other response handlers at scale Key: KAFKA-14029 URL: https://issues.apache.org/jira/browse/KAFKA-14029 Project: Kafka Issue Type: Improvement Components: consumer Affects Versions: 3.2.0 Reporter: BugFinder

Hi, we have been using our in-house tools to test Kafka's scalability, to get an idea of how a large-scale deployment behaves and where the bottlenecks are. For now we are looking at version 3.2, focused on a many-consumers scenario. On the consumer side, we want to report a possible issue and eventually propose a solution, aiming to build our expertise in the system.

When adding a new consumer to a group, the code path

org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle  // has a synchronized block
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.onLeaderElected
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onLeaderElected
org.apache.kafka.clients.consumer.internals.ConsumerProtocol.serializeAssignment
org.apache.kafka.clients.consumer.internals.ConsumerProtocol.serializeAssignment
org.apache.kafka.common.protocol.MessageUtil.toVersionPrefixedByteBuffer  // linear in the size of the message

could end up being costly when the message is large. toVersionPrefixedByteBuffer appears to be linear in the size of the message, and although writing an array in linear time is not unreasonable in itself, under certain conditions, e.g. while holding a lock, it can cause undesired contention. In this case it is invoked to serialize the assignment when adding a new consumer (and there is another loop wrapping this path that appears to depend on the number of assignments, which could be another problematic dimension if it grows, causing undesired nesting), here in org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onLeaderElected:
{code:java}
// ...
Map<String, ByteBuffer> groupAssignment = new HashMap<>();
for (Map.Entry<String, Assignment> assignmentEntry : assignments.entrySet()) {
    // calls toVersionPrefixedByteBuffer
    ByteBuffer buffer = ConsumerProtocol.serializeAssignment(assignmentEntry.getValue());
    groupAssignment.put(assignmentEntry.getKey(), buffer);
}
// ...
{code}

The question here is: is there a need to serialize the assignment inside the synchronized block? If the assignment is too large, it could easily add a few seconds to the request and block other handlers that use the same lock, like HeartbeatResponseHandler. -- This message was sent by Atlassian Jira (v8.20.7#820007)
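A minimal sketch of the fix the reporter hints at: serialize the assignments before taking the coordinator lock, so only a cheap hand-off happens inside the critical section. The `serializeAssignment` stand-in below is illustrative (a version-prefixed buffer, as `toVersionPrefixedByteBuffer` produces), not Kafka's actual implementation, and the lock object is a placeholder for the coordinator monitor:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

public class AssignmentSerialization {
    // Stand-in for ConsumerProtocol.serializeAssignment: writes a short
    // version prefix followed by the payload, so it is linear in payload size.
    static ByteBuffer serializeAssignment(String assignment) {
        byte[] payload = assignment.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buffer = ByteBuffer.allocate(Short.BYTES + payload.length);
        buffer.putShort((short) 1); // hypothetical version prefix
        buffer.put(payload);
        buffer.flip();
        return buffer;
    }

    // Linear-time serialization done OUTSIDE any lock, so a large
    // assignment cannot stall other handlers contending on the monitor.
    static Map<String, ByteBuffer> serializeAll(Map<String, String> assignments) {
        Map<String, ByteBuffer> groupAssignment = new HashMap<>();
        for (Map.Entry<String, String> e : assignments.entrySet()) {
            groupAssignment.put(e.getKey(), serializeAssignment(e.getValue()));
        }
        return groupAssignment;
    }

    public static void main(String[] args) {
        Map<String, String> assignments = Map.of("consumer-1", "topicA-0,topicA-1");
        Map<String, ByteBuffer> serialized = serializeAll(assignments); // pre-serialize

        synchronized (AssignmentSerialization.class) { // placeholder for the coordinator lock
            // Only a cheap map hand-off remains inside the critical section.
            System.out.println("entries=" + serialized.size());
        }
    }
}
```

Whether this restructuring is safe depends on whether the assignments can change between serialization and the locked section, which is exactly the question the ticket raises.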
Kafka Log4J vulnerabilities - Urgent
Hi Team, trust you are doing well, and I hope I'm mailing the correct DL (if not, kindly point me to the right one)! This mail is w.r.t. the Kafka Log4j vulnerabilities. PFB the description:

The Log4j 1.x vulnerability with Kafka is a known vulnerability. The published workaround is to remove the appender classes from the JAR artefact; this has already been implemented by the DevOps team. Kafka documentation referred from here: https://kafka.apache.org/cve-list

However, our Corporate Security Team wants Log4j 1.x versions to be completely removed and/or upgraded to Log4j 2.x. We have not come across any published setup steps in the Kafka documentation. There is one page that talks about an upgrade proposal, but we are unsure whether it can be implemented (link below): https://cwiki.apache.org/confluence/display/KAFKA/KIP-719%3A+Deprecate+Log4J+Appender#KIP719:DeprecateLog4JAppender-1.Deprecatelog4j-appender

Please advise on the best way forward. This is a crucial issue and we are getting daily follow-ups from the Security Teams.

Thanks, Mayank
[jira] [Created] (KAFKA-14028) Add audit log in kafka server when clients try to fetch
Justinwins created KAFKA-14028: Summary: Add audit log in kafka server when clients try to fetch Key: KAFKA-14028 URL: https://issues.apache.org/jira/browse/KAFKA-14028 Project: Kafka Issue Type: New Feature Reporter: Justinwins

Sometimes it's pretty useful to know who is consuming data from the brokers, especially when a company needs to perform audits. So when a consumer successfully connects to the server, print the client IP from the Session to the log.
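To make the request concrete, here is a purely hypothetical sketch of the kind of audit line such a feature might emit on a successful fetch. The method and field names below are illustrative and do not correspond to any actual Kafka broker code or log format:

```java
public class FetchAuditLogger {
    // Builds an audit line for a successful fetch. In a real broker this
    // would pull the client address from the request session; here the
    // caller supplies it, since this is only a format sketch.
    static String auditLine(String clientId, String clientIp, String topic, int partition) {
        return String.format("AUDIT fetch clientId=%s clientIp=%s topic=%s partition=%d",
                clientId, clientIp, topic, partition);
    }

    public static void main(String[] args) {
        // Example: an analytics consumer fetching from partition 0 of "orders".
        System.out.println(auditLine("analytics-app", "10.1.2.3", "orders", 0));
    }
}
```

In practice this kind of information can often be obtained today via broker request logging or authorizer audit tooling, which may be worth checking before adding a new feature.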
[jira] [Created] (KAFKA-14027) org.apache.kafka.connect.mirror.MirrorClient class clean up
Justinwins created KAFKA-14027: Summary: org.apache.kafka.connect.mirror.MirrorClient class clean up Key: KAFKA-14027 URL: https://issues.apache.org/jira/browse/KAFKA-14027 Project: Kafka Issue Type: New Feature Components: KafkaConnect Reporter: Justinwins

1)
{code:java}
public Set<String> upstreamClusters() throws InterruptedException {
    return listTopics().stream()
        .filter(this::isHeartbeatTopic)
        .flatMap(x -> allSources(x).stream())
        .distinct()
        .collect(Collectors.toSet());
}
{code}
There is no need to use `distinct()` here, since `Collectors.toSet()` already deduplicates.

2) We can use try-with-resources instead of try/finally in the remoteConsumerOffsets method.
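A runnable sketch of the cleaned-up method, with the MirrorClient internals mocked out (the `listTopics`, `isHeartbeatTopic`, and `allSources` stand-ins below are illustrative, not the real implementations). Dropping `distinct()` changes nothing observable because the terminal `Collectors.toSet()` already discards duplicates:

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class MirrorClientCleanup {
    // Hypothetical stand-ins for MirrorClient internals.
    static List<String> listTopics() {
        return List.of("us-east.heartbeats", "eu-west.heartbeats", "orders");
    }

    static boolean isHeartbeatTopic(String topic) {
        return topic.endsWith("heartbeats");
    }

    static List<String> allSources(String topic) {
        // Extract the source-cluster alias prefix, e.g. "us-east" from "us-east.heartbeats".
        return List.of(topic.substring(0, topic.indexOf('.')));
    }

    // Cleaned-up version: no distinct() needed, the Set deduplicates.
    static Set<String> upstreamClusters() {
        return listTopics().stream()
                .filter(MirrorClientCleanup::isHeartbeatTopic)
                .flatMap(t -> allSources(t).stream())
                .collect(Collectors.toSet());
    }

    public static void main(String[] args) {
        System.out.println(upstreamClusters());
    }
}
```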
Re: [VOTE] KIP-840: Config file option for MessageReader/MessageFormatter in ConsoleProducer/ConsoleConsumer
Hello! A little ping on this vote. Thanks.

On Thu, Jun 16, 2022 at 16:36, Alexandre Garnier wrote:
> Hi everyone.
>
> Anyone want to give a last binding vote for this KIP?
>
> Thanks.
>
> On Tue, Jun 7, 2022 at 14:53, Alexandre Garnier wrote:
>> Hi!
>>
>> A little reminder to vote for this KIP.
>>
>> Thanks.
>>
>> On Wed, Jun 1, 2022 at 10:58, Alexandre Garnier wrote:
>> > Hi everyone!
>> >
>> > I propose to start voting for KIP-840:
>> > https://cwiki.apache.org/confluence/x/bBqhD
>> >
>> > Thanks,
>> > --
>> > Alex
[jira] [Created] (KAFKA-14026) ConsumerGroupReplicationPolicy (counterpart to DefaultReplicationPolicy for topic) can be added
Justinwins created KAFKA-14026: Summary: ConsumerGroupReplicationPolicy (counterpart to DefaultReplicationPolicy for topic) can be added Key: KAFKA-14026 URL: https://issues.apache.org/jira/browse/KAFKA-14026 Project: Kafka Issue Type: New Feature Components: mirrormaker Reporter: Justinwins

There is an existing feature which allows renaming downstream topic names; the core interface is DefaultReplicationPolicy. A new feature which allows renaming downstream consumer group names could be developed along the same lines. I think this would be a very useful feature: MM2 users could choose whether to rename or not, especially when the target cluster already has consumer groups with the same names as those in the source cluster. A KIP may be needed.
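To make the proposal concrete, here is a purely hypothetical sketch of what such a policy interface might look like, modeled on the way the existing topic replication policy prefixes remote topic names with the source-cluster alias. None of these names exist in MM2 today; they are illustrative only:

```java
public class GroupRenamePolicySketch {
    // Hypothetical counterpart to ReplicationPolicy, but for consumer groups.
    interface ConsumerGroupReplicationPolicy {
        String formatRemoteGroup(String sourceClusterAlias, String group);
    }

    // Default behavior analogous to the "<sourceAlias><separator><topic>"
    // naming used for replicated topics, applied to group ids instead.
    static class DefaultGroupReplicationPolicy implements ConsumerGroupReplicationPolicy {
        private final String separator;

        DefaultGroupReplicationPolicy(String separator) {
            this.separator = separator;
        }

        @Override
        public String formatRemoteGroup(String sourceClusterAlias, String group) {
            return sourceClusterAlias + separator + group;
        }
    }

    public static void main(String[] args) {
        ConsumerGroupReplicationPolicy policy = new DefaultGroupReplicationPolicy(".");
        // A group replicated from the "us-east" cluster gets a disambiguating prefix.
        System.out.println(policy.formatRemoteGroup("us-east", "payments-consumers"));
    }
}
```

An identity policy (returning the group name unchanged) would give users the "rename or not" choice the ticket asks for.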
[FINAL CALL] - Travel Assistance to ApacheCon New Orleans 2022
To all committers and non-committers. This is a final call to apply for travel/hotel assistance to get to and stay in New Orleans for ApacheCon 2022. Applications have been extended by one week, so the application deadline is now the 8th of July 2022. The rest of this email is a copy of what has been sent out previously.

We will be supporting ApacheCon North America in New Orleans, Louisiana, on October 3rd through 6th, 2022. TAC exists to help those who would like to attend ApacheCon events but are unable to do so for financial reasons. This year, we are supporting both committers and non-committers involved with projects at the Apache Software Foundation, or open source projects in general. For more info on this year's applications and qualifying criteria, please visit the TAC website at http://www.apache.org/travel/ Applications have been extended until the 8th of July 2022.

Important: applicants have until the closing date above to submit their applications (which should contain as much supporting material as required to efficiently and accurately process their request); this will enable TAC to announce successful awards shortly afterwards. As usual, TAC expects to deal with a range of applications from a diverse range of backgrounds. We therefore encourage (as always) anyone thinking about sending in an application to do so ASAP.

Why should you attend as a TAC recipient? We encourage you to read stories from past recipients at https://apache.org/travel/stories/ . Also note that previous TAC recipients have gone on to become Committers, PMC Members, ASF Members, Directors of the ASF Board and Infrastructure Staff members. Others have gone from Committer to full-time open source developers! How far can you go? Let TAC help get you there.

=== Gavin McDonald on behalf of the Travel Assistance Committee.
[GitHub] [kafka-site] PhilippB21 commented on pull request #418: Add atguigu (http://www.atguigu.com/) to the list of the "Powered By ❤"
PhilippB21 commented on PR #418: URL: https://github.com/apache/kafka-site/pull/418#issuecomment-1166965966 @realdengziqi Sorry, but I am not authorized to do a review; so far I have only submitted our company logo.