Connecting to multiple Kafka clusters with Kerberos enabled
We use the following configuration to connect to a Kafka cluster. How should I configure a single client to connect to multiple clusters with different JAAS configurations? Is it possible to do that?

KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka_client.keytab"
    principal="kafka-clien...@example.com";
};

The information contained in this e-mail is confidential and/or proprietary to Capital One and/or its affiliates and may only be used solely in performance of work or services for Capital One. The information transmitted herewith is intended only for use by the individual or entity to which it is addressed. If the reader of this message is not the intended recipient, you are hereby notified that any review, retransmission, dissemination, distribution, copying or other use of, or taking of any action in reliance upon this information is strictly prohibited. If you have received this communication in error, please contact the sender and delete the material from your computer.
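For context: on Kafka 0.9 the JAAS file named by `java.security.auth.login.config` applies JVM-wide, and the client hardcodes the `KafkaClient` login context, so one JVM effectively gets one Kerberos identity for Kafka. If upgrading the client is an option, Kafka 0.10.2+ (KIP-85) adds the `sasl.jaas.config` client property, which lets each producer/consumer in the same JVM carry its own JAAS entry. A sketch of per-cluster client properties; the keytab paths and principals are placeholder assumptions:

```
# client-a.properties -- client for cluster A (requires Kafka clients 0.10.2+)
security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka
sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
  useKeyTab=true storeKey=true \
  keyTab="/etc/security/keytabs/kafka_client_a.keytab" \
  principal="client-a@EXAMPLE.COM";

# client-b.properties -- client for cluster B, same JVM, different principal
security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka
sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
  useKeyTab=true storeKey=true \
  keyTab="/etc/security/keytabs/kafka_client_b.keytab" \
  principal="client-b@EXAMPLE.COM";
```

On 0.9 itself, the usual workaround is to run the clients for each cluster in separate JVMs, each started with its own `-Djava.security.auth.login.config` pointing at a different JAAS file.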
Running ZooKeeper on the same instances as Kafka brokers in a multi-node cluster
Hi Team, What are the downsides of installing ZooKeeper and Kafka on the same machines in a multi-broker environment? We are trying to run ZooKeeper and Kafka in AWS, and it is becoming difficult for us to maintain ZK and Kafka, with some issues. Re-provisioning the ZK and Kafka instances separately is also getting complicated. Thanks, Prabhakar
Kafka 0.9 on AWS caching the ZooKeeper IP instead of using the DNS name
Hi All, We are using Kafka version 0.9 and are facing issues with Kafka connecting to ZooKeeper. Here are more details: ZooKeeper sits behind an AWS Elastic Load Balancer, and the ELB has been associated with a DNS name, say zookeeper1.com. Our broker configuration looks like this: zookeeper.connect=zookeeper1.com:2181,zookeeper2.com:2181,zookeeper3.com:2181. But what is happening is that Kafka stores the IP address the ZooKeeper hostname first resolves to and keeps using that IP for subsequent connections. Sometimes our ELB IP addresses change, and Kafka is then never able to connect to ZooKeeper. Is there any configuration we can apply so that Kafka always uses the DNS name of ZooKeeper? Thanks, Prabhakar
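One thing worth ruling out is the JVM's own DNS cache rather than Kafka: depending on JVM version and security-manager settings, successful lookups can be cached indefinitely, so the first ELB IP sticks for the life of the process. A hedged sketch of lowering the cache TTLs before starting the broker; the property names are standard JVM networking properties, and the 60-second value is an assumption to tune:

```shell
# Lower the JVM DNS cache TTLs so the broker process re-resolves the ELB
# hostnames instead of reusing the first IP it saw.
# sun.net.inetaddr.ttl caches successful lookups; .negative.ttl caches failures.
export KAFKA_OPTS="-Dsun.net.inetaddr.ttl=60 -Dsun.net.inetaddr.negative.ttl=10"
echo "$KAFKA_OPTS"
# ...then start the broker as usual, e.g.:
#   bin/kafka-server-start.sh config/server.properties
```

The same can be done JVM-wide via `networkaddress.cache.ttl` in the JRE's java.security file. Note also that the ZooKeeper client bundled with Kafka 0.9 (the 3.4.x line) historically resolved the connect string once at startup (see ZOOKEEPER-1506), so a broker restart after an ELB IP change may still be needed even with a short DNS TTL.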
Keeping a high value for zookeeper.connection.timeout.ms
Hi All, Right now we are running Kafka on AWS EC2 servers, with ZooKeeper also running on separate EC2 instances. We have created services (systemd units) for Kafka and ZooKeeper to make sure they are started if a server gets rebooted. The problem is that the ZooKeeper servers are sometimes a little late in starting, and by that time the Kafka brokers have already terminated. To deal with this issue we are planning to increase zookeeper.connection.timeout.ms on the broker side to some high value like 10 minutes. Is this a good approach? Thanks, Prabhakar
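A very large zookeeper.connection.timeout.ms mostly papers over the startup race and also delays failure detection during normal operation. Since both services are already systemd units, an alternative is to express the ordering and let systemd retry the broker until ZooKeeper is reachable. A sketch of the broker unit; unit names and paths are assumptions about the local layout:

```
# /etc/systemd/system/kafka.service (sketch)
[Unit]
Description=Apache Kafka broker
# Order after the network and the local zookeeper unit, if one exists.
After=network-online.target zookeeper.service
Wants=network-online.target zookeeper.service

[Service]
ExecStart=/opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties
# If ZK is still unreachable and the broker exits, retry instead of giving up.
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
```

After= only orders units on the same host; when ZooKeeper runs on separate EC2 instances, the `Restart=on-failure` retry loop plus a moderate zookeeper.connection.timeout.ms (tens of seconds rather than minutes) does the real work.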
Using automatic brokerId generation
Hi, I am right now using Kafka version 0.9.1.0. If I choose to enable automatic brokerId generation, and let's say one of my brokers dies and a new broker gets started with a different brokerId: is there a way to get the new broker id made part of the replica set of a partition automatically? Or do I need to run the kafka-reassign-partitions.sh script for that to happen? Thanks, PD
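For context, Kafka does not add a replacement broker to existing replica sets on its own; the reassignment tool has to be run with a plan that names the new id. A hedged sketch of the JSON plan it consumes, assuming a hypothetical topic my-topic whose partition 0 lost broker 1001 and should now use new broker 1005 alongside surviving brokers 1002 and 1003:

```shell
# Build a reassignment plan that swaps dead broker 1001 for new broker 1005.
cat > reassign.json <<'EOF'
{"version":1,"partitions":[
  {"topic":"my-topic","partition":0,"replicas":[1005,1002,1003]}
]}
EOF
cat reassign.json
# Against a live cluster you would then run (ZK address is an assumption):
#   bin/kafka-reassign-partitions.sh --zookeeper zk1:2181 \
#       --reassignment-json-file reassign.json --execute
```

The `--verify` flag of the same script reports when the data movement has completed.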
Kafka in a cloud environment
If I use automatic brokerId generation and a broker dies, and a new broker is added with a different broker id, will the replica set get updated automatically?
Using the new consumer client API in 0.9
Hi All, We are planning to use the new consumer client API in 0.9, but the Kafka documentation still says it is beta quality. Is there any timeline for when it becomes production ready? The reason we want to go with 0.9 is to avoid the rework of migrating from an 0.8.x client to 0.9 later. Note: we are running Kafka 0.9 on the server side. Thanks, Prabhakar