Finally managed to get it working, although not fully :(

First: rsyslog wasn't adding topics to Kafka because I was using "@" within topic names, which is an unsupported character, and maybe also because I omitted :9092 from the broker address.
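For anyone else hitting this, an omkafka action along these lines should avoid both problems (the topic and broker names here are just examples from my setup, adjust as needed):

module(load="omkafka")
action(
    type="omkafka"
    broker=["cluster_kafka:9092"]   # explicit port, don't rely on a default
    topic="syslog_test"             # stick to letters, digits, ".", "_" and "-"; no "@"
)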

Second: logstash is still not able to connect to kafka, because:

11:41:02.175 [[main]<kafka] WARN org.apache.kafka.clients.ClientUtils - Removing server from bootstrap.servers as DNS resolution failed: cluster_kafka:9092

Probably because "cluster_kafka" is the cluster alias, but not a valid broker name/IP, since I deployed kafka using "docker service".
I still have to figure that out.
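If it's just name resolution, a kafka input in logstash pointed at an address it can actually resolve should do it, something like this (hostname and topic are from my setup):

input {
  kafka {
    # "cluster_kafka" only resolves if logstash is attached to the same
    # docker network as the kafka service; otherwise use a published host:port
    bootstrap_servers => "cluster_kafka:9092"
    topics => ["syslog_test"]
  }
}

The other half is probably making sure the broker advertises a name clients can resolve (advertised.listeners in server.properties).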

Thanks!

On 31/01/17 at 10:22, mosto...@gmail.com wrote:
On 30/01/17 at 19:25, Andrew Griffin wrote:
I have an rsyslog -> kafka -> splunk stack working pretty well, I could probably answer a few of your questions -

You can list topics (and a lot of other stuff) on the kafka brokers themselves using the kafka-topics.sh script included with kafka, e.g.:

bin/kafka-topics.sh --zookeeper=localhost:2181 --list
only __consumer_offsets is shown, so probably I'm not adding topics correctly
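To rule out the rsyslog side, I'll try creating a topic by hand and see whether it shows up (assuming zookeeper is reachable on localhost:2181):

bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic syslog_test --partitions 1 --replication-factor 1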

bin/kafka-topics.sh --zookeeper=localhost:2181 --topic "topic" --describe
Topic: __consumer_offsets  Partition: 0   Leader: 9   Replicas: 9,10,15  Isr: 10,9,15
Topic: __consumer_offsets  Partition: 1   Leader: 10  Replicas: 10,15,9  Isr: 10,9,15
Topic: __consumer_offsets  Partition: 2   Leader: 15  Replicas: 15,9,10  Isr: 10,9,15
...
Topic: __consumer_offsets  Partition: 47  Leader: 15  Replicas: 15,10,9  Isr: 10,9,15
Topic: __consumer_offsets  Partition: 48  Leader: 9   Replicas: 9,10,15  Isr: 10,9,15
Topic: __consumer_offsets  Partition: 49  Leader: 10  Replicas: 10,15,9  Isr: 10,9,15

does this mean the cluster is properly formed?

I'd recommend using kafka-manager to manage your cluster. It'll give you a much quicker look into your topics, brokers, consumers, and throughput. It also makes creating and deleting topics easy.
It isn't able to show the cluster list, so perhaps there are problems connecting to ZK?
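For what it's worth, kafka-manager takes its zookeeper connection from kafka-manager.zkhosts in conf/application.conf (or the ZK_HOSTS environment variable in the usual docker images), so that's the first thing I'll double-check, e.g.:

kafka-manager.zkhosts="zk1:2181,zk2:2181,zk3:2181"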

If you’re not seeing your topics get created the first place I’d look is in the kafka broker logs themselves - server.log and kafkaServer.out - then work your way back from there. As you’ve found, omkafka isn’t terribly verbose when it comes to error reporting.
plenty of:
[2017-01-30 16:35:05,177] INFO [Group Metadata Manager on Broker 15]: Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)
[2017-01-30 16:45:05,177] INFO [Group Metadata Manager on Broker 15]: Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)
[2017-01-30 16:55:05,177] INFO [Group Metadata Manager on Broker 15]: Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)
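Those "Group Metadata Manager" lines are just periodic housekeeping; to look for actual problems I'll filter them out, something like this (the log path depends on the install):

grep -v GroupMetadataManager /path/to/kafka/logs/server.log | grep -Ei 'warn|error'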



For your timeout issues, the first place I'd look is the local firewall configuration. Also, in your broker=["cluster_kafka"] portion, are you including the port number for the broker (I'm assuming 9092)?
I thought the port was added by default. I'll try again.
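Before blaming the config I'll also do a quick reachability check from the logstash/rsyslog side (assuming the broker really listens on 9092):

nc -vz cluster_kafka 9092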

Thanks!
