[
https://issues.apache.org/jira/browse/STREAMPIPES-388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17366643#comment-17366643
]
Simon Bosse commented on STREAMPIPES-388:
-----------------------------------------
Hi, we are using your Helm chart, version 0.67.
We did not change anything regarding Kafka in the Helm chart. The only deviation
from the standard configuration is that we had to deploy the backend pod and the
pipeline-elements-all* pods on the same node in order to get it working
properly.
These are the environment variables of the kafka pod:
- name: KAFKA_PORT
value: "9092"
- name: KAFKA_ZOOKEEPER_CONNECT
value: zookeeper:2181
- name: KAFKA_LISTENERS
value: PLAINTEXT://:9092
- name: KAFKA_ADVERTISED_LISTENERS
value: PLAINTEXT://kafka:9092
- name: KAFKA_INTER_BROKER_LISTENER_NAME
value: PLAINTEXT
- name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
value: PLAINTEXT:PLAINTEXT
- name: KAFKA_MESSAGE_MAX_BYTES
value: "5000012"
- name: KAFKA_FETCH_MESSAGE_MAX_BYTES
value: "5000012"
- name: KAFKA_REPLICA_FETCH_MAX_BYTES
value: "10000000"
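The UnknownHostException in the quoted stack trace fails inside InetAddress.getAllByName, which suggests the service name "kafka" (the host used in KAFKA_ADVERTISED_LISTENERS above) intermittently stops resolving from the connect worker pod. As a minimal sketch (not StreamPipes or Kafka code), a standalone check like the following can be run inside the pod around the time the pipeline stops, to confirm whether DNS resolution of the broker hostname is the problem:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Minimal DNS check for the broker hostname from KAFKA_ADVERTISED_LISTENERS.
// Run inside the connect worker pod; if this prints the UnknownHostException
// branch at the time the pipeline stops, cluster DNS (not Kafka itself) is
// the component to investigate.
public class BrokerResolveCheck {
    public static void main(String[] args) {
        String host = args.length > 0 ? args[0] : "kafka";
        try {
            // Same resolution path the Kafka AdminClient uses in the trace.
            for (InetAddress addr : InetAddress.getAllByName(host)) {
                System.out.println(host + " resolves to " + addr.getHostAddress());
            }
        } catch (UnknownHostException e) {
            System.out.println("UnknownHostException for host: " + host);
        }
    }
}
```

If the name fails to resolve only at certain times, comparing those times with CoreDNS/kube-dns pod restarts or node maintenance windows may narrow down the cause.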
> Pipeline stops writing data to mysql inside kubernetes
> -------------------------------------------------------
>
> Key: STREAMPIPES-388
> URL: https://issues.apache.org/jira/browse/STREAMPIPES-388
> Project: StreamPipes
> Issue Type: Bug
> Reporter: Simon Bosse
> Priority: Major
>
> Hi,
> we are using StreamPipes in Kubernetes, where we have an MQTT broker in the
> same namespace which is used in a pipeline together with a MySQL sink.
> The pipeline works and data is written into the database, but in the morning,
> around the same time each day, no data is written into the database anymore.
> If the pipeline is restarted manually, it works again. I looked into the logs
> of the connect worker pod and can see entries of the following kind around
> the time when the pipeline stops.
> 04:08:32.561 SP [kafka-admin-client-thread | adminclient-12] WARN o.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-12] Error connecting to node kafka:9092 (id: 1001 rack: null)
> java.net.UnknownHostException: kafka
> at java.net.InetAddress.getAllByName0(InetAddress.java:1281)
> at java.net.InetAddress.getAllByName(InetAddress.java:1193)
> at java.net.InetAddress.getAllByName(InetAddress.java:1127)
> at org.apache.kafka.clients.ClientUtils.resolve(ClientUtils.java:117)
> at org.apache.kafka.clients.ClusterConnectionStates$NodeConnectionState.moveToNextAddress(ClusterConnectionStates.java:387)
> at org.apache.kafka.clients.ClusterConnectionStates.connecting(ClusterConnectionStates.java:121)
> at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:917)
> at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:287)
> at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.sendEligibleCalls(KafkaAdminClient.java:904)
> at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1119)
> at java.lang.Thread.run(Thread.java:823)
>
> I also looked into the logs of the Kafka container, but there are no entries
> around that time.
> Do you know what could be the reason for this, or how I could debug it further?
>
>
--
This message was sent by Atlassian Jira
(v8.3.4#803005)