Hello!

I'm having trouble deploying a new version of a service: during the
rebalancing step the topology doesn't match what the Kafka Streams
library expects, and an NPE is thrown while creating tasks.

Background info:
I'm running a Spring Boot service that uses Kafka Streams, currently
subscribed to two topics with 10 partitions each. The service runs as
2 instances for increased reliability and load balancing.
In the next version of the service I've added another stream listening to a
different topic. The service is deployed with a rolling strategy: first
2 instances of the new version are added, and then the old version's 2
instances are shut down.
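To make the topology change concrete, here is a minimal sketch of the kind of change involved; the topic names and processing logic are placeholders, not the real service code. The new version simply registers one additional source stream on the same StreamsBuilder:

```java
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;

public class TopologyChangeSketch {

    // Old version: two source topics, 10 partitions each.
    static Topology oldTopology() {
        StreamsBuilder builder = new StreamsBuilder();
        builder.<String, String>stream("topic-a").foreach((k, v) -> { });
        builder.<String, String>stream("topic-b").foreach((k, v) -> { });
        return builder.build();
    }

    // New version: same two streams plus one more on a different topic.
    static Topology newTopology() {
        StreamsBuilder builder = new StreamsBuilder();
        builder.<String, String>stream("topic-a").foreach((k, v) -> { });
        builder.<String, String>stream("topic-b").foreach((k, v) -> { });
        builder.<String, String>stream("topic-c").foreach((k, v) -> { });
        return builder.build();
    }

    public static void main(String[] args) {
        // Describing the topologies shows the extra sub-topology in the new
        // version; no broker connection is needed for this.
        System.out.println(oldTopology().describe());
        System.out.println(newTopology().describe());
    }
}
```

Comparing the two `describe()` outputs shows that adding the stream changes the set of sub-topologies, and therefore the task ids that get assigned during a rebalance.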

When I try to deploy the new version, the partitions are revoked and
re-assigned, and during task creation the NPE occurs and Kafka Streams
goes into a failed state.

Kafka is backed by 3 brokers in a cluster.

I've tried to re-create the scenario in a simpler setting but have been
unable to do so. The rebalancing works fine when I run it locally with
dummy test topics.

I'm attaching the log from the service.

While trying to figure out what was wrong, the only conclusion I could come
up with was that Kafka Streams got confused: it built the original topology,
then during the rebalance received tasks in a different order, and did not
rebuild the internal topology before trying to create tasks. The result would
be a mismatch where the node groups associated with a task key such as 3_3
(sub-topology 3, partition 3) no longer line up with the expected
consumer/producer combination.

Hopefully you can shed some light on what could be wrong.

Regards
Johan Horvius
2019-01-14 08:54:51.855  INFO 1 [           main] trationDelegate$BeanPostProcessorChecker : Bean 'configurationPropertiesRebinderAutoConfiguration' of type [org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration$$EnhancerBySpringCGLIB$$18dc8d9e] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2019-01-14 08:54:53.232  INFO 1 [           main] ConfigServicePropertySourceLocator       : Located environment: name=notification-service, profiles=[test], label=master, version=6fd287b2d6140f58f59ca90519271d5d95d2e1bc, state=null
2019-01-14 08:54:53.382  INFO 1 [           main] NotificationApplication                  : The following profiles are active: test
2019-01-14 08:54:56.808  INFO 1 [           main] RepositoryConfigurationDelegate          : Bootstrapping Spring Data repositories in DEFAULT mode.
2019-01-14 08:54:57.049  INFO 1 [           main] RepositoryConfigurationDelegate          : Finished Spring Data repository scanning in 225ms. Found 9 repository interfaces.
2019-01-14 08:54:58.046  INFO 1 [           main] GenericScope                             : BeanFactory id=7f4306d9-958f-3b0c-8c7e-3d928ee18106
2019-01-14 08:54:58.111  INFO 1 [           main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.kafka.annotation.KafkaBootstrapConfiguration' of type [org.springframework.kafka.annotation.KafkaBootstrapConfiguration$$EnhancerBySpringCGLIB$$d0749c24] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2019-01-14 08:54:59.013  INFO 1 [           main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.transaction.annotation.ProxyTransactionManagementConfiguration' of type [org.springframework.transaction.annotation.ProxyTransactionManagementConfiguration$$EnhancerBySpringCGLIB$$fcc28aa1] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2019-01-14 08:54:59.156  INFO 1 [           main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.hateoas.config.HateoasConfiguration' of type [org.springframework.hateoas.config.HateoasConfiguration$$EnhancerBySpringCGLIB$$7c42d7d3] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2019-01-14 08:54:59.207  INFO 1 [           main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration' of type [org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration$$EnhancerBySpringCGLIB$$18dc8d9e] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2019-01-14 08:55:00.683  INFO 1 [           main] TomcatWebServer                          : Tomcat initialized with port(s): 8080 (http)
2019-01-14 08:55:00.717  INFO 1 [           main] Http11NioProtocol                        : Initializing ProtocolHandler ["http-nio-8080"]
2019-01-14 08:55:00.754  INFO 1 [           main] StandardService                          : Starting service [Tomcat]
2019-01-14 08:55:00.756  INFO 1 [           main] StandardEngine                           : Starting Servlet Engine: Apache Tomcat/9.0.13
2019-01-14 08:55:00.782  INFO 1 [           main] AprLifecycleListener                     : The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: [/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64/server:/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64:/usr/lib/jvm/java-1.8-openjdk/jre/../lib/amd64:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib]
2019-01-14 08:55:00.987  INFO 1 [           main] [/]                                      : Initializing Spring embedded WebApplicationContext
2019-01-14 08:55:00.987  INFO 1 [           main] ContextLoader                            : Root WebApplicationContext: initialization completed in 7555 ms
2019-01-14 08:55:02.285 DEBUG 1 [           main] XRayAutoConfiguration                    : Configuring XRay reporter
2019-01-14 08:55:05.524  INFO 1 [           main] HikariDataSource                         : HikariPool-1 - Starting...
2019-01-14 08:55:05.700  INFO 1 [           main] HikariDataSource                         : HikariPool-1 - Start completed.
2019-01-14 08:55:05.907  INFO 1 [           main] LogHelper                                : HHH000204: Processing PersistenceUnitInfo [
	name: default
	...]
2019-01-14 08:55:06.323  INFO 1 [           main] Version                                  : HHH000412: Hibernate Core {5.3.7.Final}
2019-01-14 08:55:06.325  INFO 1 [           main] Environment                              : HHH000206: hibernate.properties not found
2019-01-14 08:55:06.737  INFO 1 [           main] Version                                  : HCANN000001: Hibernate Commons Annotations {5.0.4.Final}
2019-01-14 08:55:07.354  INFO 1 [           main] Dialect                                  : HHH000400: Using dialect: org.hibernate.dialect.MariaDB102Dialect
2019-01-14 08:55:07.750  WARN 1 [           main] JavaTypeDescriptorRegistry               : HHH000481: Encountered Java type [class com.redacted.charging.notification.domain.chargepoint.TriggerData] for which we could not locate a JavaTypeDescriptor and which does not appear to implement equals and/or hashCode.  This can lead to significant performance problems when performing equality/dirty checking involving this Java type.  Consider registering a custom JavaTypeDescriptor or at least implementing equals/hashCode.
2019-01-14 08:55:07.763  WARN 1 [           main] JavaTypeDescriptorRegistry               : HHH000481: Encountered Java type [class com.redacted.charging.notification.domain.chargepoint.TriggerEvent] for which we could not locate a JavaTypeDescriptor and which does not appear to implement equals and/or hashCode.  This can lead to significant performance problems when performing equality/dirty checking involving this Java type.  Consider registering a custom JavaTypeDescriptor or at least implementing equals/hashCode.
2019-01-14 08:55:09.138  INFO 1 [           main] LocalContainerEntityManagerFactoryBean   : Initialized JPA EntityManagerFactory for persistence unit 'default'
2019-01-14 08:55:09.249  INFO 1 [           main] AwsConfig                                : Using default AWS Credentials Provider Chain
2019-01-14 08:55:09.863  INFO 1 [           main] SnsTopicsConfigurationService            : TopicPrefix: test-ns-
2019-01-14 08:55:10.994  INFO 1 [           main] AwsConfig                                : Using default AWS Credentials Provider Chain
2019-01-14 08:55:11.550  INFO 1 [           main] QueryTranslatorFactoryInitiator          : HHH000397: Using ASTQueryTranslatorFactory
2019-01-14 08:55:12.867  INFO 1 [           main] ThreadPoolTaskExecutor                   : Initializing ExecutorService 'publishingExecutor'
2019-01-14 08:55:14.639  WARN 1 [           main] aWebConfiguration$JpaWebMvcConfiguration : spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
2019-01-14 08:55:15.739  INFO 1 [           main] EndpointLinksResolver                    : Exposing 2 endpoint(s) beneath base path ''
2019-01-14 08:55:16.730  INFO 1 [           main] pertySourcedRequestMappingHandlerMapping : Mapped URL path [/v2/api-docs] onto method [public org.springframework.http.ResponseEntity<springfox.documentation.spring.web.json.Json> springfox.documentation.swagger2.web.Swagger2Controller.getDocumentation(java.lang.String,javax.servlet.http.HttpServletRequest)]
2019-01-14 08:55:20.117  INFO 1 [           main] ThreadPoolTaskScheduler                  : Initializing ExecutorService 'taskScheduler'
2019-01-14 08:55:21.226  INFO 1 [           main] StreamsConfig                            : StreamsConfig values:
	application.id = notification-service
	application.server =
	bootstrap.servers = // Removed as deemed possibly sensitive
	buffered.records.per.partition = 1000
	cache.max.bytes.buffering = 10485760
	client.id =
	commit.interval.ms = 30000
	connections.max.idle.ms = 540000
	default.deserialization.exception.handler = class org.apache.kafka.streams.errors.LogAndFailExceptionHandler
	default.key.serde = class org.apache.kafka.common.serialization.Serdes$StringSerde
	default.production.exception.handler = class org.apache.kafka.streams.errors.DefaultProductionExceptionHandler
	default.timestamp.extractor = class org.apache.kafka.streams.processor.WallclockTimestampExtractor
	default.value.serde = class org.apache.kafka.common.serialization.Serdes$StringSerde
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	num.standby.replicas = 0
	num.stream.threads = 1
	partition.grouper = class org.apache.kafka.streams.processor.DefaultPartitionGrouper
	poll.ms = 100
	processing.guarantee = at_least_once
	receive.buffer.bytes = 32768
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	replication.factor = 1
	request.timeout.ms = 40000
	retries = 0
	retry.backoff.ms = 100
	rocksdb.config.setter = null
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	state.cleanup.delay.ms = 600000
	state.dir = /tmp/kafka-streams
	topology.optimization = none
	upgrade.from = null
	windowstore.changelog.additional.retention.ms = 86400000
2019-01-14 08:55:21.279  INFO 1 [           main] AdminClientConfig                        : AdminClientConfig values:
	bootstrap.servers = // Removed as deemed possibly sensitive
	client.id = notification-service-5f9bd358-e346-4a46-9ecd-0da69b27c8ac-admin
	connections.max.idle.ms = 300000
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	receive.buffer.bytes = 65536
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 120000
	retries = 5
	retry.backoff.ms = 100
	sasl.client.callback.handler.class = null
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = https
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
2019-01-14 08:55:21.447  INFO 1 [           main] AppInfoParser                            : Kafka version : 2.0.1
2019-01-14 08:55:21.448  INFO 1 [           main] AppInfoParser                            : Kafka commitId : fa14705e51bd2ce5
2019-01-14 08:55:21.468  INFO 1 [           main] StreamThread                             : stream-thread [notification-service-5f9bd358-e346-4a46-9ecd-0da69b27c8ac-StreamThread-1] Creating restore consumer client
2019-01-14 08:55:21.481  INFO 1 [           main] ConsumerConfig                           : ConsumerConfig values:
	auto.commit.interval.ms = 5000
	auto.offset.reset = none
	bootstrap.servers = // Removed as deemed possibly sensitive
	check.crcs = true
	client.id = notification-service-5f9bd358-e346-4a46-9ecd-0da69b27c8ac-StreamThread-1-restore-consumer
	connections.max.idle.ms = 540000
	default.api.timeout.ms = 60000
	enable.auto.commit = false
	exclude.internal.topics = true
	fetch.max.bytes = 52428800
	fetch.max.wait.ms = 500
	fetch.min.bytes = 1
	group.id =
	heartbeat.interval.ms = 3000
	interceptor.classes = []
	internal.leave.group.on.close = false
	isolation.level = read_uncommitted
	key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
	max.partition.fetch.bytes = 1048576
	max.poll.interval.ms = 2147483647
	max.poll.records = 1000
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
	receive.buffer.bytes = 65536
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 30000
	retry.backoff.ms = 100
	sasl.client.callback.handler.class = null
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	session.timeout.ms = 10000
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = https
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
2019-01-14 08:55:21.731  INFO 1 [           main] AppInfoParser                            : Kafka version : 2.0.1
2019-01-14 08:55:21.734  INFO 1 [           main] AppInfoParser                            : Kafka commitId : fa14705e51bd2ce5
2019-01-14 08:55:21.743  INFO 1 [           main] StreamThread                             : stream-thread [notification-service-5f9bd358-e346-4a46-9ecd-0da69b27c8ac-StreamThread-1] Creating shared producer client
2019-01-14 08:55:21.753  INFO 1 [           main] ProducerConfig                           : ProducerConfig values:
	acks = 1
	batch.size = 16384
	bootstrap.servers = // Removed as deemed possibly sensitive
	buffer.memory = 33554432
	client.id = notification-service-5f9bd358-e346-4a46-9ecd-0da69b27c8ac-StreamThread-1-producer
	compression.type = none
	connections.max.idle.ms = 540000
	enable.idempotence = false
	interceptor.classes = []
	key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
	linger.ms = 100
	max.block.ms = 60000
	max.in.flight.requests.per.connection = 5
	max.request.size = 1048576
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
	receive.buffer.bytes = 32768
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 30000
	retries = 10
	retry.backoff.ms = 100
	sasl.client.callback.handler.class = null
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = https
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	transaction.timeout.ms = 60000
	transactional.id = null
	value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
2019-01-14 08:55:21.836  INFO 1 [           main] AppInfoParser                            : Kafka version : 2.0.1
2019-01-14 08:55:21.836  INFO 1 [           main] AppInfoParser                            : Kafka commitId : fa14705e51bd2ce5
2019-01-14 08:55:21.871  INFO 1 [           main] StreamThread                             : stream-thread [notification-service-5f9bd358-e346-4a46-9ecd-0da69b27c8ac-StreamThread-1] Creating consumer client
2019-01-14 08:55:21.882  INFO 1 [           main] AdminClientConfig                        : AdminClientConfig values:
	bootstrap.servers = // Removed as deemed possibly sensitive
	client.id =
	connections.max.idle.ms = 300000
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	receive.buffer.bytes = 65536
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 120000
	retries = 5
	retry.backoff.ms = 100
	sasl.client.callback.handler.class = null
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = https
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
2019-01-14 08:55:21.883  INFO 1 [           main] ConsumerConfig                           : ConsumerConfig values:
	auto.commit.interval.ms = 5000
	auto.offset.reset = latest
	bootstrap.servers = // Removed as deemed possibly sensitive
	check.crcs = true
	client.id = notification-service-5f9bd358-e346-4a46-9ecd-0da69b27c8ac-StreamThread-1-consumer
	connections.max.idle.ms = 540000
	default.api.timeout.ms = 60000
	enable.auto.commit = false
	exclude.internal.topics = true
	fetch.max.bytes = 52428800
	fetch.max.wait.ms = 500
	fetch.min.bytes = 1
	group.id = notification-service
	heartbeat.interval.ms = 3000
	interceptor.classes = []
	internal.leave.group.on.close = false
	isolation.level = read_uncommitted
	key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
	max.partition.fetch.bytes = 1048576
	max.poll.interval.ms = 2147483647
	max.poll.records = 1000
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	partition.assignment.strategy = [org.apache.kafka.streams.processor.internals.StreamsPartitionAssignor]
	receive.buffer.bytes = 65536
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 30000
	retry.backoff.ms = 100
	sasl.client.callback.handler.class = null
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	session.timeout.ms = 10000
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = https
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	value.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
2019-01-14 08:55:21.936  INFO 1 [           main] StreamsConfig                            : StreamsConfig values:
	application.id = notification-service
	application.server =
	bootstrap.servers = // Removed as deemed possibly sensitive
	buffered.records.per.partition = 1000
	cache.max.bytes.buffering = 10485760
	client.id = notification-service-5f9bd358-e346-4a46-9ecd-0da69b27c8ac-StreamThread-1-consumer
	commit.interval.ms = 30000
	connections.max.idle.ms = 540000
	default.deserialization.exception.handler = class org.apache.kafka.streams.errors.LogAndFailExceptionHandler
	default.key.serde = class org.apache.kafka.common.serialization.Serdes$ByteArraySerde
	default.production.exception.handler = class org.apache.kafka.streams.errors.DefaultProductionExceptionHandler
	default.timestamp.extractor = class org.apache.kafka.streams.processor.FailOnInvalidTimestamp
	default.value.serde = class org.apache.kafka.common.serialization.Serdes$ByteArraySerde
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	num.standby.replicas = 0
	num.stream.threads = 1
	partition.grouper = class org.apache.kafka.streams.processor.DefaultPartitionGrouper
	poll.ms = 100
	processing.guarantee = at_least_once
	receive.buffer.bytes = 32768
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	replication.factor = 1
	request.timeout.ms = 40000
	retries = 0
	retry.backoff.ms = 100
	rocksdb.config.setter = null
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	state.cleanup.delay.ms = 600000
	state.dir = /tmp/kafka-streams
	topology.optimization = none
	upgrade.from = null
	windowstore.changelog.additional.retention.ms = 86400000
2019-01-14 08:55:21.941  INFO 1 [           main] AdminClientConfig                        : AdminClientConfig values:
	bootstrap.servers = // Removed as deemed possibly sensitive
	client.id = dummy-admin
	connections.max.idle.ms = 300000
	metadata.max.age.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	receive.buffer.bytes = 65536
	reconnect.backoff.max.ms = 1000
	reconnect.backoff.ms = 50
	request.timeout.ms = 120000
	retries = 5
	retry.backoff.ms = 100
	sasl.client.callback.handler.class = null
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	send.buffer.bytes = 131072
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
	ssl.endpoint.identification.algorithm = https
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLS
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
2019-01-14 08:55:21.956  WARN 1 [           main] ConsumerConfig                           : The configuration 'admin.retries' was supplied but isn't a known config.
2019-01-14 08:55:21.956  INFO 1 [           main] AppInfoParser                            : Kafka version : 2.0.1
2019-01-14 08:55:21.956  INFO 1 [           main] AppInfoParser                            : Kafka commitId : fa14705e51bd2ce5
2019-01-14 08:55:21.977  INFO 1 [-StreamThread-1] StreamThread                             : stream-thread [notification-service-5f9bd358-e346-4a46-9ecd-0da69b27c8ac-StreamThread-1] Starting
2019-01-14 08:55:21.980  INFO 1 [-StreamThread-1] StreamThread                             : stream-thread [notification-service-5f9bd358-e346-4a46-9ecd-0da69b27c8ac-StreamThread-1] State transition from CREATED to RUNNING
2019-01-14 08:55:21.981  INFO 1 [           main] KafkaStreams                             : stream-client [notification-service-5f9bd358-e346-4a46-9ecd-0da69b27c8ac] Started Streams client
2019-01-14 08:55:21.981  INFO 1 [           main] DocumentationPluginsBootstrapper         : Context refreshed
2019-01-14 08:55:22.067  INFO 1 [           main] DocumentationPluginsBootstrapper         : Found 1 custom documentation plugin(s)
2019-01-14 08:55:22.073  INFO 1 [-StreamThread-1] Metadata                                 : Cluster ID: p5_B-DLzQ360W_XUxGrCjQ
2019-01-14 08:55:22.074  INFO 1 [-StreamThread-1] AbstractCoordinator                      : [Consumer clientId=notification-service-5f9bd358-e346-4a46-9ecd-0da69b27c8ac-StreamThread-1-consumer, groupId=notification-service] Discovered group coordinator <redacted>.com:9092 (id: 2147482644 rack: null)
2019-01-14 08:55:22.106  INFO 1 [-StreamThread-1] ConsumerCoordinator                      : [Consumer clientId=notification-service-5f9bd358-e346-4a46-9ecd-0da69b27c8ac-StreamThread-1-consumer, groupId=notification-service] Revoking previously assigned partitions []
2019-01-14 08:55:22.106  INFO 1 [-StreamThread-1] StreamThread                             : stream-thread [notification-service-5f9bd358-e346-4a46-9ecd-0da69b27c8ac-StreamThread-1] State transition from RUNNING to PARTITIONS_REVOKED
2019-01-14 08:55:22.107  INFO 1 [-StreamThread-1] KafkaStreams                             : stream-client [notification-service-5f9bd358-e346-4a46-9ecd-0da69b27c8ac] State transition from RUNNING to REBALANCING
2019-01-14 08:55:22.107  INFO 1 [-StreamThread-1] StreamThread                             : stream-thread [notification-service-5f9bd358-e346-4a46-9ecd-0da69b27c8ac-StreamThread-1] partition revocation took 0 ms.
	suspended active tasks: []
	suspended standby tasks: []
2019-01-14 08:55:22.107  INFO 1 [-StreamThread-1] AbstractCoordinator                      : [Consumer clientId=notification-service-5f9bd358-e346-4a46-9ecd-0da69b27c8ac-StreamThread-1-consumer, groupId=notification-service] (Re-)joining group
2019-01-14 08:55:22.279  INFO 1 [           main] ApiListingReferenceScanner               : Scanning for api listing references
2019-01-14 08:55:22.952  INFO 1 [-StreamThread-1] AbstractCoordinator                      : [Consumer clientId=notification-service-5f9bd358-e346-4a46-9ecd-0da69b27c8ac-StreamThread-1-consumer, groupId=notification-service] Successfully joined group with generation 75
2019-01-14 08:55:22.969  INFO 1 [-StreamThread-1] ConsumerCoordinator                      : [Consumer clientId=notification-service-5f9bd358-e346-4a46-9ecd-0da69b27c8ac-StreamThread-1-consumer, groupId=notification-service] Setting newly assigned partitions [charging-operations-service-connector-status-events-9, charging-operations-service-charge-point-online-offline-events-7, charging-operations-service-charge-point-online-offline-events-6, charging-operations-service-charge-point-online-offline-events-9, charging-operations-service-charge-point-online-offline-events-8, asset-service-chargepoints-9, asset-service-chargepoints-8, asset-service-chargepoints-7, asset-service-chargepoints-6, charging-operations-service-charge-point-error-events-6, charging-operations-service-connector-status-events-7, charging-operations-service-connector-status-events-8, charging-operations-service-connector-status-events-5, charging-operations-service-connector-status-events-6, asset-service-chargepoints-5, asset-service-chargepoints-4, asset-service-chargepoints-3]
2019-01-14 08:55:22.969  INFO 1 [-StreamThread-1] StreamThread                             : stream-thread [notification-service-5f9bd358-e346-4a46-9ecd-0da69b27c8ac-StreamThread-1] State transition from PARTITIONS_REVOKED to PARTITIONS_ASSIGNED
2019-01-14 08:55:23.076 ERROR 1 [-StreamThread-1] StreamThread                             : stream-thread [notification-service-5f9bd358-e346-4a46-9ecd-0da69b27c8ac-StreamThread-1] Error caught during partition assignment, will abort the current process and re-throw at the end of rebalance: {}
java.lang.NullPointerException: null
	at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:205) ~[kafka-streams-2.0.1.jar!/:?]
	at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:151) ~[kafka-streams-2.0.1.jar!/:?]
	at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:428) ~[kafka-streams-2.0.1.jar!/:?]
	at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:388) ~[kafka-streams-2.0.1.jar!/:?]
	at org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.createTasks(StreamThread.java:373) ~[kafka-streams-2.0.1.jar!/:?]
	at org.apache.kafka.streams.processor.internals.TaskManager.addStreamTasks(TaskManager.java:148) ~[kafka-streams-2.0.1.jar!/:?]
	at org.apache.kafka.streams.processor.internals.TaskManager.createTasks(TaskManager.java:107) ~[kafka-streams-2.0.1.jar!/:?]
	at org.apache.kafka.streams.processor.internals.StreamThread$RebalanceListener.onPartitionsAssigned(StreamThread.java:270) ~[kafka-streams-2.0.1.jar!/:?]
	at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:283) ~[kafka-clients-2.0.1.jar!/:?]
	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:422) ~[kafka-clients-2.0.1.jar!/:?]
	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:352) ~[kafka-clients-2.0.1.jar!/:?]
	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:337) ~[kafka-clients-2.0.1.jar!/:?]
	at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:343) ~[kafka-clients-2.0.1.jar!/:?]
	at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1218) ~[kafka-clients-2.0.1.jar!/:?]
	at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1175) ~[kafka-clients-2.0.1.jar!/:?]
	at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1154) ~[kafka-clients-2.0.1.jar!/:?]
	at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:861) ~[kafka-streams-2.0.1.jar!/:?]
	at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:810) ~[kafka-streams-2.0.1.jar!/:?]
	at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:767) ~[kafka-streams-2.0.1.jar!/:?]
	at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:736) ~[kafka-streams-2.0.1.jar!/:?]
2019-01-14 08:55:23.076  INFO 1 [-StreamThread-1] StreamThread                             : stream-thread [notification-service-5f9bd358-e346-4a46-9ecd-0da69b27c8ac-StreamThread-1] partition assignment took 107 ms.
	current active tasks: []
	current standby tasks: []
	previous active tasks: []
2019-01-14 08:55:23.094  INFO 1 [-StreamThread-1] StreamThread                             : stream-thread [notification-service-5f9bd358-e346-4a46-9ecd-0da69b27c8ac-StreamThread-1] State transition from PARTITIONS_ASSIGNED to PENDING_SHUTDOWN
2019-01-14 08:55:23.094  INFO 1 [-StreamThread-1] StreamThread                             : stream-thread [notification-service-5f9bd358-e346-4a46-9ecd-0da69b27c8ac-StreamThread-1] Shutting down
2019-01-14 08:55:23.094  INFO 1 [-StreamThread-1] KafkaProducer                            : [Producer clientId=notification-service-5f9bd358-e346-4a46-9ecd-0da69b27c8ac-StreamThread-1-producer] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.
2019-01-14 08:55:23.133  INFO 1 [-StreamThread-1] StreamThread                             : stream-thread [notification-service-5f9bd358-e346-4a46-9ecd-0da69b27c8ac-StreamThread-1] State transition from PENDING_SHUTDOWN to DEAD
Exception in thread "notification-service-5f9bd358-e346-4a46-9ecd-0da69b27c8ac-StreamThread-1" org.apache.kafka.streams.errors.StreamsException: stream-thread [notification-service-5f9bd358-e346-4a46-9ecd-0da69b27c8ac-StreamThread-1] Failed to rebalance.
	at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:870)
	at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:810)
	at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:767)
	at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:736)
2019-01-14 08:55:23.134  INFO 1 [-StreamThread-1] KafkaStreams                             : stream-client [notification-service-5f9bd358-e346-4a46-9ecd-0da69b27c8ac] State transition from REBALANCING to ERROR
Caused by: java.lang.NullPointerException
	at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:205)
	at org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:151)
	at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:428)
	at org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:388)
	at org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.createTasks(StreamThread.java:373)
	at org.apache.kafka.streams.processor.internals.TaskManager.addStreamTasks(TaskManager.java:148)
	at org.apache.kafka.streams.processor.internals.TaskManager.createTasks(TaskManager.java:107)
	at org.apache.kafka.streams.processor.internals.StreamThread$RebalanceListener.onPartitionsAssigned(StreamThread.java:270)
	at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:283)
	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:422)
	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:352)
	at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:337)
	at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:343)
	at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1218)
	at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1175)
	at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1154)
	at org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:861)
	... 3 more
2019-01-14 08:55:23.134  WARN 1 [-StreamThread-1] KafkaStreams                             : stream-client [notification-service-5f9bd358-e346-4a46-9ecd-0da69b27c8ac] All stream threads have died. The instance will be in error state and should be closed.
2019-01-14 08:55:23.134  INFO 1 [-StreamThread-1] StreamThread                             : stream-thread [notification-service-5f9bd358-e346-4a46-9ecd-0da69b27c8ac-StreamThread-1] Shutdown complete
2019-01-14 08:55:23.212  INFO 1 [           main] Http11NioProtocol                        : Starting ProtocolHandler ["http-nio-8080"]
2019-01-14 08:55:23.216  INFO 1 [           main] NioSelectorPool                          : Using a shared selector for servlet write/read
2019-01-14 08:55:23.275  INFO 1 [           main] TomcatWebServer                          : Tomcat started on port(s): 8080 (http) with context path ''
2019-01-14 08:55:23.286  INFO 1 [           main] NotificationApplication                  : Started NotificationApplication in 33.961 seconds (JVM running for 36.798)
2019-01-14 08:55:23.743  INFO 1 [           main] StartupAwsSetup                          : Created topic test-ns-chargepoints-cud arn:aws:sns:eu-west-1:217052405714:test-ns-chargepoints-cud
2019-01-14 08:55:23.795  INFO 1 [           main] StartupAwsSetup                          : Created topic test-ns-chargepoints-status arn:aws:sns:eu-west-1:217052405714:test-ns-chargepoints-status
2019-01-14 08:55:23.829  INFO 1 [           main] StartupAwsSetup                          : Created topic test-ns-connectors-status arn:aws:sns:eu-west-1:217052405714:test-ns-connectors-status
2019-01-14 08:55:27.147  INFO 1 [nio-8080-exec-1] [/]                                      : Initializing Spring DispatcherServlet 'dispatcherServlet'
2019-01-14 08:55:27.147  INFO 1 [nio-8080-exec-1] DispatcherServlet                        : Initializing Servlet 'dispatcherServlet'
2019-01-14 08:55:27.212  INFO 1 [nio-8080-exec-1] DispatcherServlet                        : Completed initialization in 65 ms
2019-01-14 08:55:27.664 ERROR 1 [nio-8080-exec-1] KafkaStreamHealthIndicator               : KafkaStreams in state ERROR during health check, health response will be DOWN!
2019-01-14 08:55:27.664  INFO 1 [nio-8080-exec-1] KafkaStreams                             : stream-client [notification-service-5f9bd358-e346-4a46-9ecd-0da69b27c8ac] State transition from ERROR to PENDING_SHUTDOWN
2019-01-14 08:55:27.668  INFO 1 [ms-close-thread] StreamThread                             : stream-thread [notification-service-5f9bd358-e346-4a46-9ecd-0da69b27c8ac-StreamThread-1] Informed to shut down
2019-01-14 08:55:27.674  INFO 1 [ms-close-thread] KafkaStreams                             : stream-client [notification-service-5f9bd358-e346-4a46-9ecd-0da69b27c8ac] State transition from PENDING_SHUTDOWN to NOT_RUNNING
2019-01-14 08:55:27.677  INFO 1 [nio-8080-exec-1] KafkaStreams                             : stream-client [notification-service-5f9bd358-e346-4a46-9ecd-0da69b27c8ac] Streams client stopped completely
2019-01-14 08:55:27.683  INFO 1 [nio-8080-exec-1] StateDirectory                           : stream-thread [http-nio-8080-exec-1] Deleting state directory 3_3 for task 3_3 as user calling cleanup.
2019-01-14 08:55:27.690  INFO 1 [nio-8080-exec-1] StateDirectory                           : stream-thread [http-nio-8080-exec-1] Deleting state directory 0_6 for task 0_6 as user calling cleanup.
2019-01-14 08:55:32.807 ERROR 1 [nio-8080-exec-2] KafkaStreamHealthIndicator               : No KafkaStreams during health check, health response will be DOWN!
2019-01-14 08:55:37.884 ERROR 1 [nio-8080-exec-3] KafkaStreamHealthIndicator               : No KafkaStreams during health check, health response will be DOWN!
[... the same "No KafkaStreams during health check, health response will be DOWN!" ERROR line repeats roughly every 5 seconds on the nio-8080 exec threads ...]
2019-01-14 08:59:54.915 ERROR 1 [nio-8080-exec-4] KafkaStreamHealthIndicator               : No KafkaStreams during health check, health response will be DOWN!
2019-01-14 08:59:55.583  INFO 1 [       Thread-8] ThreadPoolTaskScheduler                  : Shutting down ExecutorService 'taskScheduler'
2019-01-14 08:59:55.593  INFO 1 [       Thread-8] ThreadPoolTaskExecutor                   : Shutting down ExecutorService 'publishingExecutor'
2019-01-14 08:59:55.602  INFO 1 [       Thread-8] LocalContainerEntityManagerFactoryBean   : Closing JPA EntityManagerFactory for persistence unit 'default'
2019-01-14 08:59:55.611  INFO 1 [       Thread-8] HikariDataSource                         : HikariPool-1 - Shutdown initiated...
2019-01-14 08:59:55.626  INFO 1 [       Thread-8] HikariDataSource                         : HikariPool-1 - Shutdown completed.
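For context on the DOWN responses above: the log shows the KafkaStreams client moving REBALANCING → ERROR → PENDING_SHUTDOWN → NOT_RUNNING, with the custom KafkaStreamHealthIndicator reporting DOWN both when the client is in ERROR and when the client object is gone ("No KafkaStreams during health check"). The sketch below is only an illustrative model of that health logic — the enum and `health` method are hypothetical names, not the real KafkaStreams API.

```java
import java.util.EnumSet;

// Illustrative model of the health-check behavior visible in the log.
// State mirrors the KafkaStreams client states the log transitions through;
// health() mirrors the indicator: DOWN if the client is absent or has
// reached a failed/terminal state, UP otherwise. Hypothetical sketch only.
class StreamsHealthSketch {
    enum State { CREATED, REBALANCING, RUNNING, PENDING_SHUTDOWN, NOT_RUNNING, ERROR }

    static String health(State state) {
        if (state == null) {
            // Corresponds to "No KafkaStreams during health check"
            return "DOWN";
        }
        // ERROR ("All stream threads have died") and NOT_RUNNING are DOWN
        EnumSet<State> down = EnumSet.of(State.ERROR, State.NOT_RUNNING);
        return down.contains(state) ? "DOWN" : "UP";
    }

    public static void main(String[] args) {
        System.out.println(health(State.RUNNING)); // UP
        System.out.println(health(State.ERROR));   // DOWN
        System.out.println(health(null));          // DOWN
    }
}
```

Once the health check closed the failed client (the PENDING_SHUTDOWN → NOT_RUNNING transition at 08:55:27), every subsequent check hit the null case, which is why the DOWN lines repeat until the pod is terminated.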
