Hi,

And on a side note, it's logged _many_ times. I had to suppress some
logging at package level :-/
Is anybody else experiencing the same?
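For anyone who wants to do the same, here's roughly what I used — a minimal logback sketch that raises the level on the two config loggers (assuming logback is your SLF4J backend; the logger names match the `o.a.k.c.p.ProducerConfig` / ConsumerConfig entries in the log below):

```xml
<!-- logback.xml: silence the verbose "values:" dumps from the Kafka clients -->
<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} %-5level [%logger{36}] ~~ %msg%n</pattern>
    </encoder>
  </appender>

  <!-- ProducerConfig/ConsumerConfig log their full config at INFO on every
       client creation; WARN keeps everything else from those classes visible -->
  <logger name="org.apache.kafka.clients.producer.ProducerConfig" level="WARN"/>
  <logger name="org.apache.kafka.clients.consumer.ConsumerConfig" level="WARN"/>

  <root level="INFO">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>
```

Targeting the two config classes rather than the whole `org.apache.kafka` package keeps the rest of the client logging intact.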

Cheers,
Francesco

On 20 February 2017 at 00:04, Simon Teles <ste...@isi.nc> wrote:

> Hello,
>
> I'm curious to know why, when the producer/consumer are created, the
> ProducerConfig and ConsumerConfig are logged twice. Is that normal?
>
> Example:
>
> 10:52:08.963 INFO  [o.a.k.s.p.i.StreamThread|<init>|l.170] ~~ Creating
> producer client for stream thread [StreamThread-1]
> 10:52:08.969 INFO  [o.a.k.c.p.ProducerConfig|logAll|l.178] ~~
> ProducerConfig values:
>     metric.reporters = []
>     metadata.max.age.ms = 300000
>     reconnect.backoff.ms = 50
>     sasl.kerberos.ticket.renew.window.factor = 0.8
>     bootstrap.servers = [kafka:9092]
>     ssl.keystore.type = JKS
>     sasl.mechanism = GSSAPI
>     max.block.ms = 60000
>     interceptor.classes = null
>     ssl.truststore.password = null
>     client.id = test-stream-2-StreamThread-1-producer
>     ssl.endpoint.identification.algorithm = null
>     request.timeout.ms = 30000
>     acks = 1
>     receive.buffer.bytes = 32768
>     ssl.truststore.type = JKS
>     retries = 0
>     ssl.truststore.location = null
>     ssl.keystore.password = null
>     send.buffer.bytes = 131072
>     compression.type = none
>     metadata.fetch.timeout.ms = 60000
>     retry.backoff.ms = 100
>     sasl.kerberos.kinit.cmd = /usr/bin/kinit
>     buffer.memory = 33554432
>     timeout.ms = 30000
>     key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
>     sasl.kerberos.service.name = null
>     sasl.kerberos.ticket.renew.jitter = 0.05
>     ssl.trustmanager.algorithm = PKIX
>     block.on.buffer.full = false
>     ssl.key.password = null
>     sasl.kerberos.min.time.before.relogin = 60000
>     connections.max.idle.ms = 540000
>     max.in.flight.requests.per.connection = 5
>     metrics.num.samples = 2
>     ssl.protocol = TLS
>     ssl.provider = null
>     ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
>     batch.size = 16384
>     ssl.keystore.location = null
>     ssl.cipher.suites = null
>     security.protocol = PLAINTEXT
>     max.request.size = 1048576
>     value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
>     ssl.keymanager.algorithm = SunX509
>     metrics.sample.window.ms = 30000
>     partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
>     linger.ms = 100
>
> 10:52:08.996 INFO  [o.a.k.c.p.ProducerConfig|logAll|l.178] ~~
> ProducerConfig values:
>     metric.reporters = []
>     metadata.max.age.ms = 300000
>     reconnect.backoff.ms = 50
>     sasl.kerberos.ticket.renew.window.factor = 0.8
>     bootstrap.servers = [kafka:9092]
>     ssl.keystore.type = JKS
>     sasl.mechanism = GSSAPI
>     max.block.ms = 60000
>     interceptor.classes = null
>     ssl.truststore.password = null
>     client.id = test-stream-2-StreamThread-1-producer
>     ssl.endpoint.identification.algorithm = null
>     request.timeout.ms = 30000
>     acks = 1
>     receive.buffer.bytes = 32768
>     ssl.truststore.type = JKS
>     retries = 0
>     ssl.truststore.location = null
>     ssl.keystore.password = null
>     send.buffer.bytes = 131072
>     compression.type = none
>     metadata.fetch.timeout.ms = 60000
>     retry.backoff.ms = 100
>     sasl.kerberos.kinit.cmd = /usr/bin/kinit
>     buffer.memory = 33554432
>     timeout.ms = 30000
>     key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
>     sasl.kerberos.service.name = null
>     sasl.kerberos.ticket.renew.jitter = 0.05
>     ssl.trustmanager.algorithm = PKIX
>     block.on.buffer.full = false
>     ssl.key.password = null
>     sasl.kerberos.min.time.before.relogin = 60000
>     connections.max.idle.ms = 540000
>     max.in.flight.requests.per.connection = 5
>     metrics.num.samples = 2
>     ssl.protocol = TLS
>     ssl.provider = null
>     ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
>     batch.size = 16384
>     ssl.keystore.location = null
>     ssl.cipher.suites = null
>     security.protocol = PLAINTEXT
>     max.request.size = 1048576
>     value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
>     ssl.keymanager.algorithm = SunX509
>     metrics.sample.window.ms = 30000
>     partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
>     linger.ms = 100
>
> It's the same for ConsumerConfig and for "Creating restore consumer
> client".
>
> Thanks,
>
> Simon
>
>


-- 
Francesco laTorre
Senior Developer
T: +44 208 742 1600
+44 203 249 8394

E: francesco.lato...@openbet.com
W: www.openbet.com
OpenBet Ltd
Chiswick Park Building 9
566 Chiswick High Rd
London
W4 5XT
