Re: Reduce Kafka Client logging

2017-09-08 Thread Raghav
Thanks, Kamal.

On Fri, Sep 8, 2017 at 4:10 AM, Kamal Chandraprakash <
kamal.chandraprak...@gmail.com> wrote:

> add this line at the end of your log4j.properties:
>
> log4j.logger.org.apache.kafka.clients.producer=WARN
>
> On Thu, Sep 7, 2017 at 5:27 PM, Raghav  wrote:
>
> > Hi Viktor
> >
> > Can you please share the log4j config snippet that I should use. My Java
> > code's current log4j config looks like this. How should I add the new
> > entry that you mentioned? Thanks.
> >
> >
> > log4j.rootLogger=INFO, STDOUT
> >
> > log4j.appender.STDOUT=org.apache.log4j.ConsoleAppender
> > log4j.appender.STDOUT.layout=org.apache.log4j.PatternLayout
> > log4j.appender.STDOUT.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p
> > %c{2}:%L %m%n
> >
> > log4j.appender.file=org.apache.log4j.RollingFileAppender
> > log4j.appender.file.File=logfile.log
> > log4j.appender.file.layout=org.apache.log4j.PatternLayout
> > log4j.appender.file.layout.ConversionPattern=%d{dd-MM- HH:mm:ss}
> %-5p
> > %c{1}:%L - %m%n
> >
> > On Thu, Sep 7, 2017 at 2:34 AM, Viktor Somogyi 
> > wrote:
> >
> > > Hi Raghav,
> > >
> > > I think it is enough to raise the logging level
> > > of org.apache.kafka.clients.producer.ProducerConfig to WARN in log4j.
> > > Also, I'd like to mention that if possible, you shouldn't recreate the
> > > Kafka producer each time. The protocol is designed for long-lived
> > > connections, and recreating the connection each time puts pressure on
> > > the TCP layer (connection setup is expensive) and on Kafka itself,
> > > which may result in broker failures (typically by exceeding the
> > > maximum allowed number of file descriptors).
> > >
> > > HTH,
> > > Viktor
> > >
> > > On Thu, Sep 7, 2017 at 7:35 AM, Raghav  wrote:
> > >
> > > > Due to the nature of the code, I have to open a connection to a
> > > > different Kafka broker each time and send one message. We have
> > > > several Kafka brokers, so my client log is full of the following
> > > > output. What log settings should I use in log4j just for the Kafka
> > > > producer logs?
> > > >
> > > >
> > > > 17/09/07 04:44:04 INFO producer.ProducerConfig:180 ProducerConfig
> > values:
> > > > acks = all
> > > > batch.size = 16384
> > > > block.on.buffer.full = false
> > > > bootstrap.servers = [10.10.10.5:]
> > > > buffer.memory = 33554432
> > > > client.id =
> > > > compression.type = none
> > > > connections.max.idle.ms = 54
> > > > interceptor.classes = null
> > > > key.serializer = class
> > > > org.apache.kafka.common.serialization.StringSerializer
> > > > linger.ms = 1
> > > > max.block.ms = 5000
> > > > max.in.flight.requests.per.connection = 5
> > > > max.request.size = 1048576
> > > > metadata.fetch.timeout.ms = 6
> > > > metadata.max.age.ms = 30
> > > > metric.reporters = []
> > > > metrics.num.samples = 2
> > > > metrics.sample.window.ms = 3
> > > > partitioner.class = class
> > > > org.apache.kafka.clients.producer.internals.DefaultPartitioner
> > > > receive.buffer.bytes = 32768
> > > > reconnect.backoff.ms = 50
> > > > request.timeout.ms = 5000
> > > > retries = 0
> > > > retry.backoff.ms = 100
> > > > sasl.kerberos.kinit.cmd = /usr/bin/kinit
> > > > sasl.kerberos.min.time.before.relogin = 6
> > > > sasl.kerberos.service.name = null
> > > > sasl.kerberos.ticket.renew.jitter = 0.05
> > > > sasl.kerberos.ticket.renew.window.factor = 0.8
> > > > sasl.mechanism = GSSAPI
> > > > security.protocol = PLAINTEXT
> > > > send.buffer.bytes = 131072
> > > > ssl.cipher.suites = null
> > > > ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
> > > > ssl.endpoint.identification.algorithm = null
> > > > ssl.key.password = null
> > > > ssl.keymanager.algorithm = SunX509
> > > > ssl.keystore.location = null
> > > > ssl.keystore.password = null
> > > > ssl.keystore.type = JKS
> > > > ssl.protocol = TLS
> > > > ssl.provider = null
> > > > ssl.secure.random.implementation = null
> > > > ssl.trustmanager.algorithm = PKIX
> > > > ssl.truststore.location = null
> > > > ssl.truststore.password = null
> > > > ssl.truststore.type = JKS
> > > > timeout.ms = 3
> > > > value.serializer = class
> > > > org.apache.kafka.common.serialization.StringSerializer
> > > >
> > > > On Wed, Sep 6, 2017 at 9:37 PM, Jaikiran Pai <
> jai.forums2...@gmail.com
> > >
> > > > wrote:
> > > >
> > > > > Can you post the exact log messages that you are seeing?
> > > > >
> > > > > -Jaikiran
> > > > >
> > > > >
> > > > >
> > > > > On 07/09/17 7:55 AM, Raghav wrote:
> > > > >
> > > > >> Hi
> > > > >>
> > > > >> My Java code prints the Kafka config every time it does a send,
> > > > >> which makes the log very, very verbose.

Re: Reduce Kafka Client logging

2017-09-08 Thread Kamal Chandraprakash
add this line at the end of your log4j.properties:

log4j.logger.org.apache.kafka.clients.producer=WARN
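
For reference, a sketch of how that line could slot into the log4j.properties Raghav posted (appender setup copied from his snippet, trimmed to the console appender; adjust names to match your own config):

```properties
# Root logger stays at INFO so application logs are unaffected
log4j.rootLogger=INFO, STDOUT

log4j.appender.STDOUT=org.apache.log4j.ConsoleAppender
log4j.appender.STDOUT.layout=org.apache.log4j.PatternLayout
log4j.appender.STDOUT.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{2}:%L %m%n

# Raise the threshold for the producer package only: INFO-level messages
# such as the "ProducerConfig values:" dump are suppressed, while
# warnings and errors from the producer still come through.
log4j.logger.org.apache.kafka.clients.producer=WARN
```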

On Thu, Sep 7, 2017 at 5:27 PM, Raghav  wrote:

> Hi Viktor
>
> Can you please share the log4j config snippet that I should use. My Java
> code's current log4j config looks like this. How should I add the new entry
> that you mentioned? Thanks.
>
>
> log4j.rootLogger=INFO, STDOUT
>
> log4j.appender.STDOUT=org.apache.log4j.ConsoleAppender
> log4j.appender.STDOUT.layout=org.apache.log4j.PatternLayout
> log4j.appender.STDOUT.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p
> %c{2}:%L %m%n
>
> log4j.appender.file=org.apache.log4j.RollingFileAppender
> log4j.appender.file.File=logfile.log
> log4j.appender.file.layout=org.apache.log4j.PatternLayout
> log4j.appender.file.layout.ConversionPattern=%d{dd-MM- HH:mm:ss} %-5p
> %c{1}:%L - %m%n
>
> On Thu, Sep 7, 2017 at 2:34 AM, Viktor Somogyi 
> wrote:
>
> > Hi Raghav,
> >
> > I think it is enough to raise the logging level
> > of org.apache.kafka.clients.producer.ProducerConfig to WARN in log4j.
> > Also, I'd like to mention that if possible, you shouldn't recreate the
> > Kafka producer each time. The protocol is designed for long-lived
> > connections, and recreating the connection each time puts pressure on the
> > TCP layer (connection setup is expensive) and on Kafka itself, which may
> > result in broker failures (typically by exceeding the maximum allowed
> > number of file descriptors).
> >
> > HTH,
> > Viktor
> >
> > On Thu, Sep 7, 2017 at 7:35 AM, Raghav  wrote:
> >
> > > Due to the nature of the code, I have to open a connection to a
> > > different Kafka broker each time and send one message. We have several
> > > Kafka brokers, so my client log is full of the following output. What
> > > log settings should I use in log4j just for the Kafka producer logs?
> > >
> > >
> > > 17/09/07 04:44:04 INFO producer.ProducerConfig:180 ProducerConfig
> values:
> > > acks = all
> > > batch.size = 16384
> > > block.on.buffer.full = false
> > > bootstrap.servers = [10.10.10.5:]
> > > buffer.memory = 33554432
> > > client.id =
> > > compression.type = none
> > > connections.max.idle.ms = 54
> > > interceptor.classes = null
> > > key.serializer = class
> > > org.apache.kafka.common.serialization.StringSerializer
> > > linger.ms = 1
> > > max.block.ms = 5000
> > > max.in.flight.requests.per.connection = 5
> > > max.request.size = 1048576
> > > metadata.fetch.timeout.ms = 6
> > > metadata.max.age.ms = 30
> > > metric.reporters = []
> > > metrics.num.samples = 2
> > > metrics.sample.window.ms = 3
> > > partitioner.class = class
> > > org.apache.kafka.clients.producer.internals.DefaultPartitioner
> > > receive.buffer.bytes = 32768
> > > reconnect.backoff.ms = 50
> > > request.timeout.ms = 5000
> > > retries = 0
> > > retry.backoff.ms = 100
> > > sasl.kerberos.kinit.cmd = /usr/bin/kinit
> > > sasl.kerberos.min.time.before.relogin = 6
> > > sasl.kerberos.service.name = null
> > > sasl.kerberos.ticket.renew.jitter = 0.05
> > > sasl.kerberos.ticket.renew.window.factor = 0.8
> > > sasl.mechanism = GSSAPI
> > > security.protocol = PLAINTEXT
> > > send.buffer.bytes = 131072
> > > ssl.cipher.suites = null
> > > ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
> > > ssl.endpoint.identification.algorithm = null
> > > ssl.key.password = null
> > > ssl.keymanager.algorithm = SunX509
> > > ssl.keystore.location = null
> > > ssl.keystore.password = null
> > > ssl.keystore.type = JKS
> > > ssl.protocol = TLS
> > > ssl.provider = null
> > > ssl.secure.random.implementation = null
> > > ssl.trustmanager.algorithm = PKIX
> > > ssl.truststore.location = null
> > > ssl.truststore.password = null
> > > ssl.truststore.type = JKS
> > > timeout.ms = 3
> > > value.serializer = class
> > > org.apache.kafka.common.serialization.StringSerializer
> > >
> > > On Wed, Sep 6, 2017 at 9:37 PM, Jaikiran Pai  >
> > > wrote:
> > >
> > > > Can you post the exact log messages that you are seeing?
> > > >
> > > > -Jaikiran
> > > >
> > > >
> > > >
> > > > On 07/09/17 7:55 AM, Raghav wrote:
> > > >
> > > >> Hi
> > > >>
> > > >> My Java code prints the Kafka config every time it does a send,
> > > >> which makes the log very, very verbose.
> > > >>
> > > >> How can I reduce the Kafka client (producer) logging in my Java
> > > >> code?
> > > >>
> > > >> Thanks for your help.
> > > >>
> > > >>
> > > >
> > >
> > >
> > > --
> > > Raghav
> > >
> >
>
>
>
> --
> Raghav
>


Re: Reduce Kafka Client logging

2017-09-07 Thread Raghav
Hi Viktor

Can you please share the log4j config snippet that I should use. My Java
code's current log4j config looks like this. How should I add the new entry
that you mentioned? Thanks.


log4j.rootLogger=INFO, STDOUT

log4j.appender.STDOUT=org.apache.log4j.ConsoleAppender
log4j.appender.STDOUT.layout=org.apache.log4j.PatternLayout
log4j.appender.STDOUT.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p
%c{2}:%L %m%n

log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=logfile.log
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{dd-MM- HH:mm:ss} %-5p
%c{1}:%L - %m%n

On Thu, Sep 7, 2017 at 2:34 AM, Viktor Somogyi 
wrote:

> Hi Raghav,
>
> I think it is enough to raise the logging level
> of org.apache.kafka.clients.producer.ProducerConfig to WARN in log4j.
> Also, I'd like to mention that if possible, you shouldn't recreate the
> Kafka producer each time. The protocol is designed for long-lived
> connections, and recreating the connection each time puts pressure on the
> TCP layer (connection setup is expensive) and on Kafka itself, which may
> result in broker failures (typically by exceeding the maximum allowed
> number of file descriptors).
>
> HTH,
> Viktor
>
> On Thu, Sep 7, 2017 at 7:35 AM, Raghav  wrote:
>
> > Due to the nature of the code, I have to open a connection to a different
> > Kafka broker each time and send one message. We have several Kafka
> > brokers, so my client log is full of the following output. What log
> > settings should I use in log4j just for the Kafka producer logs?
> >
> >
> > 17/09/07 04:44:04 INFO producer.ProducerConfig:180 ProducerConfig values:
> > acks = all
> > batch.size = 16384
> > block.on.buffer.full = false
> > bootstrap.servers = [10.10.10.5:]
> > buffer.memory = 33554432
> > client.id =
> > compression.type = none
> > connections.max.idle.ms = 54
> > interceptor.classes = null
> > key.serializer = class
> > org.apache.kafka.common.serialization.StringSerializer
> > linger.ms = 1
> > max.block.ms = 5000
> > max.in.flight.requests.per.connection = 5
> > max.request.size = 1048576
> > metadata.fetch.timeout.ms = 6
> > metadata.max.age.ms = 30
> > metric.reporters = []
> > metrics.num.samples = 2
> > metrics.sample.window.ms = 3
> > partitioner.class = class
> > org.apache.kafka.clients.producer.internals.DefaultPartitioner
> > receive.buffer.bytes = 32768
> > reconnect.backoff.ms = 50
> > request.timeout.ms = 5000
> > retries = 0
> > retry.backoff.ms = 100
> > sasl.kerberos.kinit.cmd = /usr/bin/kinit
> > sasl.kerberos.min.time.before.relogin = 6
> > sasl.kerberos.service.name = null
> > sasl.kerberos.ticket.renew.jitter = 0.05
> > sasl.kerberos.ticket.renew.window.factor = 0.8
> > sasl.mechanism = GSSAPI
> > security.protocol = PLAINTEXT
> > send.buffer.bytes = 131072
> > ssl.cipher.suites = null
> > ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
> > ssl.endpoint.identification.algorithm = null
> > ssl.key.password = null
> > ssl.keymanager.algorithm = SunX509
> > ssl.keystore.location = null
> > ssl.keystore.password = null
> > ssl.keystore.type = JKS
> > ssl.protocol = TLS
> > ssl.provider = null
> > ssl.secure.random.implementation = null
> > ssl.trustmanager.algorithm = PKIX
> > ssl.truststore.location = null
> > ssl.truststore.password = null
> > ssl.truststore.type = JKS
> > timeout.ms = 3
> > value.serializer = class
> > org.apache.kafka.common.serialization.StringSerializer
> >
> > On Wed, Sep 6, 2017 at 9:37 PM, Jaikiran Pai 
> > wrote:
> >
> > > Can you post the exact log messages that you are seeing?
> > >
> > > -Jaikiran
> > >
> > >
> > >
> > > On 07/09/17 7:55 AM, Raghav wrote:
> > >
> > >> Hi
> > >>
> > >> My Java code prints the Kafka config every time it does a send, which
> > >> makes the log very, very verbose.
> > >>
> > >> How can I reduce the Kafka client (producer) logging in my Java code?
> > >>
> > >> Thanks for your help.
> > >>
> > >>
> > >
> >
> >
> > --
> > Raghav
> >
>



-- 
Raghav


Re: Reduce Kafka Client logging

2017-09-07 Thread Viktor Somogyi
Hi Raghav,

I think it is enough to raise the logging level
of org.apache.kafka.clients.producer.ProducerConfig to WARN in log4j.
Also, I'd like to mention that if possible, you shouldn't recreate the Kafka
producer each time. The protocol is designed for long-lived connections, and
recreating the connection each time puts pressure on the TCP layer
(connection setup is expensive) and on Kafka itself, which may result in
broker failures (typically by exceeding the maximum allowed number of file
descriptors).

HTH,
Viktor
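
The reuse Viktor recommends amounts to constructing the producer once and sharing it across all sends. A minimal sketch of that pattern in plain Java, using a hypothetical `ExpensiveClient` class as a stand-in for the real `KafkaProducer` (which would need the kafka-clients jar and a reachable broker):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Stand-in for an expensive client such as KafkaProducer: the point is
// that construction (connection setup) happens once, not once per message.
class ExpensiveClient {
    static final AtomicInteger constructions = new AtomicInteger();
    ExpensiveClient() { constructions.incrementAndGet(); }
    void send(String msg) { /* the network send would happen here */ }
}

public class ProducerReuse {
    // Initialization-on-demand holder idiom: thread-safe lazy singleton.
    private static class Holder {
        static final ExpensiveClient INSTANCE = new ExpensiveClient();
    }
    static ExpensiveClient producer() { return Holder.INSTANCE; }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            producer().send("message " + i);  // reused, never recreated
        }
        System.out.println(ExpensiveClient.constructions.get()); // prints 1
    }
}
```

With a real producer, the holder would build the client from the `Properties` shown in the config dump above, and `close()` would be called once at application shutdown rather than after every send.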

On Thu, Sep 7, 2017 at 7:35 AM, Raghav  wrote:

> Due to the nature of the code, I have to open a connection to a different
> Kafka broker each time and send one message. We have several Kafka brokers,
> so my client log is full of the following output. What log settings should
> I use in log4j just for the Kafka producer logs?
>
>
> 17/09/07 04:44:04 INFO producer.ProducerConfig:180 ProducerConfig values:
> acks = all
> batch.size = 16384
> block.on.buffer.full = false
> bootstrap.servers = [10.10.10.5:]
> buffer.memory = 33554432
> client.id =
> compression.type = none
> connections.max.idle.ms = 54
> interceptor.classes = null
> key.serializer = class
> org.apache.kafka.common.serialization.StringSerializer
> linger.ms = 1
> max.block.ms = 5000
> max.in.flight.requests.per.connection = 5
> max.request.size = 1048576
> metadata.fetch.timeout.ms = 6
> metadata.max.age.ms = 30
> metric.reporters = []
> metrics.num.samples = 2
> metrics.sample.window.ms = 3
> partitioner.class = class
> org.apache.kafka.clients.producer.internals.DefaultPartitioner
> receive.buffer.bytes = 32768
> reconnect.backoff.ms = 50
> request.timeout.ms = 5000
> retries = 0
> retry.backoff.ms = 100
> sasl.kerberos.kinit.cmd = /usr/bin/kinit
> sasl.kerberos.min.time.before.relogin = 6
> sasl.kerberos.service.name = null
> sasl.kerberos.ticket.renew.jitter = 0.05
> sasl.kerberos.ticket.renew.window.factor = 0.8
> sasl.mechanism = GSSAPI
> security.protocol = PLAINTEXT
> send.buffer.bytes = 131072
> ssl.cipher.suites = null
> ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
> ssl.endpoint.identification.algorithm = null
> ssl.key.password = null
> ssl.keymanager.algorithm = SunX509
> ssl.keystore.location = null
> ssl.keystore.password = null
> ssl.keystore.type = JKS
> ssl.protocol = TLS
> ssl.provider = null
> ssl.secure.random.implementation = null
> ssl.trustmanager.algorithm = PKIX
> ssl.truststore.location = null
> ssl.truststore.password = null
> ssl.truststore.type = JKS
> timeout.ms = 3
> value.serializer = class
> org.apache.kafka.common.serialization.StringSerializer
>
> On Wed, Sep 6, 2017 at 9:37 PM, Jaikiran Pai 
> wrote:
>
> > Can you post the exact log messages that you are seeing?
> >
> > -Jaikiran
> >
> >
> >
> > On 07/09/17 7:55 AM, Raghav wrote:
> >
> >> Hi
> >>
> >> My Java code prints the Kafka config every time it does a send, which
> >> makes the log very, very verbose.
> >>
> >> How can I reduce the Kafka client (producer) logging in my Java code?
> >>
> >> Thanks for your help.
> >>
> >>
> >
>
>
> --
> Raghav
>


Re: Reduce Kafka Client logging

2017-09-06 Thread Raghav
Due to the nature of the code, I have to open a connection to a different
Kafka broker each time and send one message. We have several Kafka brokers,
so my client log is full of the following output. What log settings should I
use in log4j just for the Kafka producer logs?


17/09/07 04:44:04 INFO producer.ProducerConfig:180 ProducerConfig values:
acks = all
batch.size = 16384
block.on.buffer.full = false
bootstrap.servers = [10.10.10.5:]
buffer.memory = 33554432
client.id =
compression.type = none
connections.max.idle.ms = 54
interceptor.classes = null
key.serializer = class
org.apache.kafka.common.serialization.StringSerializer
linger.ms = 1
max.block.ms = 5000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.fetch.timeout.ms = 6
metadata.max.age.ms = 30
metric.reporters = []
metrics.num.samples = 2
metrics.sample.window.ms = 3
partitioner.class = class
org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.ms = 50
request.timeout.ms = 5000
retries = 0
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 6
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
timeout.ms = 3
value.serializer = class
org.apache.kafka.common.serialization.StringSerializer

On Wed, Sep 6, 2017 at 9:37 PM, Jaikiran Pai 
wrote:

> Can you post the exact log messages that you are seeing?
>
> -Jaikiran
>
>
>
> On 07/09/17 7:55 AM, Raghav wrote:
>
>> Hi
>>
>> My Java code prints the Kafka config every time it does a send, which
>> makes the log very, very verbose.
>>
>> How can I reduce the Kafka client (producer) logging in my Java code?
>>
>> Thanks for your help.
>>
>>
>


-- 
Raghav


Re: Reduce Kafka Client logging

2017-09-06 Thread Jaikiran Pai

Can you post the exact log messages that you are seeing?

-Jaikiran


On 07/09/17 7:55 AM, Raghav wrote:

Hi

My Java code prints the Kafka config every time it does a send, which makes
the log very, very verbose.

How can I reduce the Kafka client (producer) logging in my Java code?

Thanks for your help.





Reduce Kafka Client logging

2017-09-06 Thread Raghav
Hi

My Java code prints the Kafka config every time it does a send, which makes
the log very, very verbose.

How can I reduce the Kafka client (producer) logging in my Java code?

Thanks for your help.

-- 
Raghav