Re: Apache Kafka integration using Apache Camel

2017-01-11 Thread Asaf Mesika
Don't specify the Kafka dependencies yourself; Camel will bring them in
transitively. Otherwise you will cause a version conflict.
On Mon, 9 Jan 2017 at 14:20 Kamal C  wrote:

> Can you enable DEBUG logs? That would help with debugging.
>
> -- Kamal
>
> On Mon, Jan 9, 2017 at 5:37 AM, Gupta, Swati  wrote:
>
> > Hello All,
> >
> > Any help on this would be appreciated.
> > There seems to be no error. Does it look like a version issue?
> >
> > I have updated my pom.xml with the below:
> > <dependency>
> >     <groupId>org.springframework.kafka</groupId>
> >     <artifactId>spring-kafka</artifactId>
> >     <version>1.1.2.BUILD-SNAPSHOT</version>
> > </dependency>
> >
> > <dependency>
> >     <groupId>org.apache.camel</groupId>
> >     <artifactId>camel-kafka</artifactId>
> >     <version>2.17.0</version>
> > </dependency>
> >
> > <dependency>
> >     <groupId>org.apache.kafka</groupId>
> >     <artifactId>kafka-clients</artifactId>
> >     <version>0.10.1.0</version>
> > </dependency>
> > <dependency>
> >     <groupId>org.apache.kafka</groupId>
> >     <artifactId>kafka_2.11</artifactId>
> >     <version>0.10.1.0</version>
> > </dependency>
> >
> > <dependency>
> >     <groupId>org.apache.camel</groupId>
> >     <artifactId>camel-core</artifactId>
> >     <version>2.17.0</version>
> > </dependency>
> >
> > Thanks & Regards
> > Swati
> >
> > -Original Message-
> > From: Gupta, Swati [mailto:swati.gu...@anz.com]
> > Sent: Friday, 6 January 2017 4:01 PM
> > To: users@kafka.apache.org
> > Subject: RE: Apache Kafka integration using Apache Camel
> >
> > Yes, the kafka console consumer displays the message correctly.
> > I also tested the same with a Java application, and it works fine. There
> > seems to be an issue with the Camel route trying to consume.
> >
> > There is no error in the console, but the logs show the following:
> > kafka.KafkaCamelTestConsumer
> > Connected to the target VM, address: '127.0.0.1:65007', transport: 'socket'
> > PID_IS_UNDEFINED: INFO  DefaultCamelContext - Apache Camel 2.17.0 (CamelContext: camel-1) is starting
> > PID_IS_UNDEFINED: INFO  ManagedManagementStrategy - JMX is enabled
> > PID_IS_UNDEFINED: INFO  DefaultTypeConverter - Loaded 183 type converters
> > PID_IS_UNDEFINED: INFO  DefaultRuntimeEndpointRegistry - Runtime endpoint registry is in extended mode gathering usage statistics of all incoming and outgoing endpoints (cache limit: 1000)
> > PID_IS_UNDEFINED: INFO  DefaultCamelContext - AllowUseOriginalMessage is enabled. If access to the original message is not needed, then its recommended to turn this option off as it may improve performance.
> > PID_IS_UNDEFINED: INFO  DefaultCamelContext - StreamCaching is not in use. If using streams then its recommended to enable stream caching. See more details at http://camel.apache.org/stream-caching.html
> > PID_IS_UNDEFINED: INFO  KafkaConsumer - Starting Kafka consumer
> > PID_IS_UNDEFINED: INFO  ConsumerConfig - ConsumerConfig values:
> > auto.commit.interval.ms = 5000
> > auto.offset.reset = earliest
> > bootstrap.servers = [localhost:9092]
> > check.crcs = true
> > client.id =
> > connections.max.idle.ms = 540000
> > enable.auto.commit = true
> > exclude.internal.topics = true
> > fetch.max.bytes = 52428800
> > fetch.max.wait.ms = 500
> > fetch.min.bytes = 1024
> > group.id = testing
> > heartbeat.interval.ms = 3000
> > interceptor.classes = null
> > key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
> > max.partition.fetch.bytes = 1048576
> > max.poll.interval.ms = 300000
> > max.poll.records = 500
> > metadata.max.age.ms = 300000
> > metric.reporters = []
> > metrics.num.samples = 2
> > metrics.sample.window.ms = 30000
> > partition.assignment.strategy = [org.apache.kafka.clients.consumer.RangeAssignor]
> > receive.buffer.bytes = 32768
> > reconnect.backoff.ms = 50
> > request.timeout.ms = 40000
> > retry.backoff.ms = 100
> > sasl.kerberos.kinit.cmd = /usr/bin/kinit
> > sasl.kerberos.min.time.before.relogin = 60000
> > sasl.kerberos.service.name = null
> > sasl.kerberos.ticket.renew.jitter = 0.05
> > sasl.kerberos.ticket.renew.window.factor = 0.8
> > sasl.mechanism = GSSAPI
> > security.protocol = PLAINTEXT
> > send.buffer.bytes = 131072
> > session.timeout.ms = 30000
> > ssl.cipher.suites = null
> > ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
> > ssl.endpoint.identification.algorithm = null
> > ssl.key.password = null
> > ssl.keymanager.algorithm = SunX509
> > ssl.keystore.location = null
> > ssl.keystore.password = null
> > ssl.keystore.type = JKS
> > ssl.protocol = TLS
> > ssl.provider = null
> > ssl.secure.random.implementation = null
> > ssl.trustmanager.algorithm = PKIX
> > ssl.truststore.location = null
> > ssl.truststore.password = null
> > 
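
For reference, a minimal Camel 2.17 consumer route might look like the sketch
below. The broker address and group id are taken from the ConsumerConfig dump
above; the topic name is a placeholder, and in Camel 2.17 the endpoint URI takes
the broker list first with the topic as a query option:

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class KafkaCamelTestConsumer {
    public static void main(String[] args) throws Exception {
        DefaultCamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // Camel 2.17 URI style: broker address first, topic as an option
                from("kafka:localhost:9092?topic=test-topic&groupId=testing"
                        + "&autoOffsetReset=earliest")
                    .log("Received: ${body}");
            }
        });
        context.start();
        Thread.sleep(60_000); // keep the JVM alive so the consumer can poll
        context.stop();
    }
}

Per Asaf's advice above, camel-kafka should be the only dependency that pulls in
the Kafka client; dropping the explicit kafka-clients and kafka_2.11 entries from
the pom avoids mixing client versions on the classpath.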

Re: [ANNOUNCE] New committer: Grant Henke

2017-01-11 Thread Jun Rao
Grant,

Thanks for all your contribution! Congratulations!

Jun

On Wed, Jan 11, 2017 at 2:51 PM, Gwen Shapira  wrote:

> The PMC for Apache Kafka has invited Grant Henke to join as a
> committer and we are pleased to announce that he has accepted!
>
> Grant contributed 88 patches, 90 code reviews, countless great
> comments on discussions, a much-needed cleanup to our protocol and the
> on-going and critical work on the Admin protocol. Throughout this, he
> displayed great technical judgment, high-quality work and willingness
> to contribute where needed to make Apache Kafka awesome.
>
> Thank you for your contributions, Grant :)
>
> --
> Gwen Shapira
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog
>


Re: [ANNOUNCE] New committer: Grant Henke

2017-01-11 Thread Martin Gainty
Good luck, Mr. Henke!


Martin
__




From: James Cheng 
Sent: Wednesday, January 11, 2017 5:37 PM
To: users@kafka.apache.org
Cc: d...@kafka.apache.org; priv...@kafka.apache.org
Subject: Re: [ANNOUNCE] New committer: Grant Henke

Congrats, Grant!!

-James

> On Jan 11, 2017, at 11:51 AM, Gwen Shapira  wrote:
>
> The PMC for Apache Kafka has invited Grant Henke to join as a
> committer and we are pleased to announce that he has accepted!
>
> Grant contributed 88 patches, 90 code reviews, countless great
> comments on discussions, a much-needed cleanup to our protocol and the
> on-going and critical work on the Admin protocol. Throughout this, he
> displayed great technical judgment, high-quality work and willingness
> to contribute where needed to make Apache Kafka awesome.
>
> Thank you for your contributions, Grant :)
>
> --
> Gwen Shapira
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog



java.lang.OutOfMemoryError: Java heap space while running kafka-consumer-perf-test.sh

2017-01-11 Thread Check Peck
I am trying to run the Kafka consumer performance script on my Linux box, but
whenever I run "kafka-consumer-perf-test.sh" I get an error. On the same box,
"kafka-producer-perf-test.sh" runs without failing, so something seems to be
wrong with "kafka-consumer-perf-test.sh" specifically.

I am running Kafka version 0.10.1.0.

Command I ran:
./bin/kafka-consumer-perf-test.sh --zookeeper 110.27.14.10:2181 --messages 50 --topic test-topic --threads 1

Error I got:
[2017-01-11 22:34:09,785] WARN [ConsumerFetcherThread-perf-consumer-14195_kafka-cluster-3098529006-zeidk-1484174043509-46a51434-2-0], Error in fetch kafka.consumer.ConsumerFetcherThread$FetchRequest@54fb48b6 (kafka.consumer.ConsumerFetcherThread)
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:93)
at kafka.network.BlockingChannel.readCompletely(BlockingChannel.scala:129)
at kafka.network.BlockingChannel.receive(BlockingChannel.scala:120)
at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:99)
at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:83)
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:132)
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:132)
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:132)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:131)
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:131)
at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:131)
at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:130)
at kafka.consumer.ConsumerFetcherThread.fetch(ConsumerFetcherThread.scala:109)
at kafka.consumer.ConsumerFetcherThread.fetch(ConsumerFetcherThread.scala:29)
at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:118)
at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:103)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
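
One common cause here is simply the tool's JVM heap: the perf-test scripts run
with a modest default heap unless KAFKA_HEAP_OPTS is set, and the old
ZooKeeper-based consumer that this tool uses allocates a fetch buffer per
partition. A quick check (a sketch; the 1G figure is just an example) is to
override the heap:

KAFKA_HEAP_OPTS="-Xmx1G -Xms1G" ./bin/kafka-consumer-perf-test.sh \
  --zookeeper 110.27.14.10:2181 --messages 50 --topic test-topic --threads 1

If that resolves it, an alternative is to shrink each fetch with the tool's
--fetch-size option so the per-partition buffers fit in the default heap.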


Re: [ANNOUNCE] New committer: Grant Henke

2017-01-11 Thread James Cheng
Congrats, Grant!!

-James

> On Jan 11, 2017, at 11:51 AM, Gwen Shapira  wrote:
> 
> The PMC for Apache Kafka has invited Grant Henke to join as a
> committer and we are pleased to announce that he has accepted!
> 
> Grant contributed 88 patches, 90 code reviews, countless great
> comments on discussions, a much-needed cleanup to our protocol and the
> on-going and critical work on the Admin protocol. Throughout this, he
> displayed great technical judgment, high-quality work and willingness
> to contribute where needed to make Apache Kafka awesome.
> 
> Thank you for your contributions, Grant :)
> 
> -- 
> Gwen Shapira
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog



Re: [ANNOUNCE] New committer: Grant Henke

2017-01-11 Thread Mayuresh Gharat
Congrats Grant !

Thanks,

Mayuresh

On Thu, Jan 12, 2017 at 3:47 AM, Kaufman Ng  wrote:

> Congrats Grant!
>
> On Wed, Jan 11, 2017 at 4:28 PM, Jay Kreps  wrote:
>
> > Congrats Grant!
> >
> > -Jay
> >
> > On Wed, Jan 11, 2017 at 11:51 AM, Gwen Shapira 
> wrote:
> >
> > > The PMC for Apache Kafka has invited Grant Henke to join as a
> > > committer and we are pleased to announce that he has accepted!
> > >
> > > Grant contributed 88 patches, 90 code reviews, countless great
> > > comments on discussions, a much-needed cleanup to our protocol and the
> > > on-going and critical work on the Admin protocol. Throughout this, he
> > > displayed great technical judgment, high-quality work and willingness
> > > to contribute where needed to make Apache Kafka awesome.
> > >
> > > Thank you for your contributions, Grant :)
> > >
> > > --
> > > Gwen Shapira
> > > Product Manager | Confluent
> > > 650.450.2760 | @gwenshap
> > > Follow us: Twitter | blog
> > >
> >
>
>
>
> --
> Kaufman Ng
> +1 646 961 8063
> Solutions Architect | Confluent | www.confluent.io
>



-- 
-Regards,
Mayuresh R. Gharat
(862) 250-7125


Re: [ANNOUNCE] New committer: Grant Henke

2017-01-11 Thread Becket Qin
Congrats Grant!

On Wed, Jan 11, 2017 at 2:17 PM, Kaufman Ng  wrote:

> Congrats Grant!
>
> On Wed, Jan 11, 2017 at 4:28 PM, Jay Kreps  wrote:
>
> > Congrats Grant!
> >
> > -Jay
> >
> > On Wed, Jan 11, 2017 at 11:51 AM, Gwen Shapira 
> wrote:
> >
> > > The PMC for Apache Kafka has invited Grant Henke to join as a
> > > committer and we are pleased to announce that he has accepted!
> > >
> > > Grant contributed 88 patches, 90 code reviews, countless great
> > > comments on discussions, a much-needed cleanup to our protocol and the
> > > on-going and critical work on the Admin protocol. Throughout this, he
> > > displayed great technical judgment, high-quality work and willingness
> > > to contribute where needed to make Apache Kafka awesome.
> > >
> > > Thank you for your contributions, Grant :)
> > >
> > > --
> > > Gwen Shapira
> > > Product Manager | Confluent
> > > 650.450.2760 | @gwenshap
> > > Follow us: Twitter | blog
> > >
> >
>
>
>
> --
> Kaufman Ng
> +1 646 961 8063
> Solutions Architect | Confluent | www.confluent.io
>


Re: [ANNOUNCE] New committer: Grant Henke

2017-01-11 Thread Kaufman Ng
Congrats Grant!

On Wed, Jan 11, 2017 at 4:28 PM, Jay Kreps  wrote:

> Congrats Grant!
>
> -Jay
>
> On Wed, Jan 11, 2017 at 11:51 AM, Gwen Shapira  wrote:
>
> > The PMC for Apache Kafka has invited Grant Henke to join as a
> > committer and we are pleased to announce that he has accepted!
> >
> > Grant contributed 88 patches, 90 code reviews, countless great
> > comments on discussions, a much-needed cleanup to our protocol and the
> > on-going and critical work on the Admin protocol. Throughout this, he
> > displayed great technical judgment, high-quality work and willingness
> > to contribute where needed to make Apache Kafka awesome.
> >
> > Thank you for your contributions, Grant :)
> >
> > --
> > Gwen Shapira
> > Product Manager | Confluent
> > 650.450.2760 | @gwenshap
> > Follow us: Twitter | blog
> >
>



-- 
Kaufman Ng
+1 646 961 8063
Solutions Architect | Confluent | www.confluent.io


Re: [ANNOUNCE] New committer: Grant Henke

2017-01-11 Thread Jay Kreps
Congrats Grant!

-Jay

On Wed, Jan 11, 2017 at 11:51 AM, Gwen Shapira  wrote:

> The PMC for Apache Kafka has invited Grant Henke to join as a
> committer and we are pleased to announce that he has accepted!
>
> Grant contributed 88 patches, 90 code reviews, countless great
> comments on discussions, a much-needed cleanup to our protocol and the
> on-going and critical work on the Admin protocol. Throughout this, he
> displayed great technical judgment, high-quality work and willingness
> to contribute where needed to make Apache Kafka awesome.
>
> Thank you for your contributions, Grant :)
>
> --
> Gwen Shapira
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog
>


Re: [ANNOUNCE] New committer: Grant Henke

2017-01-11 Thread Matthias J. Sax
Congrats!

On 1/11/17 12:52 PM, Mickael Maison wrote:
> Congratulations Grant !
> 
> On Wed, Jan 11, 2017 at 8:29 PM, Guozhang Wang  wrote:
>> Congratulations Grant!!
>>
>> On Wed, Jan 11, 2017 at 12:06 PM, Ben Stopford  wrote:
>>
>>> Congrats Grant!!
>>> On Wed, 11 Jan 2017 at 20:01, Ismael Juma  wrote:
>>>
 Congratulations Grant, well deserved. :)

 Ismael

 On 11 Jan 2017 7:51 pm, "Gwen Shapira"  wrote:

> The PMC for Apache Kafka has invited Grant Henke to join as a
> committer and we are pleased to announce that he has accepted!
>
> Grant contributed 88 patches, 90 code reviews, countless great
> comments on discussions, a much-needed cleanup to our protocol and the
> on-going and critical work on the Admin protocol. Throughout this, he
> displayed great technical judgment, high-quality work and willingness
> to contribute where needed to make Apache Kafka awesome.
>
> Thank you for your contributions, Grant :)
>
> --
> Gwen Shapira
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog
>

>>>
>>
>>
>>
>> --
>> -- Guozhang





Re: [ANNOUNCE] New committer: Grant Henke

2017-01-11 Thread Mickael Maison
Congratulations Grant !

On Wed, Jan 11, 2017 at 8:29 PM, Guozhang Wang  wrote:
> Congratulations Grant!!
>
> On Wed, Jan 11, 2017 at 12:06 PM, Ben Stopford  wrote:
>
>> Congrats Grant!!
>> On Wed, 11 Jan 2017 at 20:01, Ismael Juma  wrote:
>>
>> > Congratulations Grant, well deserved. :)
>> >
>> > Ismael
>> >
>> > On 11 Jan 2017 7:51 pm, "Gwen Shapira"  wrote:
>> >
>> > > The PMC for Apache Kafka has invited Grant Henke to join as a
>> > > committer and we are pleased to announce that he has accepted!
>> > >
>> > > Grant contributed 88 patches, 90 code reviews, countless great
>> > > comments on discussions, a much-needed cleanup to our protocol and the
>> > > on-going and critical work on the Admin protocol. Throughout this, he
>> > > displayed great technical judgment, high-quality work and willingness
>> > > to contribute where needed to make Apache Kafka awesome.
>> > >
>> > > Thank you for your contributions, Grant :)
>> > >
>> > > --
>> > > Gwen Shapira
>> > > Product Manager | Confluent
>> > > 650.450.2760 | @gwenshap
>> > > Follow us: Twitter | blog
>> > >
>> >
>>
>
>
>
> --
> -- Guozhang


Re: [ANNOUNCE] New committer: Grant Henke

2017-01-11 Thread Guozhang Wang
Congratulations Grant!!

On Wed, Jan 11, 2017 at 12:06 PM, Ben Stopford  wrote:

> Congrats Grant!!
> On Wed, 11 Jan 2017 at 20:01, Ismael Juma  wrote:
>
> > Congratulations Grant, well deserved. :)
> >
> > Ismael
> >
> > On 11 Jan 2017 7:51 pm, "Gwen Shapira"  wrote:
> >
> > > The PMC for Apache Kafka has invited Grant Henke to join as a
> > > committer and we are pleased to announce that he has accepted!
> > >
> > > Grant contributed 88 patches, 90 code reviews, countless great
> > > comments on discussions, a much-needed cleanup to our protocol and the
> > > on-going and critical work on the Admin protocol. Throughout this, he
> > > displayed great technical judgment, high-quality work and willingness
> > > to contribute where needed to make Apache Kafka awesome.
> > >
> > > Thank you for your contributions, Grant :)
> > >
> > > --
> > > Gwen Shapira
> > > Product Manager | Confluent
> > > 650.450.2760 | @gwenshap
> > > Follow us: Twitter | blog
> > >
> >
>



-- 
-- Guozhang


Re: [ANNOUNCE] New committer: Grant Henke

2017-01-11 Thread Eno Thereska
Congrats!

Eno
> On 11 Jan 2017, at 20:06, Ben Stopford  wrote:
> 
> Congrats Grant!!
> On Wed, 11 Jan 2017 at 20:01, Ismael Juma  wrote:
> 
>> Congratulations Grant, well deserved. :)
>> 
>> Ismael
>> 
>> On 11 Jan 2017 7:51 pm, "Gwen Shapira"  wrote:
>> 
>>> The PMC for Apache Kafka has invited Grant Henke to join as a
>>> committer and we are pleased to announce that he has accepted!
>>> 
>>> Grant contributed 88 patches, 90 code reviews, countless great
>>> comments on discussions, a much-needed cleanup to our protocol and the
>>> on-going and critical work on the Admin protocol. Throughout this, he
>>> displayed great technical judgment, high-quality work and willingness
>>> to contribute where needed to make Apache Kafka awesome.
>>> 
>>> Thank you for your contributions, Grant :)
>>> 
>>> --
>>> Gwen Shapira
>>> Product Manager | Confluent
>>> 650.450.2760 | @gwenshap
>>> Follow us: Twitter | blog
>>> 
>> 



Re: Is there a performance problem with new broker + old log.message.format.version + new consumer?

2017-01-11 Thread Ismael Juma
Hi Jeff,

The new consumer also supports the old message format without requiring
conversion.

Ismael

On 11 Jan 2017 6:52 pm, "Jeff Widman"  wrote:

> We upgraded our Kafka clusters from 0.8.2.1 to 0.10.0.1, but most of our
> consumers use older libraries that do not support the new message format.
> So we set the brokers' log.message.format.version to 0.8.2 while we work on
> upgrading our consumers.
>
> In the meantime, I'm worried about a performance problem with consumers
> that have upgraded and are requesting messages using the new Kafka 10
> versions of those API calls.
>
> I may be misunderstanding, but it seems logical that the performance
> problem isn't just about old consumers with a new broker. I would think the
> performance problem would also exist if we take new brokers, set the log
> format to an old version, then have our consumers make API calls using the
> Kafka 10 API calls. The broker would need to do on-the-fly conversion from
> the 0.8.2 log format up to the 0.10.0 format to send to the new consumers.
> This is the inverse problem of what's mentioned here:
> https://kafka.apache.org/documentation/#upgrade_10_performance_impact
>
> Is this a valid problem?
>
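
For reference, the broker settings involved in the scenario Jeff describes look
like this sketch of server.properties (version numbers mirror his upgrade path):

# Brokers run 0.10.0.1, but the on-disk message format stays old
# until all consumers are upgraded:
inter.broker.protocol.version=0.10.0
log.message.format.version=0.8.2

Per Ismael's reply, a 0.10.x consumer can read the 0.8.2 format directly, so no
up-conversion cost applies in this direction; the documented down-conversion
penalty only affects old consumers reading the new format.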


Re: [VOTE] KIP-106 - Default unclean.leader.election.enabled True => False

2017-01-11 Thread Jeff Widman
+1 nonbinding. We were bit by this in a production environment.

On Wed, Jan 11, 2017 at 11:42 AM, Ian Wrigley  wrote:

> +1 (non-binding)
>
> > On Jan 11, 2017, at 11:33 AM, Jay Kreps  wrote:
> >
> > +1
> >
> > On Wed, Jan 11, 2017 at 10:56 AM, Ben Stopford  wrote:
> >
> >> Looks like there was a good consensus on the discuss thread for KIP-106
> so
> >> lets move to a vote.
> >>
> >> Please chime in if you would like to change the default for
> >> unclean.leader.election.enabled from true to false.
> >>
> >> https://cwiki.apache.org/confluence/display/KAFKA/%
> >> 5BWIP%5D+KIP-106+-+Change+Default+unclean.leader.
> >> election.enabled+from+True+to+False
> >>
> >> B
> >>
>
>


Re: [ANNOUNCE] New committer: Grant Henke

2017-01-11 Thread Ben Stopford
Congrats Grant!!
On Wed, 11 Jan 2017 at 20:01, Ismael Juma  wrote:

> Congratulations Grant, well deserved. :)
>
> Ismael
>
> On 11 Jan 2017 7:51 pm, "Gwen Shapira"  wrote:
>
> > The PMC for Apache Kafka has invited Grant Henke to join as a
> > committer and we are pleased to announce that he has accepted!
> >
> > Grant contributed 88 patches, 90 code reviews, countless great
> > comments on discussions, a much-needed cleanup to our protocol and the
> > on-going and critical work on the Admin protocol. Throughout this, he
> > displayed great technical judgment, high-quality work and willingness
> > to contribute where needed to make Apache Kafka awesome.
> >
> > Thank you for your contributions, Grant :)
> >
> > --
> > Gwen Shapira
> > Product Manager | Confluent
> > 650.450.2760 | @gwenshap
> > Follow us: Twitter | blog
> >
>


Re: [ANNOUNCE] New committer: Grant Henke

2017-01-11 Thread Ismael Juma
Congratulations Grant, well deserved. :)

Ismael

On 11 Jan 2017 7:51 pm, "Gwen Shapira"  wrote:

> The PMC for Apache Kafka has invited Grant Henke to join as a
> committer and we are pleased to announce that he has accepted!
>
> Grant contributed 88 patches, 90 code reviews, countless great
> comments on discussions, a much-needed cleanup to our protocol and the
> on-going and critical work on the Admin protocol. Throughout this, he
> displayed great technical judgment, high-quality work and willingness
> to contribute where needed to make Apache Kafka awesome.
>
> Thank you for your contributions, Grant :)
>
> --
> Gwen Shapira
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog
>


Re: [ANNOUNCE] New committer: Grant Henke

2017-01-11 Thread Vahid S Hashemian
Congrats Grant!

--Vahid



From:   Sriram Subramanian 
To: users@kafka.apache.org
Cc: d...@kafka.apache.org, priv...@kafka.apache.org
Date:   01/11/2017 11:58 AM
Subject:Re: [ANNOUNCE] New committer: Grant Henke



Congratulations Grant!

On Wed, Jan 11, 2017 at 11:51 AM, Gwen Shapira  wrote:

> The PMC for Apache Kafka has invited Grant Henke to join as a
> committer and we are pleased to announce that he has accepted!
>
> Grant contributed 88 patches, 90 code reviews, countless great
> comments on discussions, a much-needed cleanup to our protocol and the
> on-going and critical work on the Admin protocol. Throughout this, he
> displayed great technical judgment, high-quality work and willingness
> to contribute where needed to make Apache Kafka awesome.
>
> Thank you for your contributions, Grant :)
>
> --
> Gwen Shapira
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog
>






Re: [ANNOUNCE] New committer: Grant Henke

2017-01-11 Thread Jason Gustafson
Congrats!

On Wed, Jan 11, 2017 at 11:57 AM, Sriram Subramanian 
wrote:

> Congratulations Grant!
>
> On Wed, Jan 11, 2017 at 11:51 AM, Gwen Shapira  wrote:
>
> > The PMC for Apache Kafka has invited Grant Henke to join as a
> > committer and we are pleased to announce that he has accepted!
> >
> > Grant contributed 88 patches, 90 code reviews, countless great
> > comments on discussions, a much-needed cleanup to our protocol and the
> > on-going and critical work on the Admin protocol. Throughout this, he
> > displayed great technical judgment, high-quality work and willingness
> > to contribute where needed to make Apache Kafka awesome.
> >
> > Thank you for your contributions, Grant :)
> >
> > --
> > Gwen Shapira
> > Product Manager | Confluent
> > 650.450.2760 | @gwenshap
> > Follow us: Twitter | blog
> >
>


Re: [ANNOUNCE] New committer: Grant Henke

2017-01-11 Thread Sriram Subramanian
Congratulations Grant!

On Wed, Jan 11, 2017 at 11:51 AM, Gwen Shapira  wrote:

> The PMC for Apache Kafka has invited Grant Henke to join as a
> committer and we are pleased to announce that he has accepted!
>
> Grant contributed 88 patches, 90 code reviews, countless great
> comments on discussions, a much-needed cleanup to our protocol and the
> on-going and critical work on the Admin protocol. Throughout this, he
> displayed great technical judgment, high-quality work and willingness
> to contribute where needed to make Apache Kafka awesome.
>
> Thank you for your contributions, Grant :)
>
> --
> Gwen Shapira
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog
>


[ANNOUNCE] New committer: Grant Henke

2017-01-11 Thread Gwen Shapira
The PMC for Apache Kafka has invited Grant Henke to join as a
committer and we are pleased to announce that he has accepted!

Grant contributed 88 patches, 90 code reviews, countless great
comments on discussions, a much-needed cleanup to our protocol and the
on-going and critical work on the Admin protocol. Throughout this, he
displayed great technical judgment, high-quality work and willingness
to contribute where needed to make Apache Kafka awesome.

Thank you for your contributions, Grant :)

-- 
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog


Re: [VOTE] KIP-106 - Default unclean.leader.election.enabled True => False

2017-01-11 Thread Ian Wrigley
+1 (non-binding)

> On Jan 11, 2017, at 11:33 AM, Jay Kreps  wrote:
> 
> +1
> 
> On Wed, Jan 11, 2017 at 10:56 AM, Ben Stopford  wrote:
> 
>> Looks like there was a good consensus on the discuss thread for KIP-106 so
>> lets move to a vote.
>> 
>> Please chime in if you would like to change the default for
>> unclean.leader.election.enabled from true to false.
>> 
>> https://cwiki.apache.org/confluence/display/KAFKA/%
>> 5BWIP%5D+KIP-106+-+Change+Default+unclean.leader.
>> election.enabled+from+True+to+False
>> 
>> B
>> 



Re: [VOTE] KIP-106 - Default unclean.leader.election.enabled True => False

2017-01-11 Thread Jay Kreps
+1

On Wed, Jan 11, 2017 at 10:56 AM, Ben Stopford  wrote:

> Looks like there was a good consensus on the discuss thread for KIP-106 so
> lets move to a vote.
>
> Please chime in if you would like to change the default for
> unclean.leader.election.enabled from true to false.
>
> https://cwiki.apache.org/confluence/display/KAFKA/%
> 5BWIP%5D+KIP-106+-+Change+Default+unclean.leader.
> election.enabled+from+True+to+False
>
> B
>


Re: [VOTE] KIP-106 - Default unclean.leader.election.enabled True => False

2017-01-11 Thread Grant Henke
+1

On Wed, Jan 11, 2017 at 1:23 PM, Sriram Subramanian 
wrote:

> +1
>
> On Wed, Jan 11, 2017 at 11:10 AM, Ismael Juma  wrote:
>
> > Thanks for raising this, +1.
> >
> > Ismael
> >
> > On Wed, Jan 11, 2017 at 6:56 PM, Ben Stopford  wrote:
> >
> > > Looks like there was a good consensus on the discuss thread for KIP-106
> > so
> > > lets move to a vote.
> > >
> > > Please chime in if you would like to change the default for
> > > unclean.leader.election.enabled from true to false.
> > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/%
> > > 5BWIP%5D+KIP-106+-+Change+Default+unclean.leader.
> > > election.enabled+from+True+to+False
> > >
> > > B
> > >
> >
>



-- 
Grant Henke
Software Engineer | Cloudera
gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke


Re: [VOTE] KIP-106 - Default unclean.leader.election.enabled True => False

2017-01-11 Thread Sriram Subramanian
+1

On Wed, Jan 11, 2017 at 11:10 AM, Ismael Juma  wrote:

> Thanks for raising this, +1.
>
> Ismael
>
> On Wed, Jan 11, 2017 at 6:56 PM, Ben Stopford  wrote:
>
> > Looks like there was a good consensus on the discuss thread for KIP-106
> so
> > lets move to a vote.
> >
> > Please chime in if you would like to change the default for
> > unclean.leader.election.enabled from true to false.
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/%
> > 5BWIP%5D+KIP-106+-+Change+Default+unclean.leader.
> > election.enabled+from+True+to+False
> >
> > B
> >
>


Re: [VOTE] KIP-106 - Default unclean.leader.election.enabled True => False

2017-01-11 Thread Ismael Juma
Thanks for raising this, +1.

Ismael

On Wed, Jan 11, 2017 at 6:56 PM, Ben Stopford  wrote:

> Looks like there was a good consensus on the discuss thread for KIP-106 so
> lets move to a vote.
>
> Please chime in if you would like to change the default for
> unclean.leader.election.enabled from true to false.
>
> https://cwiki.apache.org/confluence/display/KAFKA/%
> 5BWIP%5D+KIP-106+-+Change+Default+unclean.leader.
> election.enabled+from+True+to+False
>
> B
>


Re: [VOTE] KIP-106 - Default unclean.leader.election.enabled True => False

2017-01-11 Thread Onur Karaman
+1

On Wed, Jan 11, 2017 at 11:06 AM, Jason Gustafson 
wrote:

> +1
>
> On Wed, Jan 11, 2017 at 10:56 AM, Ben Stopford  wrote:
>
> > Looks like there was a good consensus on the discuss thread for KIP-106
> so
> > lets move to a vote.
> >
> > Please chime in if you would like to change the default for
> > unclean.leader.election.enabled from true to false.
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/%
> > 5BWIP%5D+KIP-106+-+Change+Default+unclean.leader.
> > election.enabled+from+True+to+False
> >
> > B
> >
>


Re: [VOTE] KIP-106 - Default unclean.leader.election.enabled True => False

2017-01-11 Thread Jason Gustafson
+1

On Wed, Jan 11, 2017 at 10:56 AM, Ben Stopford  wrote:

> Looks like there was a good consensus on the discuss thread for KIP-106 so
> lets move to a vote.
>
> Please chime in if you would like to change the default for
> unclean.leader.election.enabled from true to false.
>
> https://cwiki.apache.org/confluence/display/KAFKA/%
> 5BWIP%5D+KIP-106+-+Change+Default+unclean.leader.
> election.enabled+from+True+to+False
>
> B
>


[VOTE] KIP-106 - Default unclean.leader.election.enabled True => False

2017-01-11 Thread Ben Stopford
Looks like there was a good consensus on the discuss thread for KIP-106, so
let's move to a vote.

Please chime in if you would like to change the default for
unclean.leader.election.enabled from true to false.

https://cwiki.apache.org/confluence/display/KAFKA/%5BWIP%5D+KIP-106+-+Change+Default+unclean.leader.election.enabled+from+True+to+False

B
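
For anyone wanting the proposed default ahead of the KIP, a sketch of the
relevant settings (note the actual broker property is spelled
unclean.leader.election.enable, without the trailing "d"):

# server.properties - cluster-wide default
unclean.leader.election.enable=false

# or as a per-topic override, e.g. (topic name is a placeholder):
# bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic my-topic \
#     --config unclean.leader.election.enable=false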


Re: [VOTE] Vote for KIP-101 - Leader Epochs

2017-01-11 Thread Ben Stopford
OK - my mistake was mistaken! There is consensus. This KIP has been
accepted.

On Wed, Jan 11, 2017 at 6:48 PM Ben Stopford  wrote:

> Sorry - my mistake. Looks like I still need one more binding vote. Is
> there a committer out there that could add their vote?
>
> B
>
> On Wed, Jan 11, 2017 at 6:44 PM Ben Stopford  wrote:
>
> So I believe we can mark this as Accepted. I've updated the KIP page.
> Thanks for the input everyone.
>
> On Fri, Jan 6, 2017 at 9:31 AM Ben Stopford  wrote:
>
> Thanks Joel. I'll fix up the pics to make them consistent on nomenclature.
>
>
> B
>
> On Fri, Jan 6, 2017 at 2:39 AM Joel Koshy  wrote:
>
> (adding the dev list back - as it seems to have gotten dropped earlier in
> this thread)
>
> On Thu, Jan 5, 2017 at 6:36 PM, Joel Koshy  wrote:
>
> > +1
> >
> > This is a very well-written KIP!
> > Minor: there is still a mix of terms in the doc that references the
> > earlier LeaderGenerationRequest (which is what I'm assuming what it was
> > called in previous versions of the wiki). Same for the diagrams which I'm
> > guessing are a little harder to make consistent with the text.
> >
> >
> >
> > On Thu, Jan 5, 2017 at 5:54 PM, Jun Rao  wrote:
> >
> >> Hi, Ben,
> >>
> >> Thanks for the updated KIP. +1
> >>
> >> 1) In OffsetForLeaderEpochResponse, start_offset probably should be
> >> end_offset since it's the end offset of that epoch.
> >> 3) That's fine. We can fix KAFKA-1120 separately.
> >>
> >> Jun
> >>
> >>
> >> On Thu, Jan 5, 2017 at 11:11 AM, Ben Stopford  wrote:
> >>
> >> > Hi Jun
> >> >
> >> > Thanks for raising these points. Thorough as ever!
> >> >
> >> > 1) Changes made as requested.
> >> > 2) Done.
> >> > 3) My plan for handling returning leaders is simply to force the Leader
> >> > Epoch to increment if a leader returns. I don't plan to fix KAFKA-1120
> >> as
> >> > part of this KIP. It is really a separate issue with wider
> implications.
> >> > I'd be happy to add KAFKA-1120 into the release though if we have
> time.
> >> > 4) Agreed. Not sure exactly how that's going to play out, but I think
> >> we're
> >> > on the same page.
> >> >
> >> > Please could
> >> >
> >> > Cheers
> >> > B
> >> >
> >> > On Thu, Jan 5, 2017 at 12:50 AM Jun Rao  wrote:
> >> >
> >> > > Hi, Ben,
> >> > >
> >> > > Thanks for the proposal. Looks good overall. A few comments below.
> >> > >
> >> > > 1. For LeaderEpochRequest, we need to include topic right? We
> probably
> >> > want
> >> > > to follow other requests by nesting partition inside topic? For
> >> > > LeaderEpochResponse,
> >> > > do we need to return leader_epoch? I was thinking that we could just
> >> > return
> >> > > an end_offset, which is the next offset of the last message in the
> >> > > requested leader generation. Finally, would
> >> OffsetForLeaderEpochRequest
> >> > be
> >> > > a better name?
> >> > >
> >> > > 2. We should bump up both the produce request and the fetch request
> >> > > protocol version since both include the message set.
> >> > >
> >> > > 3. Extending LeaderEpoch to include Returning Leaders: To support
> >> this,
> >> > do
> >> > > you plan to use the approach that stores  CZXID in the broker
> >> > registration
> >> > > and including the CZXID of the leader in /brokers/topics/[topic]/
> >> > > partitions/[partitionId]/state in ZK?
> >> > >
> >> > > 4. Since there are a few other KIPs involving message format too, it
> >> > would
> >> > > be useful to consider if we could combine the message format changes
> >> in
> >> > the
> >> > > same release.
> >> > >
> >> > > Thanks,
> >> > >
> >> > > Jun
> >> > >
> >> > >
> >> > > On Wed, Jan 4, 2017 at 9:24 AM, Ben Stopford 
> >> wrote:
> >> > >
> >> > > > Hi All
> >> > > >
> >> > > > We’re having some problems with this thread being subsumed by the
> >> > > > [Discuss] thread. Hopefully this one will appear distinct. If you
> >> see
> >> > > more
> >> > > > than one, please use this one.
> >> > > >
> >> > > > KIP-101 should now be ready for a vote. As a reminder the KIP
> >> proposes
> >> > a
> >> > > > change to the replication protocol to remove the potential for
> >> replicas
> >> > > to
> >> > > > diverge.
> >> > > >
> >> > > > The KIP can be found here:  https://cwiki.apache.org/confl
> >> > > > uence/display/KAFKA/KIP-101+-+Alter+Replication+Protocol+to+
> >> > > > use+Leader+Epoch+rather+than+High+Watermark+for+Truncation <
> >> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-101+-
> >> > > > +Alter+Replication+Protocol+to+use+Leader+Epoch+rather+
> >> > > > than+High+Watermark+for+Truncation>
> >> > > >
> >> > > > Please let us know your vote.
> >> > > >
> >> > > > B
> >> > > >
> >> > > >
> >> > > >
> >> > > >
> >> > > >
> >> > >
> >> >
> >>
> >
> >
>
>
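
For readers following along, an illustrative sketch of the truncation rule under
discussion (invented names, not the real Kafka code): the leader tracks the start
offset of each leader epoch, answers an OffsetForLeaderEpoch request with the end
offset of the requested epoch, and the follower truncates to the smaller of that
and its own log end offset:

import java.util.NavigableMap;
import java.util.TreeMap;

public class LeaderEpochSketch {
    // leader epoch -> offset of the first message appended in that epoch
    private final NavigableMap<Integer, Long> epochStartOffsets = new TreeMap<>();

    void onNewLeaderEpoch(int epoch, long firstOffset) {
        epochStartOffsets.put(epoch, firstOffset);
    }

    // End offset of an epoch = start offset of the next epoch, or the log end
    // offset if the requested epoch is still the latest one.
    long endOffsetForEpoch(int requestedEpoch, long logEndOffset) {
        Integer nextEpoch = epochStartOffsets.higherKey(requestedEpoch);
        return nextEpoch == null ? logEndOffset : epochStartOffsets.get(nextEpoch);
    }

    // Follower-side rule: truncate divergent entries beyond the leader's epoch end.
    static long truncationOffset(long followerLogEndOffset, long leaderEpochEndOffset) {
        return Math.min(followerLogEndOffset, leaderEpochEndOffset);
    }
}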


Is there a performance problem with new broker + old log.message.format.version + new consumer?

2017-01-11 Thread Jeff Widman
We upgraded our Kafka clusters from 0.8.2.1 to 0.10.0.1, but most of our
consumers use older libraries that do not support the new message format.
So we set the brokers' log.message.format.version to 0.8.2 while we work on
upgrading our consumers.

In the meantime, I'm worried about a performance problem with consumers
that have upgraded and are requesting messages using the new Kafka 10
versions of those API calls.

I may be misunderstanding, but it seems logical that the performance
problem isn't just about old consumers with a new broker. I would think the
performance problem would also exist if we take new brokers, set the log
format to an old version, then have our consumers make API calls using the
Kafka 10 API calls. The broker would need to do on-the-fly conversion from
the 0.8.2 log format up to the 0.10.0 format to send to the new consumers.
This is the inverse problem of what's mentioned here:
https://kafka.apache.org/documentation/#upgrade_10_performance_impact

Is this a valid problem?


Re: [VOTE] Vote for KIP-101 - Leader Epochs

2017-01-11 Thread Ben Stopford
Sorry - my mistake. Looks like I still need one more binding vote. Is there
a committer out there that could add their vote?

B

On Wed, Jan 11, 2017 at 6:44 PM Ben Stopford  wrote:

> So I believe we can mark this as Accepted. I've updated the KIP page.
> Thanks for the input everyone.
>
> On Fri, Jan 6, 2017 at 9:31 AM Ben Stopford  wrote:
>
> Thanks Joel. I'll fix up the pics to make them consistent on nomenclature.
>
>
> B
>
> On Fri, Jan 6, 2017 at 2:39 AM Joel Koshy  wrote:
>
> (adding the dev list back - as it seems to have gotten dropped earlier in
> this thread)
>
> On Thu, Jan 5, 2017 at 6:36 PM, Joel Koshy  wrote:
>
> > +1
> >
> > This is a very well-written KIP!
> > Minor: there is still a mix of terms in the doc that references the
> > earlier LeaderGenerationRequest (which is what I'm assuming what it was
> > called in previous versions of the wiki). Same for the diagrams which I'm
> > guessing are a little harder to make consistent with the text.
> >
> >
> >
> > On Thu, Jan 5, 2017 at 5:54 PM, Jun Rao  wrote:
> >
> >> Hi, Ben,
> >>
> >> Thanks for the updated KIP. +1
> >>
> >> 1) In OffsetForLeaderEpochResponse, start_offset probably should be
> >> end_offset since it's the end offset of that epoch.
> >> 3) That's fine. We can fix KAFKA-1120 separately.
> >>
> >> Jun
> >>
> >>
> >> On Thu, Jan 5, 2017 at 11:11 AM, Ben Stopford  wrote:
> >>
> >> > Hi Jun
> >> >
> >> > Thanks for raising these points. Thorough as ever!
> >> >
> >> > 1) Changes made as requested.
> >> > 2) Done.
> >> > 3) My plan for handling returning leaders is simply to force the Leader
> >> > Epoch to increment if a leader returns. I don't plan to fix KAFKA-1120
> >> as
> >> > part of this KIP. It is really a separate issue with wider
> implications.
> >> > I'd be happy to add KAFKA-1120 into the release though if we have
> time.
> >> > 4) Agreed. Not sure exactly how that's going to play out, but I think
> >> we're
> >> > on the same page.
> >> >
> >> > Please could
> >> >
> >> > Cheers
> >> > B
> >> >
> >> > On Thu, Jan 5, 2017 at 12:50 AM Jun Rao  wrote:
> >> >
> >> > > Hi, Ben,
> >> > >
> >> > > Thanks for the proposal. Looks good overall. A few comments below.
> >> > >
> >> > > 1. For LeaderEpochRequest, we need to include topic right? We
> probably
> >> > want
> >> > > to follow other requests by nesting partition inside topic? For
> >> > > LeaderEpochResponse,
> >> > > do we need to return leader_epoch? I was thinking that we could just
> >> > return
> >> > > an end_offset, which is the next offset of the last message in the
> >> > > requested leader generation. Finally, would
> >> OffsetForLeaderEpochRequest
> >> > be
> >> > > a better name?
> >> > >
> >> > > 2. We should bump up both the produce request and the fetch request
> >> > > protocol version since both include the message set.
> >> > >
> >> > > 3. Extending LeaderEpoch to include Returning Leaders: To support
> >> this,
> >> > do
> >> > > you plan to use the approach that stores  CZXID in the broker
> >> > registration
> >> > > and including the CZXID of the leader in /brokers/topics/[topic]/
> >> > > partitions/[partitionId]/state in ZK?
> >> > >
> >> > > 4. Since there are a few other KIPs involving message format too, it
> >> > would
> >> > > be useful to consider if we could combine the message format changes
> >> in
> >> > the
> >> > > same release.
> >> > >
> >> > > Thanks,
> >> > >
> >> > > Jun
> >> > >
> >> > >
> >> > > On Wed, Jan 4, 2017 at 9:24 AM, Ben Stopford 
> >> wrote:
> >> > >
> >> > > > Hi All
> >> > > >
> >> > > > We’re having some problems with this thread being subsumed by the
> >> > > > [Discuss] thread. Hopefully this one will appear distinct. If you
> >> see
> >> > > more
> >> > > > than one, please use this one.
> >> > > >
> >> > > > KIP-101 should now be ready for a vote. As a reminder the KIP
> >> proposes
> >> > a
> >> > > > change to the replication protocol to remove the potential for
> >> replicas
> >> > > to
> >> > > > diverge.
> >> > > >
> >> > > > The KIP can be found here:  https://cwiki.apache.org/confl
> >> > > > uence/display/KAFKA/KIP-101+-+Alter+Replication+Protocol+to+
> >> > > > use+Leader+Epoch+rather+than+High+Watermark+for+Truncation <
> >> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-101+-
> >> > > > +Alter+Replication+Protocol+to+use+Leader+Epoch+rather+
> >> > > > than+High+Watermark+for+Truncation>
> >> > > >
> >> > > > Please let us know your vote.
> >> > > >
> >> > > > B
> >> > > >
> >> > > >
> >> > > >
> >> > > >
> >> > > >
> >> > >
> >> >
> >>
> >
> >
>
>


Re: [VOTE] Vote for KIP-101 - Leader Epochs

2017-01-11 Thread Ben Stopford
So I believe we can mark this as Accepted. I've updated the KIP page.
Thanks for the input everyone.

On Fri, Jan 6, 2017 at 9:31 AM Ben Stopford  wrote:

> Thanks Joel. I'll fix up the pics to make them consistent on nomenclature.
>
>
> B
>
> On Fri, Jan 6, 2017 at 2:39 AM Joel Koshy  wrote:
>
> (adding the dev list back - as it seems to have gotten dropped earlier in
> this thread)
>
> On Thu, Jan 5, 2017 at 6:36 PM, Joel Koshy  wrote:
>
> > +1
> >
> > This is a very well-written KIP!
> > Minor: there is still a mix of terms in the doc that references the
> > earlier LeaderGenerationRequest (which is what I'm assuming what it was
> > called in previous versions of the wiki). Same for the diagrams which I'm
> > guessing are a little harder to make consistent with the text.
> >
> >
> >
> > On Thu, Jan 5, 2017 at 5:54 PM, Jun Rao  wrote:
> >
> >> Hi, Ben,
> >>
> >> Thanks for the updated KIP. +1
> >>
> >> 1) In OffsetForLeaderEpochResponse, start_offset probably should be
> >> end_offset since it's the end offset of that epoch.
> >> 3) That's fine. We can fix KAFKA-1120 separately.
> >>
> >> Jun
> >>
> >>
> >> On Thu, Jan 5, 2017 at 11:11 AM, Ben Stopford  wrote:
> >>
> >> > Hi Jun
> >> >
> >> > Thanks for raising these points. Thorough as ever!
> >> >
> >> > 1) Changes made as requested.
> >> > 2) Done.
> >> > 3) My plan for handling returning leaders is simply to force the Leader
> >> > Epoch to increment if a leader returns. I don't plan to fix KAFKA-1120
> >> as
> >> > part of this KIP. It is really a separate issue with wider
> implications.
> >> > I'd be happy to add KAFKA-1120 into the release though if we have
> time.
> >> > 4) Agreed. Not sure exactly how that's going to play out, but I think
> >> we're
> >> > on the same page.
> >> >
> >> > Please could
> >> >
> >> > Cheers
> >> > B
> >> >
> >> > On Thu, Jan 5, 2017 at 12:50 AM Jun Rao  wrote:
> >> >
> >> > > Hi, Ben,
> >> > >
> >> > > Thanks for the proposal. Looks good overall. A few comments below.
> >> > >
> >> > > 1. For LeaderEpochRequest, we need to include topic right? We
> probably
> >> > want
> >> > > to follow other requests by nesting partition inside topic? For
> >> > > LeaderEpochResponse,
> >> > > do we need to return leader_epoch? I was thinking that we could just
> >> > return
> >> > > an end_offset, which is the next offset of the last message in the
> >> > > requested leader generation. Finally, would
> >> OffsetForLeaderEpochRequest
> >> > be
> >> > > a better name?
> >> > >
> >> > > 2. We should bump up both the produce request and the fetch request
> >> > > protocol version since both include the message set.
> >> > >
> >> > > 3. Extending LeaderEpoch to include Returning Leaders: To support
> >> this,
> >> > do
> >> > > you plan to use the approach that stores  CZXID in the broker
> >> > registration
> >> > > and including the CZXID of the leader in /brokers/topics/[topic]/
> >> > > partitions/[partitionId]/state in ZK?
> >> > >
> >> > > 4. Since there are a few other KIPs involving message format too, it
> >> > would
> >> > > be useful to consider if we could combine the message format changes
> >> in
> >> > the
> >> > > same release.
> >> > >
> >> > > Thanks,
> >> > >
> >> > > Jun
> >> > >
> >> > >
> >> > > On Wed, Jan 4, 2017 at 9:24 AM, Ben Stopford 
> >> wrote:
> >> > >
> >> > > > Hi All
> >> > > >
> >> > > > We’re having some problems with this thread being subsumed by the
> >> > > > [Discuss] thread. Hopefully this one will appear distinct. If you
> >> see
> >> > > more
> >> > > > than one, please use this one.
> >> > > >
> >> > > > KIP-101 should now be ready for a vote. As a reminder the KIP
> >> proposes
> >> > a
> >> > > > change to the replication protocol to remove the potential for
> >> replicas
> >> > > to
> >> > > > diverge.
> >> > > >
> >> > > > The KIP can be found here:  https://cwiki.apache.org/confl
> >> > > > uence/display/KAFKA/KIP-101+-+Alter+Replication+Protocol+to+
> >> > > > use+Leader+Epoch+rather+than+High+Watermark+for+Truncation <
> >> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-101+-
> >> > > > +Alter+Replication+Protocol+to+use+Leader+Epoch+rather+
> >> > > > than+High+Watermark+for+Truncation>
> >> > > >
> >> > > > Please let us know your vote.
> >> > > >
> >> > > > B
> >> > > >
> >> > > >
> >> > > >
> >> > > >
> >> > > >
> >> > >
> >> >
> >>
> >
> >
>
>


Re: Interpreting value of JMX metric for partition count

2017-01-11 Thread Gwen Shapira
The metric is "partition count per broker". You didn't mention how
many brokers and how many replicas you have for each topic, but if you
have 1 replica and 2 brokers, then this looks reasonable. Probably 35
partitions on one broker and 33 on the other or something similar. I'd
recommend getting at least 3 brokers and having at least 3 replicas on
each topic though - for availability.

On Tue, Jan 10, 2017 at 7:25 PM, Abhishek Agrawal
 wrote:
> Hello Kafka Users,
>
>My kafka cluster has two topics: one with 50 partitions, another with 18
> partitions.
>
> The JMX bean *kafka.server:name=PartitionCount,type=ReplicaManager*, gives
> value as 35 when I try to probe using JMXTerm
>
> $>get -b kafka.server:name=PartitionCount,type=ReplicaManager Value
> #mbean = kafka.server:name=PartitionCount,type=ReplicaManager:
> Value = 35;
>
>
> Can someone help me understand if this value is supposed to be 'average
> partition count per topic'  or 'total partition count for all topics'?
>
> I want to have separate JMX metric for partition count of each topic. Can
> someone point me to configuration examples where this has been achieved?
>
> Regards,
> Abhishek



-- 
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
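
If what you want is a per-topic partition count, I'm not aware of a single
built-in MBean exposing it; one simple workaround (a sketch; topic name and
ZooKeeper address are placeholders) is to count partitions from the topic
metadata instead:

bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic my-topic | grep -c "Partition:"

Each "Partition:" line in the describe output corresponds to one partition of
that topic.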


RE: kafka-connect log4j

2017-01-11 Thread Berryman, Eric

Silly mistake on my part.

Thank you very much for your time!

(I'll submit a pull request as mentioned)



-Original Message-
From: Gwen Shapira [mailto:g...@confluent.io] 
Sent: Wednesday, January 11, 2017 1:25 AM
To: Users 
Subject: Re: kafka-connect log4j

btw. It bugs me a bit that Connect logs to console and not to file by default. 
I think tools should log to console, but Connect is more of a service / daemon 
and should log to file like the brokers do. So when you get your log4j config 
to work, perhaps submit a PR to Apache Kafka so we'll all enjoy this? :)

On Tue, Jan 10, 2017 at 10:23 PM, Gwen Shapira  wrote:
> Is your goal to simply log connect to file rather than to the console?
> In this case your configuration is almost right. Just change the first 
> line in connect-log4j.properties to:
>
> log4j.rootLogger=INFO, stdout, connectAppender
>
> and then add the lines you have in your email.
>
> Or you can get rid of stdout appender completely if you prefer.
>
> You may find the log4j primer useful:
> https://logging.apache.org/log4j/1.2/manual.html
>
> On Tue, Jan 10, 2017 at 7:42 AM, Berryman, Eric  wrote:
>> Hello!
>>
>> Is there a log4j.appender.connectAppender?
>>
>> I noticed there is a log4j.appender.kafkaAppender.
>> I was hoping to setup the connect-log4j.properties like kafka's.
>>
>> log4j.appender.connectAppender=org.apache.log4j.DailyRollingFileAppender
>> log4j.appender.connectAppender.DatePattern='.'yyyy-MM-dd-HH
>> log4j.appender.connectAppender.File=${kafka.logs.dir}/connect.log
>> log4j.appender.connectAppender.layout=org.apache.log4j.PatternLayout
>> log4j.appender.connectAppender.layout.ConversionPattern=[%d] %p %m
>> (%c:%L)%n
>>
>> Thank you!
>> Eric
>
>
>
> --
> Gwen Shapira
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog



--
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog
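
Putting the two messages together, a complete connect-log4j.properties might
look like this sketch (the stdout appender mirrors the one Kafka ships; file
path and patterns follow the broker's log4j conventions):

log4j.rootLogger=INFO, stdout, connectAppender

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c:%L)%n

log4j.appender.connectAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.connectAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.connectAppender.File=${kafka.logs.dir}/connect.log
log4j.appender.connectAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.connectAppender.layout.ConversionPattern=[%d] %p %m (%c:%L)%n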


Re: Interpreting value of JMX metric for partition count

2017-01-11 Thread Sreejith S
Hi Abhishek,

Check this if you are looking for a Java implementation. This will give you
all Kafka related metrics.

https://github.com/srijiths/kafka-connectors/tree/master/kafka-connect-jmx

Regards,
Srijith

On Wed, Jan 11, 2017 at 8:55 AM, Abhishek Agrawal <
abhishek.agrawal.1...@gmail.com> wrote:

> Hello Kafka Users,
>
>My kafka cluster has two topics: one with 50 partitions, another with 18
> partitions.
>
> The JMX bean *kafka.server:name=PartitionCount,type=ReplicaManager*, gives
> value as 35 when I try to probe using JMXTerm
>
> $>get -b kafka.server:name=PartitionCount,type=ReplicaManager Value
> #mbean = kafka.server:name=PartitionCount,type=ReplicaManager:
> Value = 35;
>
>
> Can someone help me understand if this value is supposed to be 'average
> partition count per topic'  or 'total partition count for all topics'?
>
> I want to have separate JMX metric for partition count of each topic. Can
> someone point me to configuration examples where this has been achieved?
>
> Regards,
> Abhishek
>



-- 


*Sreejith.S*
https://github.com/srijiths/
http://srijiths.wordpress.com/
tweet2sree@twitter 


Interpreting value of JMX metric for partition count

2017-01-11 Thread Abhishek Agrawal
Hello Kafka Users,

   My kafka cluster has two topics: one with 50 partitions, another with 18
partitions.

The JMX bean *kafka.server:name=PartitionCount,type=ReplicaManager*, gives
value as 35 when I try to probe using JMXTerm

$>get -b kafka.server:name=PartitionCount,type=ReplicaManager Value
#mbean = kafka.server:name=PartitionCount,type=ReplicaManager:
Value = 35;


Can someone help me understand if this value is supposed to be 'average
partition count per topic'  or 'total partition count for all topics'?

I want to have separate JMX metric for partition count of each topic. Can
someone point me to configuration examples where this has been achieved?

Regards,
Abhishek


Re: changelog topic with in-memory state store

2017-01-11 Thread Damian Guy
Hi,

There is no way to enable caching on in-memory-store - by definition it is
already cached. However the in-memory store will write each update to the
changelog (regardless of context.commit), which seems to be the issue you
have?

When you say large, how large? Have you tested it and observed that it puts
load on the broker?

Thanks,
Damian

On Wed, 11 Jan 2017 at 06:10 Daisuke Moriya  wrote:

Hi.

I am developing a simple log counting application using Kafka Streams
0.10.1.1.
Its implementation is almost the same as the WordCountProcessor in the
confluent document [
http://docs.confluent.io/3.1.1/streams/developer-guide.html#processor-api].
I am using an in-memory state store; its key is the ID of a log category and
its value is the count.
All the changelogs are written to the broker by context.commit() for fault
tolerance, but since the data I handle is large and the key space is large,
this takes a long time to process.
Even if the topic is compacted on the broker, this still puts load on the broker.
I would like to write only the latest records for each key on the broker
instead of all changelogs at context.commit().
This will reduce the load on the broker and I do not think
there will be any negative impact on fault tolerance.
If I use the persistent state store, I can do this by enabling caching,
but I couldn't find how to accomplish this with the in-memory state store.
Can I do this?

Thank you,
--
Daisuke
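
As an illustration of the persistent-store alternative Damian mentions, here is
a sketch using the 0.10.1 Processor API (the store name and key/value types are
assumptions based on the description above). With caching enabled, only the
latest value per key is flushed to the changelog at each commit, rather than
every intermediate update:

import org.apache.kafka.streams.processor.StateStoreSupplier;
import org.apache.kafka.streams.state.Stores;

public class CountStoreSupplier {
    public static StateStoreSupplier countStore() {
        return Stores.create("log-category-counts")
                .withStringKeys()  // log category ID
                .withLongValues()  // running count
                .persistent()      // RocksDB-backed; the in-memory factory has no enableCaching()
                .enableCaching()   // deduplicate changelog writes per commit interval (KIP-63)
                .build();
    }
}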