Re: Frequent appearance of "Marking the coordinator dead" message in consumer log

2018-08-30 Thread Shantanu Deshmukh
I have noticed very strange behaviour with the session.timeout.ms setting.
There are 2 topics for which message processing takes a long time, so I had
increased the session timeout for them to 5 minutes. max.poll.records was kept
at 10. Consumers for these topics would only start consuming 5-10 minutes
after subscribing. I reset session.timeout.ms to its default value and now
consumers subscribe and start consuming immediately. Rebalances have also
reduced.

Now what is this? When a rebalance occurs, the log message says that you need
to increase session.timeout.ms or reduce max.poll.records. But if I increase
session.timeout.ms to any value above the default, consumers start very
slowly. Has anyone seen such behaviour, or can anyone explain why this is
happening?

On Wed, Aug 29, 2018 at 12:04 PM Shantanu Deshmukh 
wrote:

> Hi Ryanne,
>
> Thanks for your response. I had even tried with 5 records and a session
> timeout as big as 10 minutes. Logs still showed that the consumer group
> rebalanced many times.
> Also there is another mystery: some CGs take up to 10 minutes to subscribe
> to a topic and start consumption. Why might that be happening, any idea?
>
> On Tue, Aug 28, 2018 at 8:44 PM Ryanne Dolan 
> wrote:
>
>> Shantanu,
>>
>> Sounds like your consumers are processing too many records between
>> poll()s.
>> Notice that max.poll.records is 50. If your consumer is taking up to 200ms
>> to process each record, then you'd see up to 10 seconds between poll()s.
>>
>> If a consumer doesn't call poll() frequently enough, Kafka will consider
>> the consumer to be dead and will rebalance away from it. Since all your
>> consumers are in this state, your consumer group is constantly
>> rebalancing.
>>
>> Fix is easy: reduce max.poll.records.
>>
>> Ryanne
>>
>> On Tue, Aug 28, 2018 at 6:34 AM Shantanu Deshmukh 
>> wrote:
>>
>> > Someone, please help me. Only 1 or 2 out of 7 consumer groups keep
>> > rebalancing every 5-10mins. One topic is constantly receiving 10-20
>> > msg/sec. The other one receives a bulk load after many hours of
>> inactivity.
>> > CGs for both these topics are different. So, I see no observable pattern
>> > here.
>> >
>> > On Wed, Aug 22, 2018 at 5:47 PM Shantanu Deshmukh <shantanu...@gmail.com>
>> > wrote:
>> >
>> > > I know the average time to process one record: it is about 70-80ms. I
>> > > have set session.timeout.ms so high that the total processing time for
>> > > one poll invocation should be well within it.
>> > >
>> > > On Wed, Aug 22, 2018 at 5:04 PM Steve Tian <steve.cs.t...@gmail.com>
>> > > wrote:
>> > >
>> > >> Have you measured the duration between two `poll` invocations and the
>> > >> size of the returned `ConsumerRecords`?
>> > >>
>> > >> On Wed, Aug 22, 2018, 7:00 PM Shantanu Deshmukh <shantanu...@gmail.com>
>> > >> wrote:
>> > >>
>> > >> > Ohh sorry, my bad. Kafka version is 0.10.1.0 indeed and so is the
>> > >> > client.
>> > >> >
>> > >> > On Wed, Aug 22, 2018 at 4:26 PM Steve Tian <steve.cs.t...@gmail.com>
>> > >> > wrote:
>> > >> >
>> > >> > > NVM.  What's your client version?  I'm asking because
>> > >> > > max.poll.interval.ms was introduced in 0.10.1.0, which is not the
>> > >> > > version you mentioned in the email thread.
>> > >> > >
>> > >> > > On Wed, Aug 22, 2018, 6:51 PM Shantanu Deshmukh
>> > >> > > <shantanu...@gmail.com> wrote:
>> > >> > >
>> > >> > > > How do I check for GC pausing?
>> > >> > > >
>> > >> > > > On Wed, Aug 22, 2018 at 4:12 PM Steve Tian
>> > >> > > > <steve.cs.t...@gmail.com> wrote:
>> > >> > > >
>> > >> > > > > Did you observe any GC pausing?
>> > >> > > > >
>> > >> > > > > On Wed, Aug 22, 2018, 6:38 PM Shantanu Deshmukh
>> > >> > > > > <shantanu...@gmail.com> wrote:
>> > >> > > > >
>> > >> > > > > > Hi Steve,
>> > >> > > > > >
>> > >> > > > > > The application is just sending mails. Every record is just
>> > >> > > > > > an email request with very basic business logic. Generally
>> > >> > > > > > it doesn't take more than 200ms to process a single mail.
>> > >> > > > > > Currently it is averaging out at 70-80 ms.
>> > >> > > > > >
>> > >> > > > > > On Wed, Aug 22, 2018 at 3:06 PM Steve Tian
>> > >> > > > > > <steve.cs.t...@gmail.com> wrote:
>> > >> > > > > >
>> > >> > > > > > > How long did it take to process 50 `ConsumerRecord`s?
>> > >> > > > > > >
>> > >> > > > > > > On Wed, Aug 22, 2018, 5:16 PM Shantanu Deshmukh
>> > >> > > > > > > <shantanu...@gmail.com> wrote:
>> > >> > > > > > >
>> > >> > > > > > > > Hello,
>> > >> > > > > > > >
>> > >> > > > > > > > We have Kafka 0.10.0.1 running on a 3-broker cluster. We
>> > >> > > > > > > > have an application which consumes from a topic having
>> > >> > > > > > > > 10 partitions. 10 consumers are spawned from this
>> > >> > > > > > > > process; they belong to one consumer group.
>> > >> > > > > > > >
>> > >> > > > > > > > What we have observed is that very frequently we 

Kafka streams: topic partitions->consumer 1:1 mapping not happening

2018-08-30 Thread kaustubh khasnis
Hi,
I have written a Streams application to talk to a topic with 10 partitions on
a cluster of 5 brokers. I have tried multiple combinations here, like 10
application instances (on 10 different machines) with 1 stream thread each, or
5 instances with 2 threads each. But for some reason, when I check in Kafka
Manager, the 1:1 mapping between partitions and stream threads is not
happening. Some of the threads are picking up 2 partitions while some are
picking up none. Can you please help me with this? All threads are part of the
same group and subscribed to only one topic.

Thanks a lot for your help
Kaustubh
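
A minimal sketch of the setup being described, assuming a 1.0+ Streams client
and default configuration otherwise (application id, broker address, and topic
name are placeholders). Kafka Streams creates one task per input partition (10
here) and spreads tasks across all live stream threads in the group, so a
stable group with 10 threads would normally end up with one partition each:

import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app"); // shared by all instances
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 2); // e.g. 5 instances x 2 threads

StreamsBuilder builder = new StreamsBuilder();
builder.<byte[], byte[]>stream("input-topic")
       .foreach((key, value) -> { /* process the record */ });

KafkaStreams streams = new KafkaStreams(builder.build(), props);
streams.start();

Uneven assignments like the one described can also appear transiently while
instances are still joining, since partitions are only redistributed when the
group rebalances.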


Re: [DISCUSS] KIP-369 Alternative Partitioner to Support "Always Round-Robin" Selection

2018-08-30 Thread Stephen Powis
Neat, this would be super helpful! I submitted this ages ago:
https://issues.apache.org/jira/browse/KAFKA-

On Fri, Aug 31, 2018 at 5:04 AM, Satish Duggana 
wrote:

> +including both dev and user mailing lists.
>
> Hi,
> Thanks for the KIP.
>
> "* For us, the message keys represent some metadata which we use to either
> ignore messages (if a loop-back to the sender), or log some information.*"
>
> The above statement in the KIP describes how the key value is used. I guess
> the topic is not configured to be compacted and you do not want partitioning
> based on that key. IMHO, it qualifies more as a header than a key. What do
> you think about building records with a specific header, with consumers
> deciding whether to process or ignore messages based on that header value?
>
> Thanks,
> Satish.
>
>
> On Fri, Aug 31, 2018 at 1:32 AM, Satish Duggana 
> wrote:
>
> > Hi,
> > Thanks for the KIP.
> >
> > "* For us, the message keys represent some metadata which we use to
> > either ignore messages (if a loop-back to the sender), or log some
> > information.*"
> >
> > The above statement in the KIP describes how the key value is used. I
> > guess the topic is not configured to be compacted and you do not want
> > partitioning based on that key. IMHO, it qualifies more as a header than
> > a key. What do you think about building records with a specific header,
> > with consumers deciding whether to process or ignore messages based on
> > that header value?
> >
> > Thanks,
> > Satish.
> >
> >
> > On Fri, Aug 31, 2018 at 12:02 AM, M. Manna  wrote:
> >
> >> Hi Harsha,
> >>
> >> thanks for reading the KIP.
> >>
> >> The intent is to use the DefaultPartitioner logic for round-robin
> >> selection
> >> of partition regardless of key type.
> >>
> >> Implementing the Partitioner interface isn’t the issue here; you would
> >> have to do that anyway if you were implementing your own. But we also
> >> want this to be part of the formal codebase.
> >>
> >> Regards,
> >>
> >> On Thu, 30 Aug 2018 at 16:58, Harsha  wrote:
> >>
> >> > Hi,
>> >   Thanks for the KIP. I am trying to understand the intent of the
>> > KIP.  Is there a reason the use case you specified can't be achieved by
>> > implementing the Partitioner interface here?
>> > https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/producer/Partitioner.java#L28
>> > Configure your custom partitioner in your producer clients.
> >> >
> >> > Thanks,
> >> > Harsha
> >> >
> >> > On Thu, Aug 30, 2018, at 1:45 AM, M. Manna wrote:
> >> > > Hello,
> >> > >
> >> > > I opened a very simple KIP and there exists a JIRA for it.
> >> > >
> >> > > I would be grateful if any comments are available for action.
> >> > >
> >> > > Regards,
> >> >
> >>
> >
> >
>


Re: Kafka Polling alternate

2018-08-30 Thread Satish Duggana
There is no push mechanism available for consumers in Kafka. What is the
current timeout passed to KafkaConsumer#poll(timeout)? You can increase
that timeout to avoid calling poll frequently.

Thanks,
Satish.
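
A minimal sketch of that poll loop (the timeout value is illustrative and
process() is a hypothetical handler); poll() blocks until records arrive or
the timeout elapses, so a long timeout stops the loop from spinning without
needing a callback:

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;

// consumer: an already-configured, subscribed KafkaConsumer<String, String>
while (running) {
    // Blocks for up to 10s, but returns as soon as the broker has data,
    // so the thread sleeps instead of spinning while the topic is idle.
    ConsumerRecords<String, String> records = consumer.poll(10_000L);
    for (ConsumerRecord<String, String> record : records) {
        process(record);
    }
}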

On Fri, Aug 31, 2018 at 12:27 AM, Pulkit Manchanda 
wrote:

> HI All,
> I have a consumer application continuously polling for the record in a
> while loop wasting the CPU cycles.
> Is there any alternative like I get a callback/event from Kafka sooner the
> producer publishes the record to the topic.
> Thanks
> Pulkit
>


Re: [DISCUSS] KIP-369 Alternative Partitioner to Support "Always Round-Robin" Selection

2018-08-30 Thread Satish Duggana
+including both dev and user mailing lists.

Hi,
Thanks for the KIP.

"* For us, the message keys represent some metadata which we use to either
ignore messages (if a loop-back to the sender), or log some information.*"

The above statement in the KIP describes how the key value is used. I guess
the topic is not configured to be compacted and you do not want partitioning
based on that key. IMHO, it qualifies more as a header than a key. What do you
think about building records with a specific header, with consumers deciding
whether to process or ignore messages based on that header value?

Thanks,
Satish.
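
A minimal sketch of the header-based approach, assuming brokers and clients on
0.11.0+ (where record headers were introduced); the topic, header name and
value, and the in-scope producer/consumerRecord variables are placeholders:

import java.nio.charset.StandardCharsets;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.header.Header;

// Producer side: carry the metadata in a header instead of the key.
ProducerRecord<String, String> record = new ProducerRecord<>("my-topic", null, "payload");
record.headers().add("origin", "sender-1".getBytes(StandardCharsets.UTF_8));
producer.send(record);

// Consumer side: decide whether to process based on the header value.
Header origin = consumerRecord.headers().lastHeader("origin");
boolean loopback = origin != null
        && "sender-1".equals(new String(origin.value(), StandardCharsets.UTF_8));
if (!loopback) {
    // process the record
}

With the key left null, the default partitioner also distributes records
round-robin, which is the behaviour the KIP is after.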


On Fri, Aug 31, 2018 at 1:32 AM, Satish Duggana 
wrote:

> Hi,
> Thanks for the KIP.
>
> "* For us, the message keys represent some metadata which we use to
> either ignore messages (if a loop-back to the sender), or log some
> information.*"
>
> The above statement in the KIP describes how the key value is used. I guess
> the topic is not configured to be compacted and you do not want partitioning
> based on that key. IMHO, it qualifies more as a header than a key. What do
> you think about building records with a specific header, with consumers
> deciding whether to process or ignore messages based on that header value?
>
> Thanks,
> Satish.
>
>
> On Fri, Aug 31, 2018 at 12:02 AM, M. Manna  wrote:
>
>> Hi Harsha,
>>
>> thanks for reading the KIP.
>>
>> The intent is to use the DefaultPartitioner logic for round-robin
>> selection
>> of partition regardless of key type.
>>
>> Implementing the Partitioner interface isn’t the issue here; you would have
>> to do that anyway if you were implementing your own. But we also want this
>> to be part of the formal codebase.
>>
>> Regards,
>>
>> On Thu, 30 Aug 2018 at 16:58, Harsha  wrote:
>>
>> > Hi,
>> >   Thanks for the KIP. I am trying to understand the intent of the
>> > KIP.  Is there a reason the use case you specified can't be achieved by
>> > implementing the Partitioner interface here?
>> > https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/producer/Partitioner.java#L28
>> > Configure your custom partitioner in your producer clients.
>> >
>> > Thanks,
>> > Harsha
>> >
>> > On Thu, Aug 30, 2018, at 1:45 AM, M. Manna wrote:
>> > > Hello,
>> > >
>> > > I opened a very simple KIP and there exists a JIRA for it.
>> > >
>> > > I would be grateful if any comments are available for action.
>> > >
>> > > Regards,
>> >
>>
>
>


Re: [DISCUSS] KIP-369 Alternative Partitioner to Support "Always Round-Robin" Selection

2018-08-30 Thread Satish Duggana
Hi,
Thanks for the KIP.

"* For us, the message keys represent some metadata which we use to either
ignore messages (if a loop-back to the sender), or log some information.*"

The above statement in the KIP describes how the key value is used. I guess
the topic is not configured to be compacted and you do not want partitioning
based on that key. IMHO, it qualifies more as a header than a key. What do you
think about building records with a specific header, with consumers deciding
whether to process or ignore messages based on that header value?

Thanks,
Satish.


On Fri, Aug 31, 2018 at 12:02 AM, M. Manna  wrote:

> Hi Harsha,
>
> thanks for reading the KIP.
>
> The intent is to use the DefaultPartitioner logic for round-robin selection
> of partition regardless of key type.
>
> Implementing the Partitioner interface isn’t the issue here; you would have
> to do that anyway if you were implementing your own. But we also want this
> to be part of the formal codebase.
>
> Regards,
>
> On Thu, 30 Aug 2018 at 16:58, Harsha  wrote:
>
> > Hi,
> >   Thanks for the KIP. I am trying to understand the intent of the
> > KIP.  Is there a reason the use case you specified can't be achieved by
> > implementing the Partitioner interface here?
> > https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/producer/Partitioner.java#L28
> > Configure your custom partitioner in your producer clients.
> >
> > Thanks,
> > Harsha
> >
> > On Thu, Aug 30, 2018, at 1:45 AM, M. Manna wrote:
> > > Hello,
> > >
> > > I opened a very simple KIP and there exists a JIRA for it.
> > >
> > > I would be grateful if any comments are available for action.
> > >
> > > Regards,
> >
>


Kafka Polling alternate

2018-08-30 Thread Pulkit Manchanda
Hi All,
I have a consumer application continuously polling for records in a while
loop, wasting CPU cycles.
Is there any alternative, like getting a callback/event from Kafka as soon as
the producer publishes a record to the topic?
Thanks
Pulkit


Re: [DISCUSS] KIP-369 Alternative Partitioner to Support "Always Round-Robin" Selection

2018-08-30 Thread M. Manna
Hi Harsha,

thanks for reading the KIP.

The intent is to use the DefaultPartitioner logic for round-robin selection
of partitions regardless of key type.

Implementing the Partitioner interface isn’t the issue here; you would have to
do that anyway if you were implementing your own. But we also want this to be
part of the formal codebase.

Regards,

On Thu, 30 Aug 2018 at 16:58, Harsha  wrote:

> Hi,
>   Thanks for the KIP. I am trying to understand the intent of the
> KIP.  Is there a reason the use case you specified can't be achieved by
> implementing the Partitioner interface here?
> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/producer/Partitioner.java#L28
> Configure your custom partitioner in your producer clients.
>
> Thanks,
> Harsha
>
> On Thu, Aug 30, 2018, at 1:45 AM, M. Manna wrote:
> > Hello,
> >
> > I opened a very simple KIP and there exists a JIRA for it.
> >
> > I would be grateful if any comments are available for action.
> >
> > Regards,
>


Re: [DISCUSS] KIP-369 Alternative Partitioner to Support "Always Round-Robin" Selection

2018-08-30 Thread M. Manna
Hey Bill,

Thanks for reading the KIP, much appreciated.

The reasons we want it to be a separate Partitioner are:

a) We don’t want to specify a partition anywhere.

b) We want to be able to reuse what’s been done for the NULL key in
DefaultPartitioner.

Using the constructor means we would need to assign partitions manually in
the code. I’m not sure if I managed to clarify our need.

Also, this means no change in any existing code. It’s a new class in Kafka
trunk which doesn’t disrupt any existing operation.

Thanks,

On Thu, 30 Aug 2018 at 18:12, Bill Bejeck  wrote:

> Hi,
>
> NOTE: I sent this earlier, but that message just went to the dev list.  I'm
> including both users and dev now.
>
> Thanks for the KIP.
>
> Have you considered using the overloaded ProducerRecord constructor where
> you can specify the partition?   I mention this as an option as I
> encountered the same issue on a previous project and that is how we handled
> round-robin distribution with non-null keys.
>
> Would that suit your needs?
>
> Thanks,
> Bill
>
> On Thu, Aug 30, 2018 at 11:58 AM Harsha  wrote:
>
> > Hi,
> >   Thanks for the KIP. I am trying to understand the intent of the
> > KIP.  Is there a reason the use case you specified can't be achieved by
> > implementing the Partitioner interface here?
> > https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/producer/Partitioner.java#L28
> > Configure your custom partitioner in your producer clients.
> >
> > Thanks,
> > Harsha
> >
> > On Thu, Aug 30, 2018, at 1:45 AM, M. Manna wrote:
> > > Hello,
> > >
> > > I opened a very simple KIP and there exists a JIRA for it.
> > >
> > > I would be grateful if any comments are available for action.
> > >
> > > Regards,
> >
>


Re: Fail to resolve kafka-streams 2.0.0

2018-08-30 Thread Celso Axelrud
I included the following line before the other dependency lines, and it
worked:

libraryDependencies += "javax.ws.rs" % "javax.ws.rs-api" % "2.1" artifacts(
Artifact("javax.ws.rs-api", "jar", "jar"))

Thanks Guozhang

On Thu, Aug 30, 2018 at 12:01 PM Guozhang Wang  wrote:

> I saw the library in maven central:
> https://mvnrepository.com/artifact/javax.ws.rs/javax.ws.rs-api/2.1
>
> Your maven repo seems also have this:
>
> https://repo1.maven.org/maven2/javax/ws/rs/javax.ws.rs-api/2.1/
>
>
> Maybe it is your {packaging.type} variable is not defined?
>
> Guozhang
>
>
> On Thu, Aug 30, 2018 at 9:52 AM, Celso Axelrud  wrote:
>
> > Hi,
> > I get the following message when I try to use kafka-streams 2.0.0
> (failing
> > to download javax.ws.rs-api) in sbt.
> > There is no problem when using kafka-streams 1.1.1.
> > I tried to use Scala versions 2.11 and 2.12 but I got the same results.
> > I am using Windows 10.
> >
> > SBT:
> > name := "KafkaProj1"
> > version := "0.1"
> > scalaVersion := "2.12.6"  //scalaVersion := "2.11.12"
> > libraryDependencies += "org.apache.kafka" %% "kafka" % "2.0.0"
> > libraryDependencies += "org.apache.kafka" % "kafka-clients" % "2.0.0"
> > libraryDependencies += "org.apache.kafka" % "kafka-streams" %  "2.0.0"
> > //"1.1.1"
> >
> > MESSAGES:
> > [warn] Detected merged artifact: [FAILED ]
> > javax.ws.rs#javax.ws.rs-api;2.1!javax.ws.rs-api.${packaging.type}:  (0ms).
> > [warn]  local: tried
> > [warn]   C:\Users\inter\.ivy2\local\javax.ws.rs\javax.ws.rs-api\2.1\${packaging.type}s\javax.ws.rs-api.${packaging.type}
> > [warn]  public: tried
> > [warn]   https://repo1.maven.org/maven2/javax/ws/rs/javax.ws.rs-api/2.1/javax.ws.rs-api-2.1.${packaging.type}
> > [warn]  local-preloaded-ivy: tried
> > [warn]   C:\Users\inter\.sbt\preloaded\javax.ws.rs\javax.ws.rs-api\2.1\${packaging.type}s\javax.ws.rs-api.${packaging.type}
> > [warn]  local-preloaded: tried
> > [warn]   file:/C:/Users/inter/.sbt/preloaded/javax/ws/rs/javax.ws.rs-api/2.1/javax.ws.rs-api-2.1.${packaging.type}
> > [warn] ::
> > [warn] ::  FAILED DOWNLOADS::
> > [warn] :: ^ see resolution messages for details  ^ ::
> > [warn] ::
> > [warn] :: javax.ws.rs#javax.ws.rs-api;2.1!javax.ws.rs-api.${packaging.type}
> > [warn] ::
> >
> > Thanks,
> > Celso
> >
>
>
>
> --
> -- Guozhang
>


kafka-run-class.sh kafka.tools.GetOffsetShell with SSL

2018-08-30 Thread HG
Hi,

I am using SSL for broker communications.
Now I want to connect to port 9093 with GetOffsetShell.
How can I have the class use the SSL certificates?
Are there any environment variables?
Can I use a properties file?

Regards Hans
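
Older releases of GetOffsetShell had no SSL support at all, so whether this
works depends on your Kafka version. For Kafka command-line tools that do
accept a client properties file (via options such as --command-config or
--consumer.config, availability varying by tool and version), a typical SSL
client config looks like this, with paths and passwords as placeholders:

security.protocol=SSL
ssl.truststore.location=/path/to/client.truststore.jks
ssl.truststore.password=changeit
# Only needed when brokers require client authentication (two-way SSL):
ssl.keystore.location=/path/to/client.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit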


Re: [DISCUSS] KIP-369 Alternative Partitioner to Support "Always Round-Robin" Selection

2018-08-30 Thread Bill Bejeck
Hi,

NOTE: I sent this earlier, but that message just went to the dev list.  I'm
including both users and dev now.

Thanks for the KIP.

Have you considered using the overloaded ProducerRecord constructor where
you can specify the partition?   I mention this as an option as I
encountered the same issue on a previous project and that is how we handled
round-robin distribution with non-null keys.

Would that suit your needs?

Thanks,
Bill
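
For reference, a minimal sketch of this approach using the ProducerRecord
constructor that takes an explicit partition (topic name and round-robin
counter are illustrative; producer is assumed to be an existing
KafkaProducer<String, String>):

import java.util.concurrent.atomic.AtomicInteger;
import org.apache.kafka.clients.producer.ProducerRecord;

AtomicInteger counter = new AtomicInteger(0);
int numPartitions = producer.partitionsFor("my-topic").size();

// An explicit partition overrides whatever the configured partitioner would
// pick, so the non-null key no longer influences placement. floorMod keeps
// the result non-negative even if the counter overflows.
int partition = Math.floorMod(counter.getAndIncrement(), numPartitions);
producer.send(new ProducerRecord<>("my-topic", partition, "some-key", "payload"));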

On Thu, Aug 30, 2018 at 11:58 AM Harsha  wrote:

> Hi,
>   Thanks for the KIP. I am trying to understand the intent of the
> KIP.  Is there a reason the use case you specified can't be achieved by
> implementing the Partitioner interface here?
> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/producer/Partitioner.java#L28
> Configure your custom partitioner in your producer clients.
>
> Thanks,
> Harsha
>
> On Thu, Aug 30, 2018, at 1:45 AM, M. Manna wrote:
> > Hello,
> >
> > I opened a very simple KIP and there exists a JIRA for it.
> >
> > I would be grateful if any comments are available for action.
> >
> > Regards,
>


Re: Fail to resolve kafka-streams 2.0.0

2018-08-30 Thread Guozhang Wang
I saw the library in maven central:
https://mvnrepository.com/artifact/javax.ws.rs/javax.ws.rs-api/2.1

Your maven repo seems also have this:

https://repo1.maven.org/maven2/javax/ws/rs/javax.ws.rs-api/2.1/


Maybe it is your {packaging.type} variable is not defined?

Guozhang


On Thu, Aug 30, 2018 at 9:52 AM, Celso Axelrud  wrote:

> Hi,
> I get the following message when I try to use kafka-streams 2.0.0 (failing
> to download javax.ws.rs-api) in sbt.
> There is no problem when using kafka-streams 1.1.1.
> I tried to use Scala versions 2.11 and 2.12 but I got the same results.
> I am using Windows 10.
>
> SBT:
> name := "KafkaProj1"
> version := "0.1"
> scalaVersion := "2.12.6"  //scalaVersion := "2.11.12"
> libraryDependencies += "org.apache.kafka" %% "kafka" % "2.0.0"
> libraryDependencies += "org.apache.kafka" % "kafka-clients" % "2.0.0"
> libraryDependencies += "org.apache.kafka" % "kafka-streams" %  "2.0.0"
> //"1.1.1"
>
> MESSAGES:
> [warn] Detected merged artifact: [FAILED ]
> javax.ws.rs#javax.ws.rs-api;2.1!javax.ws.rs-api.${packaging.type}:  (0ms).
> [warn]  local: tried
> [warn]   C:\Users\inter\.ivy2\local\javax.ws.rs\javax.ws.rs-api\2.1\${packaging.type}s\javax.ws.rs-api.${packaging.type}
> [warn]  public: tried
> [warn]   https://repo1.maven.org/maven2/javax/ws/rs/javax.ws.rs-api/2.1/javax.ws.rs-api-2.1.${packaging.type}
> [warn]  local-preloaded-ivy: tried
> [warn]   C:\Users\inter\.sbt\preloaded\javax.ws.rs\javax.ws.rs-api\2.1\${packaging.type}s\javax.ws.rs-api.${packaging.type}
> [warn]  local-preloaded: tried
> [warn]   file:/C:/Users/inter/.sbt/preloaded/javax/ws/rs/javax.ws.rs-api/2.1/javax.ws.rs-api-2.1.${packaging.type}
> [warn] ::
> [warn] ::  FAILED DOWNLOADS::
> [warn] :: ^ see resolution messages for details  ^ ::
> [warn] ::
> [warn] :: javax.ws.rs#javax.ws.rs-api;2.1!javax.ws.rs-api.${packaging.type}
> [warn] ::
>
> Thanks,
> Celso
>



-- 
-- Guozhang


Fail to resolve kafka-streams 2.0.0

2018-08-30 Thread Celso Axelrud
Hi,
I get the following message when I try to use kafka-streams 2.0.0 (failing
to download javax.ws.rs-api) in sbt.
There is no problem when using kafka-streams 1.1.1.
I tried to use Scala versions 2.11 and 2.12 but I got the same results.
I am using Windows 10.

SBT:
name := "KafkaProj1"
version := "0.1"
scalaVersion := "2.12.6"  //scalaVersion := "2.11.12"
libraryDependencies += "org.apache.kafka" %% "kafka" % "2.0.0"
libraryDependencies += "org.apache.kafka" % "kafka-clients" % "2.0.0"
libraryDependencies += "org.apache.kafka" % "kafka-streams" %  "2.0.0"
//"1.1.1"

MESSAGES:
[warn] Detected merged artifact: [FAILED ]
javax.ws.rs#javax.ws.rs-api;2.1!javax.ws.rs-api.${packaging.type}:  (0ms).
[warn]  local: tried
[warn]   C:\Users\inter\.ivy2\local\javax.ws.rs\javax.ws.rs-api\2.1\${packaging.type}s\javax.ws.rs-api.${packaging.type}
[warn]  public: tried
[warn]   https://repo1.maven.org/maven2/javax/ws/rs/javax.ws.rs-api/2.1/javax.ws.rs-api-2.1.${packaging.type}
[warn]  local-preloaded-ivy: tried
[warn]   C:\Users\inter\.sbt\preloaded\javax.ws.rs\javax.ws.rs-api\2.1\${packaging.type}s\javax.ws.rs-api.${packaging.type}
[warn]  local-preloaded: tried
[warn]   file:/C:/Users/inter/.sbt/preloaded/javax/ws/rs/javax.ws.rs-api/2.1/javax.ws.rs-api-2.1.${packaging.type}
[warn] ::
[warn] ::  FAILED DOWNLOADS::
[warn] :: ^ see resolution messages for details  ^ ::
[warn] ::
[warn] :: javax.ws.rs#javax.ws.rs-api;2.1!javax.ws.rs-api.${packaging.type}
[warn] ::

Thanks,
Celso


Re: [DISCUSS] KIP-369 Alternative Partitioner to Support "Always Round-Robin" Selection

2018-08-30 Thread Harsha
Hi,
  Thanks for the KIP. I am trying to understand the intent of the KIP. Is
there a reason the use case you specified can't be achieved by implementing
the Partitioner interface here?
https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/producer/Partitioner.java#L28
Configure your custom partitioner in your producer clients.

Thanks,
Harsha
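
For reference, a minimal sketch of such a partitioner, cycling over a topic's
partitions and ignoring the key entirely (the class name is hypothetical; this
is roughly the behaviour the KIP proposes to add to the codebase, wired in via
the standard partitioner.class producer config):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;

public class AlwaysRoundRobinPartitioner implements Partitioner {
    private final Map<String, AtomicInteger> counters = new ConcurrentHashMap<>();

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionsForTopic(topic).size();
        AtomicInteger counter = counters.computeIfAbsent(topic, t -> new AtomicInteger(0));
        // The key is deliberately ignored; every record advances the counter.
        // floorMod keeps the result non-negative if the counter overflows.
        return Math.floorMod(counter.getAndIncrement(), numPartitions);
    }

    @Override
    public void close() {}

    @Override
    public void configure(Map<String, ?> configs) {}
}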

On Thu, Aug 30, 2018, at 1:45 AM, M. Manna wrote:
> Hello,
> 
> I opened a very simple KIP and there exists a JIRA for it.
> 
> I would be grateful if any comments are available for action.
> 
> Regards,


Re: Kafka consumer : Group coordinator lookup failed. The coordinator is not available

2018-08-30 Thread M. Manna
What poll timeout are you passing to the consumer's poll() method?



On Thu, 30 Aug 2018, 16:23 Cristian Petroaca, 
wrote:

> Ok, so I mixed things up a little.
> I started with the kafka Server being configured to auto create topics.
> That gave the error.
> But turning the auto create off and creating the topic with AdminUtils
> does not show the error and the consumer actually polls for messages.
> I did not modify the “default.replication.factor” for auto created topics
> and that has as default 1. So I’m not sure why I would see the error in the
> first place?
>
> Even though I don’t see the error anymore and my consumer polls for
> messages, it does not receive any messages. I am waiting a reasonable
> amount of time (1min) after the producer created the messages.
> An independent console consumer connected to the same broker and topic
> does receive them.
> My consumer config does not seem exotic enough to create such a situation.
> Any reason for not receiving messages?
>
> Thanks
>
> On 30/08/2018, 11:25, "Cristian Petroaca" 
> wrote:
>
> Yes.
> In my programmatic env I first create it with:
> AdminUtils.createTopic(zkUtils, topic, 1, 1, new Properties(),
> RackAwareMode.Enforced$.MODULE$);
> So partitions = 1 and replication = 1.
>
> The same for the remote broker, I created the topic –partitions 1
> –replication-factor 1
>
> Are there any other reasons for that error?
>
> On 29/08/2018, 18:06, "M. Manna"  wrote:
>
> Does the topic exist in both your programmatic broker and remote
> broker?
>
>
> Also, are the topic settings same for partitions and replication
> factor?
> GROUP_COORDINATOR_NOT_AVAILABLE is enforced as of 0.11.x if the
> auto-created topic partition/replication-factor setup doesn't match
> with server's config. So you might want to check all these.
>
> Regards,
>
> On Wed, 29 Aug 2018 at 15:55, Cristian Petroaca
>  wrote:
>
> > Tried it, same problem with 9092.
> > By the way, the same consumer works with a remote 1.0.1 Kafka
> broker with
> > the same config.
> > There doesn’t seem to be any networking issues with the embedded
> one since
> > the consumer successfully sends Find Coordinator messages to it
> and the
> > broker responds with Coordinator not found.
> >
> >
> > On 29/08/2018, 17:46, "M. Manna"  wrote:
> >
> > So have you tried binding it to 9092 rather than randomising
> it, and
> > see if
> > that makes any difference?
> >
> > On Wed, 29 Aug 2018 at 15:41, Cristian Petroaca
> >  wrote:
> >
> > > Port = 0 means Kafka will start listening on a random port
> which I
> > need.
> > > I tried it with 5000 but I get the same result.
> > >
> > >
> > > On 29/08/2018, 16:46, "M. Manna" 
> wrote:
> > >
> > > Can you extend the auto.commit.interval.ms to 5000 ?
> and retry?
> > Also,
> > > why
> > > is your port set to 0?
> > >
> > > Regards,
> > >
> > > On Wed, 29 Aug 2018 at 14:25, Cristian Petroaca
> > >  wrote:
> > >
> > > > Hi,
> > > >
> > > > I’m using the Kafka lib with version 2.11_1.0.1.
> > > > I use the KafkaServer.scala class to
> programmatically create a
> > Kafka
> > > > instance and connect it to a programmatically
> created Zookeeper
> > > instance.
> > > > It has the following properties:
> > > > host.name", "127.0.0.1"
> > > > "port", "0"
> > > > "zookeeper.connect", "127.0.0.1:" + zooKeeperPort
> > > > "broker.id", "1"
> > > > auto.create.topics.enable", "true"
> > > > "delete.topic.enable", "true"
> > > >
> > > > I then create a new Kafka Consumer with the following
> > properties:
> > > > bootstrap.servers", “127.0.0.1” + kafkaPort
> > > > "auto.commit.interval.ms", "10"
> > > > “client_id”, “”
> > > > “enable.auto.commit”, “true”
> > > > “auto.commit.interval.ms”, “10”
> > > >
> > > > My problem is that after I subscribe the consumer to
> a custom
> > topic,
> > > the
> > > > consumer just blocks in the .poll() method and I see
> a lot of
> > > messages like:
> > > > “Group coordinator lookup failed: The coordinator is
> not
> > available.”
> > > >
> > > > I read on another forum that a possible problem is
> that the
> > > > _consumer_offse

Re: Kafka consumer : Group coordinator lookup failed. The coordinator is not available

2018-08-30 Thread Cristian Petroaca
Ok, so I mixed things up a little.
I started with the Kafka server configured to auto-create topics. That gave
the error.
But turning auto-create off and creating the topic with AdminUtils does not
show the error, and the consumer actually polls for messages.
I did not modify “default.replication.factor” for auto-created topics, and
that defaults to 1. So I’m not sure why I would see the error in the first
place.

Even though I don’t see the error anymore and my consumer polls for messages,
it does not receive any. I am waiting a reasonable amount of time (1 min)
after the producer created the messages.
An independent console consumer connected to the same broker and topic does
receive them.
My consumer config does not seem exotic enough to create such a situation. Any
reason for not receiving messages?

Thanks

On 30/08/2018, 11:25, "Cristian Petroaca"  wrote:

Yes.
In my programmatic env I first create it with:
AdminUtils.createTopic(zkUtils, topic, 1, 1, new Properties(), 
RackAwareMode.Enforced$.MODULE$);
So partitions = 1 and replication = 1.

The same for the remote broker, I created the topic –partitions 1 
–replication-factor 1

Are there any other reasons for that error?

On 29/08/2018, 18:06, "M. Manna"  wrote:

Does the topic exist in both your programmatic broker and remote broker?


Also, are the topic settings same for partitions and replication factor?
GROUP_COORDINATOR_NOT_AVAILABLE is enforced as of 0.11.x if the
auto-created topic partition/replication-factor setup doesn't match
with server's config. So you might want to check all these.

Regards,

On Wed, 29 Aug 2018 at 15:55, Cristian Petroaca
 wrote:

> Tried it, same problem with 9092.
> By the way, the same consumer works with a remote 1.0.1 Kafka broker 
with
> the same config.
> There doesn’t seem to be any networking issues with the embedded one 
since
> the consumer successfully sends Find Coordinator messages to it and 
the
> broker responds with Coordinator not found.
>
>
> On 29/08/2018, 17:46, "M. Manna"  wrote:
>
> So have you tried binding it to 9092 rather than randomising it, 
and
> see if
> that makes any difference?
>
> On Wed, 29 Aug 2018 at 15:41, Cristian Petroaca
>  wrote:
>
> > Port = 0 means Kafka will start listening on a random port 
which I
> need.
> > I tried it with 5000 but I get the same result.
> >
> >
> > On 29/08/2018, 16:46, "M. Manna"  wrote:
> >
> > Can you extend the auto.commit.interval.ms to 5000 ? and 
retry?
> Also,
> > why
> > is your port set to 0?
> >
> > Regards,
> >
> > On Wed, 29 Aug 2018 at 14:25, Cristian Petroaca
> >  wrote:
> >
> > > Hi,
> > >
> > > I’m using the Kafka lib with version 2.11_1.0.1.
> > > I use the KafkaServer.scala class to programmatically 
create a
> Kafka
> > > instance and connect it to a programmatically created 
Zookeeper
> > instance.
> > > It has the following properties:
> > > host.name", "127.0.0.1"
> > > "port", "0"
> > > "zookeeper.connect", "127.0.0.1:" + zooKeeperPort
> > > "broker.id", "1"
> > > auto.create.topics.enable", "true"
> > > "delete.topic.enable", "true"
> > >
> > > I then create a new Kafka Consumer with the following
> properties:
> > > bootstrap.servers", “127.0.0.1” + kafkaPort
> > > "auto.commit.interval.ms", "10"
> > > “client_id”, “”
> > > “enable.auto.commit”, “true”
> > > “auto.commit.interval.ms”, “10”
> > >
> > > My problem is that after I subscribe the consumer to a 
custom
> topic,
> > the
> > > consumer just blocks in the .poll() method and I see a 
lot of
> > messages like:
> > > “Group coordinator lookup failed: The coordinator is not
> available.”
> > >
> > > I read on another forum that a possible problem is that 
the
> > > _consumer_offsets topic doesn’t exist but that’s not the 
case
> for me.
> > >
> > > Can you suggest a possible root cause?
> > >
> > > Thanks,
> > > Cristian
> > >
> >
> 

Re: Exposing Kafka on WAN

2018-08-30 Thread Andrew Otto
The trouble is that the producer and consumer clients need to discover the
broker hostnames and address the individual brokers directly.  There is an
advertised.listeners setting that will allow you to tell clients to connect
to external proxy hostnames instead of your internal ones, but those
proxies will need to know how to map directly from an advertised hostname
to an internal kafka broker hostname.  You’ll need some logic in your proxy
to do that routing.

P.S.  I’ve not actually set this up before, but this is my understanding :)
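
A sketch of the broker-side configuration that description implies, assuming a
0.10.2+ broker (where named listeners are available) and a proxy that
terminates SSL and forwards to this specific broker rather than to the cluster
at large; listener names, hosts, and ports are placeholders:

# server.properties for broker 1; the proxy at kafka1.example.com:19093
# must route to this broker specifically.
listeners=INTERNAL://broker1.internal:9092,EXTERNAL://0.0.0.0:9093
advertised.listeners=INTERNAL://broker1.internal:9092,EXTERNAL://kafka1.example.com:19093
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
inter.broker.listener.name=INTERNAL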



On Thu, Aug 30, 2018 at 7:16 AM Dan Markhasin  wrote:

> Usually for such a use case you'd have a physical load balancer box (F5,
> etc.) in front of Kafka that would handle the SSL termination, but it
> should be possible with NGINX as well:
>
>
> https://docs.nginx.com/nginx/admin-guide/security-controls/terminating-ssl-tcp/
>
> On Fri, 24 Aug 2018 at 18:35, Jack S  wrote:
>
> > Thanks Ryanne.
> >
> > That's one of the options we had considered. I was hoping to keep
> solution
> > simple and efficient. With HTTP proxy, we would have to worry about
> > configurations, scalability, and operation. This is probably true with
> > proxy solution as well, but at least my thinking was that deploying proxy
> > would be more standard with less management effort on our side. Also, we
> > are very familiar with Kafka usual producer/consumer usage, operation,
> etc.
> > and could re-use much of our producer and consumer infrastructure that we
> > currently use internally.
> >
> > Having said that, this is where I was hoping to hear and get feedback
> from
> > community - what people have deployed with such use case and any
> learnings
> > and suggestions.
> >
> > On Fri, Aug 24, 2018 at 7:42 AM Ryanne Dolan 
> > wrote:
> >
> > > Can you use a Kafka HTTP proxy instead of using the Kafka protocol
> > > directly?
> > >
> > > Ryanne
> > >
> > > On Thu, Aug 23, 2018, 7:29 PM Jack S  wrote:
> > >
> > > > Hello,
> > > >
> > > > We have a requirement for opening Kafka on WAN where external
> producers
> > > and
> > > > consumers need to be able to talk to Kafka. I was able to get
> Zookeeper
> > > and
> > > > Kafka working with two way SSL and SASL for authentication and ACL
> for
> > > > authorization.
> > > >
> > > > However, my concern with this approach was opening up Kafka brokers
> > > > directly on WAN and also doing SSL termination. Is there a proxy
> > > solution,
> > > > where proxies live in front of Kafka brokers, so Kafka brokers are
> > still
> > > > hidden and proxies take care of SSL? Has anyone in the community have
> > > > similar use case with Kafka, which is deployed in production? Would
> > love
> > > to
> > > > find out your experience, feedback, or recommendation for this use
> > case.
> > > >
> > > > Thanks in advance.
> > > >
> > > > PS: We are using AWS.
> > > >
> > >
> >
>


Re: Exposing Kafka on WAN

2018-08-30 Thread Dan Markhasin
Usually for such a use case you'd have a physical load balancer box (F5,
etc.) in front of Kafka that would handle the SSL termination, but it
should be possible with NGINX as well:

https://docs.nginx.com/nginx/admin-guide/security-controls/terminating-ssl-tcp/

On Fri, 24 Aug 2018 at 18:35, Jack S  wrote:

> Thanks Ryanne.
>
> That's one of the options we had considered. I was hoping to keep solution
> simple and efficient. With HTTP proxy, we would have to worry about
> configurations, scalability, and operation. This is probably true with
> proxy solution as well, but at least my thinking was that deploying proxy
> would be more standard with less management effort on our side. Also, we
> are very familiar with Kafka usual producer/consumer usage, operation, etc.
> and could re-use much of our producer and consumer infrastructure that we
> currently use internally.
>
> Having said that, this is where I was hoping to hear and get feedback from
> community - what people have deployed with such use case and any learnings
> and suggestions.
>
> On Fri, Aug 24, 2018 at 7:42 AM Ryanne Dolan 
> wrote:
>
> > Can you use a Kafka HTTP proxy instead of using the Kafka protocol
> > directly?
> >
> > Ryanne
> >
> > On Thu, Aug 23, 2018, 7:29 PM Jack S  wrote:
> >
> > > Hello,
> > >
> > > We have a requirement for opening Kafka on WAN where external producers
> > and
> > > consumers need to be able to talk to Kafka. I was able to get Zookeeper
> > and
> > > Kafka working with two way SSL and SASL for authentication and ACL for
> > > authorization.
> > >
> > > However, my concern with this approach was opening up Kafka brokers
> > > directly on WAN and also doing SSL termination. Is there a proxy
> > solution,
> > > where proxies live in front of Kafka brokers, so Kafka brokers are
> still
> > > hidden and proxies take care of SSL? Has anyone in the community have
> > > similar use case with Kafka, which is deployed in production? Would
> love
> > to
> > > find out your experience, feedback, or recommendation for this use
> case.
> > >
> > > Thanks in advance.
> > >
> > > PS: We are using AWS.
> > >
> >
>


[DISCUSS] KIP-369 Alternative Partitioner to Support "Always Round-Robin" Selection

2018-08-30 Thread M. Manna
Hello,

I opened a very simple KIP and there exists a JIRA for it.

I would be grateful if any comments are available for action.

Regards,


RE: Kafka consumer : Group coordinator lookup failed. The coordinator is not available

2018-08-30 Thread 赖剑清
Hi,

I'm not sure whether your broker and consumer run on different servers.
Maybe you can try changing the broker's host.name and the consumer's
bootstrap.servers to the broker's real IP address instead of "127.0.0.1"?
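
That is, something along these lines, with <broker-ip> standing in for the
machine's actual address:

# broker (server.properties)
host.name=<broker-ip>

# consumer
bootstrap.servers=<broker-ip>:9092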

>-Original Message-
>From: Cristian Petroaca [mailto:cpetro...@fitbit.com.INVALID]
>Sent: Wednesday, August 29, 2018 9:25 PM
>To: users@kafka.apache.org
>Subject: Kafka consumer : Group coordinator lookup failed. The coordinator is
>not available
>
>Hi,
>
>I’m using the Kafka lib with version 2.11_1.0.1.
>I use the KafkaServer.scala class to programmatically create a Kafka instance
>and connect it to a programmatically created Zookeeper instance. It has the
>following properties:
>host.name", "127.0.0.1"
>"port", "0"
>"zookeeper.connect", "127.0.0.1:" + zooKeeperPort "broker.id", "1"
>auto.create.topics.enable", "true"
>"delete.topic.enable", "true"
>
>I then create a new Kafka Consumer with the following properties:
>bootstrap.servers", “127.0.0.1” + kafkaPort "auto.commit.interval.ms", "10"
>“client_id”, “”
>“enable.auto.commit”, “true”
>“auto.commit.interval.ms”, “10”
>
>My problem is that after I subscribe the consumer to a custom topic, the
>consumer just blocks in the .poll() method and I see a lot of messages like:
>“Group coordinator lookup failed: The coordinator is not available.”
>
>I read on another forum that a possible problem is that the
>_consumer_offsets topic doesn’t exist but that’s not the case for me.
>
>Can you suggest a possible root cause?
>
>Thanks,
>Cristian


Re: Kafka consumer : Group coordinator lookup failed. The coordinator is not available

2018-08-30 Thread Cristian Petroaca
Yes.
In my programmatic env I first create it with:
AdminUtils.createTopic(zkUtils, topic, 1, 1, new Properties(), 
RackAwareMode.Enforced$.MODULE$);
So partitions = 1 and replication = 1.

The same for the remote broker, I created the topic –partitions 1 
–replication-factor 1

Are there any other reasons for that error?

On 29/08/2018, 18:06, "M. Manna"  wrote:

Does the topic exist in both your programmatic broker and remote broker?


Also, are the topic settings same for partitions and replication factor?
GROUP_COORDINATOR_NOT_AVAILABLE is enforced as of 0.11.x if the
auto-created topic partition/replication-factor setup doesn't match
with server's config. So you might want to check all these.

Regards,

On Wed, 29 Aug 2018 at 15:55, Cristian Petroaca
 wrote:

> Tried it, same problem with 9092.
> By the way, the same consumer works with a remote 1.0.1 Kafka broker with
> the same config.
> There doesn’t seem to be any networking issues with the embedded one since
> the consumer successfully sends Find Coordinator messages to it and the
> broker responds with Coordinator not found.
>
>
> On 29/08/2018, 17:46, "M. Manna"  wrote:
>
> So have you tried binding it to 9092 rather than randomising it, and
> see if
> that makes any difference?
>
> On Wed, 29 Aug 2018 at 15:41, Cristian Petroaca
>  wrote:
>
> > Port = 0 means Kafka will start listening on a random port which I
> need.
> > I tried it with 5000 but I get the same result.
> >
> >
> > On 29/08/2018, 16:46, "M. Manna"  wrote:
> >
> > Can you extend the auto.commit.interval.ms to 5000 ? and retry?
> Also,
> > why
> > is your port set to 0?
> >
> > Regards,
> >
> > On Wed, 29 Aug 2018 at 14:25, Cristian Petroaca
> >  wrote:
> >
> > > Hi,
> > >
> > > I’m using the Kafka lib with version 2.11_1.0.1.
> > > I use the KafkaServer.scala class to programmatically create a
> Kafka
> > > instance and connect it to a programmatically created 
Zookeeper
> > instance.
> > > It has the following properties:
> > > host.name", "127.0.0.1"
> > > "port", "0"
> > > "zookeeper.connect", "127.0.0.1:" + zooKeeperPort
> > > "broker.id", "1"
> > > auto.create.topics.enable", "true"
> > > "delete.topic.enable", "true"
> > >
> > > I then create a new Kafka Consumer with the following
> properties:
> > > bootstrap.servers", “127.0.0.1” + kafkaPort
> > > "auto.commit.interval.ms", "10"
> > > “client_id”, “”
> > > “enable.auto.commit”, “true”
> > > “auto.commit.interval.ms”, “10”
> > >
> > > My problem is that after I subscribe the consumer to a custom
> topic,
> > the
> > > consumer just blocks in the .poll() method and I see a lot of
> > messages like:
> > > “Group coordinator lookup failed: The coordinator is not
> available.”
> > >
> > > I read on another forum that a possible problem is that the
> > > _consumer_offsets topic doesn’t exist but that’s not the case
> for me.
> > >
> > > Can you suggest a possible root cause?
> > >
> > > Thanks,
> > > Cristian
> > >
> >
> >
> >
>
>
>