Re: [ANNOUNCE] New committer: Rajini Sivaram

2017-04-24 Thread Manikumar
Congrats! Well deserved.

On Tue, Apr 25, 2017 at 5:54 AM, Matthias J. Sax 
wrote:

> Congrats!
>
> On 4/24/17 4:59 PM, Apurva Mehta wrote:
> > Congratulations Rajini!
> >
> > On Mon, Apr 24, 2017 at 2:06 PM, Gwen Shapira  wrote:
> >
> >> The PMC for Apache Kafka has invited Rajini Sivaram as a committer and
> we
> >> are pleased to announce that she has accepted!
> >>
> >> Rajini contributed 83 patches, 8 KIPs (all security and quota
> >> improvements) and a significant number of reviews. She is also on the
> >> conference committee for Kafka Summit, where she helped select content
> >> for our community event. Through her contributions she's shown good
> >> judgement, good coding skills, willingness to work with the community on
> >> finding the best
> >> solutions and very consistent follow through on her work.
> >>
> >> Thank you for your contributions, Rajini! Looking forward to many more
> :)
> >>
> >> Gwen, for the Apache Kafka PMC
> >>
> >
>
>


Re: Recording - Storm & Kafka Meetup on April 20th 2017

2017-04-24 Thread Hugo Da Cruz Louro
Hi,

In the comments section of the Meetup page you can find links to all the
slides.

https://www.meetup.com/Apache-Storm-Apache-Kafka/events/238975416/

Thank you for your interest.
Hugo

On Apr 24, 2017, at 10:19 AM, Aditya Desai 
> wrote:

Hi Harsha and others

Thanks a lot for this meetup. I have signed up and will surely
attend it in June. I am really new to Apache Storm and Apache Kafka. I am
still learning them, and these meetups will definitely help people like me.
If any of you have some good resources/tutorials/documents/GitHub
repos to learn more about Storm and Kafka, please do share. The video of
this meetup is really inspiring and informative.

Regards

On Mon, Apr 24, 2017 at 8:28 AM, Harsha Chintalapani 
>
wrote:

Hi Aditya,
Thanks for your interest. We are tentatively planning one for the
first week of June. If you haven't already, please register here
https://www.meetup.com/Apache-Storm-Apache-Kafka/

. I'll keep the Storm lists updated once we finalize the date & location.

Thanks,
Harsha

On Mon, Apr 24, 2017 at 7:02 AM Aditya Desai 
> wrote:

Hello Everyone

Can you please let us know when the next meetup is? It would be great if
we could have one in May.

Regards
Aditya Desai

On Mon, Apr 24, 2017 at 2:16 AM, Xin Wang 
> wrote:

How about publishing this to Storm site?

- Xin

2017-04-22 19:27 GMT+08:00 steve tueno 
>:

great

Thanks



Cordialement,

TUENO FOTSO Steve Jeffrey
Ingénieur de conception
Génie Informatique
+237 676 57 17 28 <+237%206%2076%2057%2017%2028>
+237 697 86 36 38 <+237%206%2097%2086%2036%2038>

+33 6 23 71 91 52 <+33%206%2023%2071%2091%2052>


https://jobs.jumia.cm/fr/candidats/CVTF1486563.html


__

https://play.google.com/store/apps/details?id=com.polytech.
remotecomputer

https://play.google.com/store/apps/details?id=com.polytech.
internetaccesschecker

*http://www.traveler.cm/
*
http://remotecomputer.traveler.cm/

https://play.google.com/store/apps/details?id=com.polytech.
androidsmssender


https://github.com/stuenofotso/notre-jargon

https://play.google.com/store/apps/details?id=com.polytech.
welovecameroon

https://play.google.com/store/apps/details?id=com.polytech.welovefrance

Re: [ANNOUNCE] New committer: Rajini Sivaram

2017-04-24 Thread Apurva Mehta
Congratulations Rajini!

On Mon, Apr 24, 2017 at 2:06 PM, Gwen Shapira  wrote:

> The PMC for Apache Kafka has invited Rajini Sivaram as a committer and we
> are pleased to announce that she has accepted!
>
> Rajini contributed 83 patches, 8 KIPs (all security and quota
> improvements) and a significant number of reviews. She is also on the
> conference committee for Kafka Summit, where she helped select content
> for our community event. Through her contributions she's shown good
> judgement, good coding skills, willingness to work with the community on
> finding the best
> solutions and very consistent follow through on her work.
>
> Thank you for your contributions, Rajini! Looking forward to many more :)
>
> Gwen, for the Apache Kafka PMC
>


Re: [ANNOUNCE] New committer: Rajini Sivaram

2017-04-24 Thread Becket Qin
Congratulations! Rajini! Great work!

On Mon, Apr 24, 2017 at 3:33 PM, Jason Gustafson  wrote:

> Woohoo! Great work, Rajini!
>
> On Mon, Apr 24, 2017 at 3:27 PM, Jun Rao  wrote:
>
> > Congratulations, Rajini ! Thanks for all your contributions.
> >
> > Jun
> >
> > On Mon, Apr 24, 2017 at 2:06 PM, Gwen Shapira  wrote:
> >
> > > The PMC for Apache Kafka has invited Rajini Sivaram as a committer and
> we
> > > are pleased to announce that she has accepted!
> > >
> > > Rajini contributed 83 patches, 8 KIPs (all security and quota
> > > improvements) and a significant number of reviews. She is also on the
> > > conference committee for Kafka Summit, where she helped select content
> > > for our community event. Through her contributions she's shown good
> > > judgement, good coding skills, willingness to work with the community
> on
> > > finding the best
> > > solutions and very consistent follow through on her work.
> > >
> > > Thank you for your contributions, Rajini! Looking forward to many more
> :)
> > >
> > > Gwen, for the Apache Kafka PMC
> > >
> >
>


Re: [ANNOUNCE] New committer: Rajini Sivaram

2017-04-24 Thread Jason Gustafson
Woohoo! Great work, Rajini!

On Mon, Apr 24, 2017 at 3:27 PM, Jun Rao  wrote:

> Congratulations, Rajini ! Thanks for all your contributions.
>
> Jun
>
> On Mon, Apr 24, 2017 at 2:06 PM, Gwen Shapira  wrote:
>
> > The PMC for Apache Kafka has invited Rajini Sivaram as a committer and we
> > are pleased to announce that she has accepted!
> >
> > Rajini contributed 83 patches, 8 KIPs (all security and quota
> > improvements) and a significant number of reviews. She is also on the
> > conference committee for Kafka Summit, where she helped select content
> > for our community event. Through her contributions she's shown good
> > judgement, good coding skills, willingness to work with the community on
> > finding the best
> > solutions and very consistent follow through on her work.
> >
> > Thank you for your contributions, Rajini! Looking forward to many more :)
> >
> > Gwen, for the Apache Kafka PMC
> >
>


Re: [ANNOUNCE] New committer: Rajini Sivaram

2017-04-24 Thread Jun Rao
Congratulations, Rajini ! Thanks for all your contributions.

Jun

On Mon, Apr 24, 2017 at 2:06 PM, Gwen Shapira  wrote:

> The PMC for Apache Kafka has invited Rajini Sivaram as a committer and we
> are pleased to announce that she has accepted!
>
> Rajini contributed 83 patches, 8 KIPs (all security and quota
> improvements) and a significant number of reviews. She is also on the
> conference committee for Kafka Summit, where she helped select content
> for our community event. Through her contributions she's shown good
> judgement, good coding skills, willingness to work with the community on
> finding the best
> solutions and very consistent follow through on her work.
>
> Thank you for your contributions, Rajini! Looking forward to many more :)
>
> Gwen, for the Apache Kafka PMC
>


Re: [ANNOUNCE] New committer: Rajini Sivaram

2017-04-24 Thread Michael Noll
Congratulations, Rajini!

On Mon, Apr 24, 2017 at 11:50 PM, Ismael Juma  wrote:

> Congrats Rajini! Well-deserved. :)
>
> Ismael
>
> On Mon, Apr 24, 2017 at 10:06 PM, Gwen Shapira  wrote:
>
> > The PMC for Apache Kafka has invited Rajini Sivaram as a committer and we
> > are pleased to announce that she has accepted!
> >
> > Rajini contributed 83 patches, 8 KIPs (all security and quota
> > improvements) and a significant number of reviews. She is also on the
> > conference committee for Kafka Summit, where she helped select content
> > for our community event. Through her contributions she's shown good
> > judgement, good coding skills, willingness to work with the community on
> > finding the best
> > solutions and very consistent follow through on her work.
> >
> > Thank you for your contributions, Rajini! Looking forward to many more :)
> >
> > Gwen, for the Apache Kafka PMC
> >
>


Re: [ANNOUNCE] New committer: Rajini Sivaram

2017-04-24 Thread Ismael Juma
Congrats Rajini! Well-deserved. :)

Ismael

On Mon, Apr 24, 2017 at 10:06 PM, Gwen Shapira  wrote:

> The PMC for Apache Kafka has invited Rajini Sivaram as a committer and we
> are pleased to announce that she has accepted!
>
> Rajini contributed 83 patches, 8 KIPs (all security and quota
> improvements) and a significant number of reviews. She is also on the
> conference committee for Kafka Summit, where she helped select content
> for our community event. Through her contributions she's shown good
> judgement, good coding skills, willingness to work with the community on
> finding the best
> solutions and very consistent follow through on her work.
>
> Thank you for your contributions, Rajini! Looking forward to many more :)
>
> Gwen, for the Apache Kafka PMC
>


Re: [ANNOUNCE] New committer: Rajini Sivaram

2017-04-24 Thread Sriram Subramanian
Congrats Rajini!

On Mon, Apr 24, 2017 at 2:21 PM, Onur Karaman 
wrote:

> Congrats!
>
> On Mon, Apr 24, 2017 at 2:20 PM, Guozhang Wang  wrote:
>
> > Congrats Rajini!
> >
> > Guozhang
> >
> > On Mon, Apr 24, 2017 at 2:08 PM, Vahid S Hashemian <
> > vahidhashem...@us.ibm.com> wrote:
> >
> > > Great news.
> > >
> > > Congrats Rajini!
> > >
> > > --Vahid
> > >
> > >
> > >
> > >
> > > From:   Gwen Shapira 
> > > To: d...@kafka.apache.org, Users ,
> > > priv...@kafka.apache.org
> > > Date:   04/24/2017 02:06 PM
> > > Subject:[ANNOUNCE] New committer: Rajini Sivaram
> > >
> > >
> > >
> > > The PMC for Apache Kafka has invited Rajini Sivaram as a committer and
> we
> > > are pleased to announce that she has accepted!
> > >
> > > Rajini contributed 83 patches, 8 KIPs (all security and quota
> > > improvements) and a significant number of reviews. She is also on the
> > > conference committee for Kafka Summit, where she helped select content
> > > for our community event. Through her contributions she's shown good
> > > judgement, good coding skills, willingness to work with the community
> on
> > > finding the best
> > > solutions and very consistent follow through on her work.
> > >
> > > Thank you for your contributions, Rajini! Looking forward to many more
> :)
> > >
> > > Gwen, for the Apache Kafka PMC
> > >
> > >
> > >
> > >
> > >
> >
> >
> > --
> > -- Guozhang
> >
>


Re: [ANNOUNCE] New committer: Rajini Sivaram

2017-04-24 Thread Onur Karaman
Congrats!

On Mon, Apr 24, 2017 at 2:20 PM, Guozhang Wang  wrote:

> Congrats Rajini!
>
> Guozhang
>
> On Mon, Apr 24, 2017 at 2:08 PM, Vahid S Hashemian <
> vahidhashem...@us.ibm.com> wrote:
>
> > Great news.
> >
> > Congrats Rajini!
> >
> > --Vahid
> >
> >
> >
> >
> > From:   Gwen Shapira 
> > To: d...@kafka.apache.org, Users ,
> > priv...@kafka.apache.org
> > Date:   04/24/2017 02:06 PM
> > Subject:[ANNOUNCE] New committer: Rajini Sivaram
> >
> >
> >
> > The PMC for Apache Kafka has invited Rajini Sivaram as a committer and we
> > are pleased to announce that she has accepted!
> >
> > Rajini contributed 83 patches, 8 KIPs (all security and quota
> > improvements) and a significant number of reviews. She is also on the
> > conference committee for Kafka Summit, where she helped select content
> > for our community event. Through her contributions she's shown good
> > judgement, good coding skills, willingness to work with the community on
> > finding the best
> > solutions and very consistent follow through on her work.
> >
> > Thank you for your contributions, Rajini! Looking forward to many more :)
> >
> > Gwen, for the Apache Kafka PMC
> >
> >
> >
> >
> >
>
>
> --
> -- Guozhang
>


Re: [ANNOUNCE] New committer: Rajini Sivaram

2017-04-24 Thread Guozhang Wang
Congrats Rajini!

Guozhang

On Mon, Apr 24, 2017 at 2:08 PM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> Great news.
>
> Congrats Rajini!
>
> --Vahid
>
>
>
>
> From:   Gwen Shapira 
> To: d...@kafka.apache.org, Users ,
> priv...@kafka.apache.org
> Date:   04/24/2017 02:06 PM
> Subject:[ANNOUNCE] New committer: Rajini Sivaram
>
>
>
> The PMC for Apache Kafka has invited Rajini Sivaram as a committer and we
> are pleased to announce that she has accepted!
>
> Rajini contributed 83 patches, 8 KIPs (all security and quota
> improvements) and a significant number of reviews. She is also on the
> conference committee for Kafka Summit, where she helped select content
> for our community event. Through her contributions she's shown good
> judgement, good coding skills, willingness to work with the community on
> finding the best
> solutions and very consistent follow through on her work.
>
> Thank you for your contributions, Rajini! Looking forward to many more :)
>
> Gwen, for the Apache Kafka PMC
>
>
>
>
>


-- 
-- Guozhang


Re: [ANNOUNCE] New committer: Rajini Sivaram

2017-04-24 Thread Jay Kreps
Congrats Rajini!

On Mon, Apr 24, 2017 at 2:06 PM, Gwen Shapira  wrote:

> The PMC for Apache Kafka has invited Rajini Sivaram as a committer and we
> are pleased to announce that she has accepted!
>
> Rajini contributed 83 patches, 8 KIPs (all security and quota
> improvements) and a significant number of reviews. She is also on the
> conference committee for Kafka Summit, where she helped select content
> for our community event. Through her contributions she's shown good
> judgement, good coding skills, willingness to work with the community on
> finding the best
> solutions and very consistent follow through on her work.
>
> Thank you for your contributions, Rajini! Looking forward to many more :)
>
> Gwen, for the Apache Kafka PMC
>


Re: [ANNOUNCE] New committer: Rajini Sivaram

2017-04-24 Thread Vahid S Hashemian
Great news.

Congrats Rajini!

--Vahid




From:   Gwen Shapira 
To: d...@kafka.apache.org, Users , 
priv...@kafka.apache.org
Date:   04/24/2017 02:06 PM
Subject:[ANNOUNCE] New committer: Rajini Sivaram



The PMC for Apache Kafka has invited Rajini Sivaram as a committer and we
are pleased to announce that she has accepted!

Rajini contributed 83 patches, 8 KIPs (all security and quota
improvements) and a significant number of reviews. She is also on the
conference committee for Kafka Summit, where she helped select content
for our community event. Through her contributions she's shown good
judgement, good coding skills, willingness to work with the community on
finding the best
solutions and very consistent follow through on her work.

Thank you for your contributions, Rajini! Looking forward to many more :)

Gwen, for the Apache Kafka PMC






[ANNOUNCE] New committer: Rajini Sivaram

2017-04-24 Thread Gwen Shapira
The PMC for Apache Kafka has invited Rajini Sivaram as a committer and we
are pleased to announce that she has accepted!

Rajini contributed 83 patches, 8 KIPs (all security and quota
improvements) and a significant number of reviews. She is also on the
conference committee for Kafka Summit, where she helped select content
for our community event. Through her contributions she's shown good
judgement, good coding skills, willingness to work with the community on
finding the best
solutions and very consistent follow through on her work.

Thank you for your contributions, Rajini! Looking forward to many more :)

Gwen, for the Apache Kafka PMC


Re: Recording - Storm & Kafka Meetup on April 20th 2017

2017-04-24 Thread Aditya Desai
Hi Harsha and others

Thanks a lot for this meetup. I have signed up and will surely
attend it in June. I am really new to Apache Storm and Apache Kafka. I am
still learning them, and these meetups will definitely help people like me.
If any of you have some good resources/tutorials/documents/GitHub
repos to learn more about Storm and Kafka, please do share. The video of
this meetup is really inspiring and informative.

Regards

On Mon, Apr 24, 2017 at 8:28 AM, Harsha Chintalapani 
wrote:

> Hi Aditya,
>  Thanks for your interest. We are tentatively planning one for the
> first week of June. If you haven't already, please register here
> https://www.meetup.com/Apache-Storm-Apache-Kafka/
> 
> . I'll keep the Storm lists updated once we finalize the date & location.
>
> Thanks,
> Harsha
>
> On Mon, Apr 24, 2017 at 7:02 AM Aditya Desai  wrote:
>
>> Hello Everyone
>>
>> Can you please let us know when the next meetup is? It would be great if
>> we could have one in May.
>>
>> Regards
>> Aditya Desai
>>
>> On Mon, Apr 24, 2017 at 2:16 AM, Xin Wang  wrote:
>>
>>> How about publishing this to Storm site?
>>>
>>>  - Xin
>>>
>>> 2017-04-22 19:27 GMT+08:00 steve tueno :
>>>
 great

 Thanks



 Cordialement,

 TUENO FOTSO Steve Jeffrey
 Ingénieur de conception
 Génie Informatique
 +237 676 57 17 28 <+237%206%2076%2057%2017%2028>
 +237 697 86 36 38 <+237%206%2097%2086%2036%2038>

 +33 6 23 71 91 52 <+33%206%2023%2071%2091%2052>


 https://jobs.jumia.cm/fr/candidats/CVTF1486563.html
 
 
 __

 https://play.google.com/store/apps/details?id=com.polytech.
 remotecomputer
 
 https://play.google.com/store/apps/details?id=com.polytech.
 internetaccesschecker
 
 *http://www.traveler.cm/
 *
 http://remotecomputer.traveler.cm/
 
 https://play.google.com/store/apps/details?id=com.polytech.
 androidsmssender
 

 https://github.com/stuenofotso/notre-jargon
 
 https://play.google.com/store/apps/details?id=com.polytech.
 welovecameroon
 
 https://play.google.com/store/apps/details?id=com.polytech.welovefrance
 

Re: Consumer with another group.id conflicts with streams()

2017-04-24 Thread Matthias J. Sax
Hi,

hard to diagnose. The new consumer should not affect the Streams app,
though I am wondering why you need it.

> KafkaConsumer (with a UUID as group.id) that reads some historical data from 
> input topic

Maybe using GlobalKTable instead might be a better solution?
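A rough sketch of that alternative against the 0.10.2 API, reusing the topic names from the code below. The join shape (a key-based lookup into the materialized history) is an assumption about what the side consumer is for, not taken from this thread:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;

public class GlobalTableSketch {

  public static KStreamBuilder build() {
    KStreamBuilder builder = new KStreamBuilder();

    // Materialize the source topic's history as a global table; Streams keeps
    // it up to date on every instance, so no second KafkaConsumer (and no
    // second group.id) is needed.
    GlobalKTable<String, String> history = builder.globalTable(
        Serdes.String(), Serdes.String(), "transactions", "transactions-store");

    KStream<String, String> stream =
        builder.stream(Serdes.String(), Serdes.String(), "transactions");

    stream.join(history,
            (key, value) -> key,                  // lookup key into the table
            (value, past) -> value + "|" + past)  // combine live + historical
          .to(Serdes.String(), Serdes.String(), "alerts");

    return builder;
  }
}
```

Pass the resulting builder to `new KafkaStreams(builder, props)` exactly as in the original code; the global store is restored before processing starts.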

> (i.e. I feed 1000 records into source topic and receive around 200 on the 
> target topic)

Are these the first 200 records, or are those 200 records "random ones"
from your input topic? How many partitions do you have for the input/output
topics?

> looks like a lot of rebalancing happens.

We recommend changing the following StreamsConfig values to improve
rebalance behavior:

> props.put(ProducerConfig.RETRIES_CONFIG, 10);  <- increase to 10 from default of 0
> props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, Integer.toString(Integer.MAX_VALUE));  <- increase to infinity from default of 300 s

We will change the default values accordingly in a future release, but for
now you should set these manually.


Hope this helps.


-Matthias

On 4/24/17 10:01 AM, Andreas Voss wrote:
> Hi, I have a simple streaming app that copies data from one topic to another, 
> so when I feed 1000 records into source topic I receive 1000 records in the 
> target topic. Also the app contains a transform() step which does nothing, 
> except instantiating a KafkaConsumer (with a UUID as group.id) that reads 
> some historical data from input topic. As soon as this consumer is in place, 
> the streaming app does not work anymore, records get lost (i.e. I feed 1000 
> records into source topic and receive around 200 on the target topic) and 
> it's terribly slow - looks like a lot of rebalancing happens.
> 
> To me this looks like a bug, because the KStreamBuilder uses the application 
> id as group.id ("kafka-smurfing" in this case), and the transformer uses a 
> different one (uuid).
> 
> Full source code:
> 
> public class Main {
> 
>   public static final String BOOTSTRAP_SERVERS = "192.168.99.100:9092";
>   public static final String SOURCE_TOPIC = "transactions";
>   public static final String TARGET_TOPIC =  "alerts";
> 
>   public static void main(String[] args) throws Exception {
> 
> KStreamBuilder builder = new KStreamBuilder();
> builder.stream(Serdes.String(), Serdes.String(), SOURCE_TOPIC)
>.transform(() -> new DebugTransformer())
>.to(Serdes.String(), Serdes.String(), TARGET_TOPIC);
> 
> Properties props = new Properties();
> props.put(StreamsConfig.APPLICATION_ID_CONFIG, "kafka-smurfing");
> props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, Main.BOOTSTRAP_SERVERS);
> props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, 
> Serdes.String().getClass().getName());
> props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, 
> Serdes.String().getClass().getName());
> props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 2);
> props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
> 
> KafkaStreams streams = new KafkaStreams(builder, props);
> streams.start();
> Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
> 
>   }
> 
> }
> 
> public class DebugTransformer implements Transformer<String, String, KeyValue<String, String>> {
> 
>   private KafkaConsumer<String, String> consumer;
>   private ProcessorContext context;
> 
>   @Override
>   public void init(ProcessorContext context) {
> this.context = context;
> Properties props = new Properties();
> props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, 
> Main.BOOTSTRAP_SERVERS);
> props.put(ConsumerConfig.GROUP_ID_CONFIG, UUID.randomUUID().toString());
> props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "none");
> props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
> props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, 
> StringDeserializer.class.getName());
> props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, 
> StringDeserializer.class.getName());
> consumer = new KafkaConsumer<>(props);
>   }
> 
>   @Override
>   public KeyValue<String, String> transform(String key, String value) {
> TopicPartition partition = new TopicPartition(Main.SOURCE_TOPIC, 
> context.partition());
> consumer.assign(Arrays.asList(partition));
> consumer.seek(partition, 0);
> consumer.poll(100);
> return KeyValue.pair(key, value);
>   }
> 
>   @Override
>   public void close() {
> consumer.close();
>   }
> 
>   @Override
>   public KeyValue<String, String> punctuate(long timestamp) {
> return null;
>   }
> 
> }
> 
> Thanks for any hints,
> Andreas
> 
> This email and any files transmitted with it are confidential, proprietary 
> and intended solely for the individual or entity to whom they are addressed. 
> If you have received this email in error please delete it immediately.
> 





Re: Unable to consume from remote

2017-04-24 Thread Matthias J. Sax
Can you read/write your topics using

> bin/kafka-console-[consumer|producer].sh

If no, it's a general configuration problem. If yes, the problem is in
your code.
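For example (the broker address and topic name below are placeholders, a sketch only):

```shell
# On the remote machine, first try producing a few test records:
bin/kafka-console-producer.sh --broker-list broker-host:9092 --topic test

# Then, in another terminal on the remote machine, try consuming them:
bin/kafka-console-consumer.sh --bootstrap-server broker-host:9092 --topic test --from-beginning
```

If the console tools also fail from the remote host, one common cause (an assumption here, not diagnosed from this thread) is that the broker advertises a localhost address; check `advertised.listeners` / `advertised.host.name` in server.properties.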

I did not open the link due to security concerns... hope you understand.
If you want to share code, please copy the __relevant__ parts into your
email.


-Matthias


On 4/24/17 6:01 AM, Saurabh Raje wrote:
> Hi Guys, I created the setup of Kafka (kafka_2.12-0.10.2.0.tgz)
> as per this document https://kafka.apache.org/quickstart. The setup
> works when both producer & consumer are in localhost. The moment I
> deploy consumer to a remote machine no messages are received. The
> network connectivity is fine (telnet works). What do you think is going
> on here...?
> 
> Regards,
> Saurabh Raje
> 





Consumer with another group.id conflicts with streams()

2017-04-24 Thread Andreas Voss
Hi, I have a simple streaming app that copies data from one topic to another, 
so when I feed 1000 records into source topic I receive 1000 records in the 
target topic. Also the app contains a transform() step which does nothing, 
except instantiating a KafkaConsumer (with a UUID as group.id) that reads some 
historical data from input topic. As soon as this consumer is in place, the 
streaming app does not work anymore, records get lost (i.e. I feed 1000 records 
into source topic and receive around 200 on the target topic) and it's terribly 
slow - looks like a lot of rebalancing happens.

To me this looks like a bug, because the KStreamBuilder uses the application id 
as group.id ("kafka-smurfing" in this case), and the transformer uses a 
different one (uuid).

Full source code:

public class Main {

  public static final String BOOTSTRAP_SERVERS = "192.168.99.100:9092";
  public static final String SOURCE_TOPIC = "transactions";
  public static final String TARGET_TOPIC =  "alerts";

  public static void main(String[] args) throws Exception {

KStreamBuilder builder = new KStreamBuilder();
builder.stream(Serdes.String(), Serdes.String(), SOURCE_TOPIC)
   .transform(() -> new DebugTransformer())
   .to(Serdes.String(), Serdes.String(), TARGET_TOPIC);

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "kafka-smurfing");
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, Main.BOOTSTRAP_SERVERS);
props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, 
Serdes.String().getClass().getName());
props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, 
Serdes.String().getClass().getName());
props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 2);
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

KafkaStreams streams = new KafkaStreams(builder, props);
streams.start();
Runtime.getRuntime().addShutdownHook(new Thread(streams::close));

  }

}

public class DebugTransformer implements Transformer<String, String, KeyValue<String, String>> {

  private KafkaConsumer<String, String> consumer;
  private ProcessorContext context;

  @Override
  public void init(ProcessorContext context) {
this.context = context;
Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, Main.BOOTSTRAP_SERVERS);
props.put(ConsumerConfig.GROUP_ID_CONFIG, UUID.randomUUID().toString());
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "none");
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, 
StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, 
StringDeserializer.class.getName());
consumer = new KafkaConsumer<>(props);
  }

  @Override
  public KeyValue<String, String> transform(String key, String value) {
TopicPartition partition = new TopicPartition(Main.SOURCE_TOPIC, 
context.partition());
consumer.assign(Arrays.asList(partition));
consumer.seek(partition, 0);
consumer.poll(100);
return KeyValue.pair(key, value);
  }

  @Override
  public void close() {
consumer.close();
  }

  @Override
  public KeyValue<String, String> punctuate(long timestamp) {
return null;
  }

}

Thanks for any hints,
Andreas

This email and any files transmitted with it are confidential, proprietary and 
intended solely for the individual or entity to whom they are addressed. If you 
have received this email in error please delete it immediately.


Re: How does replication affect kafka quota?

2017-04-24 Thread Hans Jespersen

Replication will not affect the user's quota, as it is counted against a separate
replication quota (which you can control independently). The user should still see
a 50 MBps maximum rate enforced at each broker.

-hans
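For reference, a sketch of adjusting that separate replication quota with the stock tooling; the ZooKeeper address, broker id, and 10 MBps value are assumptions for illustration:

```shell
# Cap replication traffic at ~10 MBps on broker 0, independently of client quotas
bin/kafka-configs.sh --zookeeper localhost:2181 --alter \
  --entity-type brokers --entity-name 0 \
  --add-config 'leader.replication.throttled.rate=10485760,follower.replication.throttled.rate=10485760'
```

Note that the throttle only applies to replicas listed in the topic-level `leader.replication.throttled.replicas` / `follower.replication.throttled.replicas` configs (KIP-73).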



> On Apr 23, 2017, at 11:39 PM, Archie  wrote:
> 
> So by specifying a kafka quota for a user as 50 MBps, I can make sure it
> can write on a partition at broker X (imagine this user has only 1
> partition at this broker) at a max rate of 50 MBps. Now if the partition
> has a replica on another broker Y, will the user still be able to write
> data at rate 50MBps? Or will replicas slow down the user's write rate?
> 
> 
> Thanks,
> Archie



Re: Recording - Storm & Kafka Meetup on April 20th 2017

2017-04-24 Thread Harsha Chintalapani
Hi Aditya,
 Thanks for your interest. We are tentatively planning one for the
first week of June. If you haven't already, please register here
https://www.meetup.com/Apache-Storm-Apache-Kafka/ . I'll keep the Storm
lists updated once we finalize the date & location.

Thanks,
Harsha

On Mon, Apr 24, 2017 at 7:02 AM Aditya Desai  wrote:

> Hello Everyone
>
> Can you please let us know when the next meetup is? It would be great if
> we could have one in May.
>
> Regards
> Aditya Desai
>
> On Mon, Apr 24, 2017 at 2:16 AM, Xin Wang  wrote:
>
>> How about publishing this to Storm site?
>>
>>  - Xin
>>
>> 2017-04-22 19:27 GMT+08:00 steve tueno :
>>
>>> great
>>>
>>> Thanks
>>>
>>>
>>>
>>> Cordialement,
>>>
>>> TUENO FOTSO Steve Jeffrey
>>> Ingénieur de conception
>>> Génie Informatique
>>> +237 676 57 17 28 <+237%206%2076%2057%2017%2028>
>>> +237 697 86 36 38 <+237%206%2097%2086%2036%2038>
>>>
>>> +33 6 23 71 91 52 <+33%206%2023%2071%2091%2052>
>>>
>>>
>>> https://jobs.jumia.cm/fr/candidats/CVTF1486563.html
>>> 
>>>
>>> __
>>>
>>> https://play.google.com/store/apps/details?id=com.polytech.remotecomputer
>>> 
>>>
>>> https://play.google.com/store/apps/details?id=com.polytech.internetaccesschecker
>>> 
>>> *http://www.traveler.cm/
>>> *
>>> http://remotecomputer.traveler.cm/
>>> 
>>>
>>> https://play.google.com/store/apps/details?id=com.polytech.androidsmssender
>>> 
>>>
>>> https://github.com/stuenofotso/notre-jargon
>>> 
>>> https://play.google.com/store/apps/details?id=com.polytech.welovecameroon
>>> 
>>> https://play.google.com/store/apps/details?id=com.polytech.welovefrance
>>> 
>>>
>>>
>>>
>>> 2017-04-22 3:08 GMT+02:00 Roshan Naik :
>>>
 It was a great meetup and for the benefit of those interested but
 unable to attend it, here is a link to the recording :



 https://www.youtube.com/watch?v=kCRv6iEd7Ow
 




 List of Talks:

 -  *Introduction* –   Suresh Srinivas (Hortonworks)

 -  [4m:31sec] –  *Overview of  Storm 1.1* -  Hugo Louro
 (Hortonworks)

 -  [20m] –  *Rethinking the Storm 2.0 Worker*  - 

Re: Stream applications dying on broker ISR change

2017-04-24 Thread Eno Thereska
Hi Sachin,

KIP-62 introduced a background heartbeat thread to deal with group-protocol 
arrivals and departures. The session.timeout.ms setting specifies the timeout for 
that background thread. If the processing thread dies, the background heartbeat 
thread dies with it, so after session.timeout.ms the coordinator will evict the 
instance from the group and rebalance.

Eno
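The two timeouts can be sketched side by side with plain Properties; the key names are the real config strings, but the values here are illustrative assumptions:

```java
import java.util.Properties;

public class StreamsTimeoutConfig {

  // Sketch: distinguish the two liveness timeouts discussed above.
  public static Properties streamsProps() {
    Properties props = new Properties();
    // Heartbeats come from the background thread (KIP-62); if the process
    // dies, the group coordinator evicts it after session.timeout.ms.
    props.put("session.timeout.ms", "10000");
    // max.poll.interval.ms bounds the gap between poll() calls on the
    // processing thread; raising it avoids spurious rebalances on slow tasks.
    props.put("max.poll.interval.ms", Integer.toString(Integer.MAX_VALUE));
    return props;
  }

  public static void main(String[] args) {
    Properties p = streamsProps();
    System.out.println(p.getProperty("session.timeout.ms"));
    System.out.println(p.getProperty("max.poll.interval.ms"));
  }
}
```

So a stopped instance is removed after session.timeout.ms even when max.poll.interval.ms is effectively infinite.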

> On 24 Apr 2017, at 15:34, Sachin Mittal  wrote:
> 
> I had a question about this setting
> ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, Integer.toString(Integer.MAX_VALUE)
> 
> How would the broker know if a thread has died or say we simply stopped an
> instance and needs to be booted out of the group.
> 
> Thanks
> Sachin
> 
> 
> On Mon, Apr 24, 2017 at 5:55 PM, Eno Thereska 
> wrote:
> 
>> Hi Ian,
>> 
>> 
>> This is now fixed in 0.10.2.1. The default configuration needs tweaking. If
>> you can't pick that up (it's currently being voted), make sure you have
>> these two parameters set as follows in your streams config:
>> 
>> final Properties props = new Properties();
>> ...
>> props.put(ProducerConfig.RETRIES_CONFIG, 10);  <- increase to 10 from default of 0
>> props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, Integer.toString(Integer.MAX_VALUE));  <- increase to infinity from default of 300 s
>> 
>> Thanks
>> Eno
>> 
>>> On 24 Apr 2017, at 10:38, Ian Duffy  wrote:
>>> 
>>> Hi All,
>>> 
>>> We're running multiple Kafka Stream applications using Kafka client
>>> 0.10.2.0 against a 6 node broker cluster running 0.10.1.1
>>> Additionally, we're running Kafka Connect 0.10.2.0 with the ElasticSearch
>>> connector by confluent [1]
>>> 
>>> On an ISR change occurring on the brokers, all of the streams
>> applications
>>> and the Kafka connect ES connector threw exceptions and never recovered.
>>> 
>>> We've seen a correlation between Kafka Broker ISR change and stream
>>> applications dying.
>>> 
>>> The logs from the streams applications throw out the following and fail
>> to
>>> recover:
>>> 
>>> 07:01:23.323 stream-processor /var/log/application.log  2017-04-24
>>> 06:01:23,323 - [WARN] - [1.1.0-6] - [StreamThread-1]
>>> o.a.k.s.p.internals.StreamThread - Unexpected state transition from
>> RUNNING
>>> to NOT_RUNNING
>>> 07:01:23.323 stream-processor /var/log/application.log  2017-04-24
>>> 06:01:23,324 - [ERROR] - [1.1.0-6] - [StreamThread-1] Application -
>>> Unexpected Exception caught in thread [StreamThread-1]:
>>> org.apache.kafka.streams.errors.StreamsException: Exception caught in
>>> process. taskId=0_81, processor=KSTREAM-SOURCE-00,
>>> topic=kafka-topic, partition=81, offset=479285
>>> at
>>> org.apache.kafka.streams.processor.internals.
>> StreamTask.process(StreamTask.java:216)
>>> at
>>> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(
>> StreamThread.java:641)
>>> at
>>> org.apache.kafka.streams.processor.internals.
>> StreamThread.run(StreamThread.java:368)
>>> Caused by: org.apache.kafka.streams.errors.StreamsException: task [0_81]
>>> exception caught when producing
>>> at
>>> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.
>> checkForException(RecordCollectorImpl.java:119)
>>> at
>>> org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(
>> RecordCollectorImpl.java:76)
>>> at
>>> org.apache.kafka.streams.processor.internals.SinkNode.
>> process(SinkNode.java:79)
>>> at
>>> org.apache.kafka.streams.processor.internals.
>> ProcessorContextImpl.forward(ProcessorContextImpl.java:83)
>>> at
>>> org.apache.kafka.streams.kstream.internals.KStreamFlatMap$
>> KStreamFlatMapProcessor.process(KStreamFlatMap.java:43)
>>> at
>>> org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(
>> ProcessorNode.java:48)
>>> at
>>> org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.
>> measureLatencyNs(StreamsMetricsImpl.java:188)
>>> at
>>> org.apache.kafka.streams.processor.internals.ProcessorNode.process(
>> ProcessorNode.java:134)
>>> at
>>> org.apache.kafka.streams.processor.internals.
>> ProcessorContextImpl.forward(ProcessorContextImpl.java:83)
>>> at
>>> org.apache.kafka.streams.processor.internals.
>> SourceNode.process(SourceNode.java:70)
>>> at
>>> org.apache.kafka.streams.processor.internals.
>> StreamTask.process(StreamTask.java:197)
>>> ... 2 common frames omitted
>>> Caused by: org.apache.kafka.common.errors.NotLeaderForPartitionException
>> :
>>> This server is not the leader for that topic-partition.
>>> 07:01:23.558 stream-processor /var/log/application.log  2017-04-24
>>> 06:01:23,558 - [WARN] - [1.1.0-6] - [StreamThread-3]
>>> o.a.k.s.p.internals.StreamThread - Unexpected state transition from
>> RUNNING
>>> to NOT_RUNNING
>>> 07:01:23.558 stream-processor /var/log/application.log  2017-04-24
>>> 06:01:23,559 - [ERROR] - [1.1.0-6] - [StreamThread-3] Application -
>>> Unexpected Exception caught in thread [StreamThread-3]:
>>> org.apache.kafka.streams.errors.StreamsException: Exception caught in
>>> 

handling failed offsets

2017-04-24 Thread Roshni Thomas
We are using Kafka for auditing services. We have a scenario with bulk data where a single record in a batch can fail. What we want is to keep a queue of failed offsets and retry only those. We tried reverting to the previous offset on failure, but that has either of two issues:

1) The offset stays at the last successful one, and the later offsets are never processed.

2) All the messages after the failed one are retried, even if they were processed before.

Basically, is there a way to maintain a failed-offset queue in Kafka itself and retry that queue at preset intervals?

Thanks,
Roshni Thomas
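Kafka itself does not keep a per-record failure queue; the pattern Roshni describes is usually built on the application side (often backed by a separate retry or dead-letter topic). Below is a minimal in-memory sketch of tracking only the failed offsets so that successful ones are never reprocessed — the class and method names are illustrative, not a Kafka API:

```java
import java.util.*;

// Illustrative sketch (not a Kafka API): track failed offsets per
// partition so the consumer can keep committing past them and later
// retry only the failures, e.g. by seek()-ing to each pending offset
// or by producing the failed records to a dedicated retry topic.
class FailedOffsetTracker {
    // partition -> offsets whose processing failed, kept in order
    private final Map<Integer, TreeSet<Long>> failed = new HashMap<>();

    void recordFailure(int partition, long offset) {
        failed.computeIfAbsent(partition, p -> new TreeSet<>()).add(offset);
    }

    void recordSuccess(int partition, long offset) {
        TreeSet<Long> s = failed.get(partition);
        if (s != null) s.remove(offset);
    }

    // Offsets still pending retry for a partition, smallest first.
    List<Long> pendingRetries(int partition) {
        return new ArrayList<>(failed.getOrDefault(partition, new TreeSet<>()));
    }
}
```

A scheduled task can then walk `pendingRetries` at the preset interval, re-fetch just those offsets, and drop each one on success.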


Re: Stream applications dying on broker ISR change

2017-04-24 Thread Sachin Mittal
I had a question about this setting:

ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, Integer.toString(Integer.MAX_VALUE)

How would the broker know if a thread has died, or if we simply stopped an instance, so that it can be booted out of the group?

Thanks
Sachin
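One piece of background relevant to Sachin's question, assuming the 0.10.1+ consumer (KIP-62): heartbeats run on a background thread, so `session.timeout.ms` still evicts a dead or stopped instance from the group even when `max.poll.interval.ms` is effectively infinite; the latter only bounds the time between `poll()` calls. A hedged sketch of the two knobs side by side, using the raw config key strings:

```java
import java.util.Properties;

// Sketch of the liveness split in the 0.10.1+ consumer (KIP-62):
// max.poll.interval.ms bounds time between poll() calls, while
// session.timeout.ms (heartbeats on a background thread) is what
// boots a dead or stopped instance out of the group.
class LivenessConfigSketch {
    static Properties buildProps() {
        Properties props = new Properties();
        // Workaround value from this thread: never time out processing.
        props.put("max.poll.interval.ms", Integer.toString(Integer.MAX_VALUE));
        // Left finite: missed heartbeats still remove a dead member.
        props.put("session.timeout.ms", "10000");
        return props;
    }
}
```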


On Mon, Apr 24, 2017 at 5:55 PM, Eno Thereska 
wrote:

> Hi Ian,
>
>
> This is now fixed in 0.10.2.1. The default configuration needs tweaking. If
> you can't pick that up (it's currently being voted), make sure you have
> these two parameters set as follows in your streams config:
>
> final Properties props = new Properties();
> ...
> props.put(ProducerConfig.RETRIES_CONFIG, 10);  < increase to 10 from
> default of 0
> props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG,
> Integer.toString(Integer.MAX_VALUE)); <- increase to infinity
> from default of 300 s
>
> Thanks
> Eno
>
> > On 24 Apr 2017, at 10:38, Ian Duffy  wrote:
> >
> > Hi All,
> >
> > We're running multiple Kafka Stream applications using Kafka client
> > 0.10.2.0 against a 6 node broker cluster running 0.10.1.1
> > Additionally, we're running Kafka Connect 0.10.2.0 with the ElasticSearch
> > connector by confluent [1]
> >
> > On an ISR change occurring on the brokers, all of the streams
> applications
> > and the Kafka connect ES connector threw exceptions and never recovered.
> >
> > We've seen a correlation between Kafka Broker ISR change and stream
> > applications dying.
> >

Re: Stream applications dying on broker ISR change

2017-04-24 Thread Ian Duffy
Awesome! Thank you Eno, I had a look over the release notes a while back and
was slightly hoping that would be the answer.

Any idea how long it takes for a kafka-connect update to occur after a new
kafka-client is released/passed voting?

On 24 April 2017 at 13:25, Eno Thereska  wrote:

> Hi Ian,
>
>
> This is now fixed in 0.10.2.1. The default configuration needs tweaking. If
> you can't pick that up (it's currently being voted), make sure you have
> these two parameters set as follows in your streams config:
>
> final Properties props = new Properties();
> ...
> props.put(ProducerConfig.RETRIES_CONFIG, 10);  < increase to 10 from
> default of 0
> props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG,
> Integer.toString(Integer.MAX_VALUE)); <- increase to infinity
> from default of 300 s
>
> Thanks
> Eno
>
> > On 24 Apr 2017, at 10:38, Ian Duffy  wrote:
> >
> > Hi All,
> >
> > We're running multiple Kafka Stream applications using Kafka client
> > 0.10.2.0 against a 6 node broker cluster running 0.10.1.1
> > Additionally, we're running Kafka Connect 0.10.2.0 with the ElasticSearch
> > connector by confluent [1]
> >
> > On an ISR change occurring on the brokers, all of the streams
> applications
> > and the Kafka connect ES connector threw exceptions and never recovered.
> >
> > We've seen a correlation between Kafka Broker ISR change and stream
> > applications dying.
> >

Unable to consume from remote

2017-04-24 Thread Saurabh Raje
Hi guys, I created a Kafka setup (kafka_2.12-0.10.2.0.tgz) as per this
document: https://kafka.apache.org/quickstart. The setup works when both
producer and consumer are on localhost. The moment I deploy the consumer
to a remote machine, no messages are received. The network connectivity
is fine (telnet works). What do you think is going on here?


Regards,
Saurabh Raje
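A frequent cause of this exact symptom (works on localhost, silent from a remote host) is the broker advertising an address the remote machine cannot reach: the consumer's initial metadata request succeeds, but it is then told to fetch from an unreachable host. Worth checking `advertised.listeners` in the broker config — a sketch, with `kafka.example.com` as a placeholder for the broker's externally reachable name:

```properties
# server.properties (broker side), a sketch:
# bind on all interfaces, but advertise a name remote clients can resolve.
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://kafka.example.com:9092
```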


message encryption on broker side

2017-04-24 Thread Zhengdao Li
Hi, I want to know whether there are any solutions or patches that address message encryption on the broker side?


the problems of parititons assigned to consumers

2017-04-24 Thread 揣立武
Hi all. There are two problems we run into when using Kafka, both about partitions
being assigned to consumers.

Problem 1: Partitions are reassigned to consumers when a consumer comes online
or goes offline, and message latency then becomes higher.

Problem 2: If we have two partitions, only two consumers can consume
messages. How can we let more consumers consume without expanding the partitions?

Thanks!


Re: Stream applications dying on broker ISR change

2017-04-24 Thread Eno Thereska
Hi Ian,


This is now fixed in 0.10.2.1. The default configuration needs tweaking. If you
can't pick that up (it's currently being voted on), make sure you have these two
parameters set as follows in your streams config:

final Properties props = new Properties();
...
props.put(ProducerConfig.RETRIES_CONFIG, 10);  // increase to 10 from the default of 0
props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG,
          Integer.toString(Integer.MAX_VALUE));  // increase to infinity from the default of 300 s

Thanks
Eno
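Spelled out as a self-contained sketch of the two settings Eno gives above — using the raw string config keys ("retries", "max.poll.interval.ms") instead of the ProducerConfig/ConsumerConfig constants so it compiles without the Kafka client jars; the application id and bootstrap servers are placeholders:

```java
import java.util.Properties;

// A fuller, compilable sketch of the workaround described above.
// "my-app" and "broker:9092" are placeholder values.
class StreamsRetryConfig {
    static Properties buildProps() {
        Properties props = new Properties();
        props.put("application.id", "my-app");         // placeholder
        props.put("bootstrap.servers", "broker:9092"); // placeholder
        // Retry transient produce errors (e.g. NotLeaderForPartitionException)
        // instead of failing the stream thread on the first one.
        props.put("retries", "10");
        // Stop the consumer being evicted from the group while a slow
        // task recovers; effectively disables the 300 s default.
        props.put("max.poll.interval.ms", Integer.toString(Integer.MAX_VALUE));
        return props;
    }
}
```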

> On 24 Apr 2017, at 10:38, Ian Duffy  wrote:
> 
> Hi All,
> 
> We're running multiple Kafka Stream applications using Kafka client
> 0.10.2.0 against a 6 node broker cluster running 0.10.1.1
> Additionally, we're running Kafka Connect 0.10.2.0 with the ElasticSearch
> connector by confluent [1]
> 
> On an ISR change occurring on the brokers, all of the streams applications
> and the Kafka connect ES connector threw exceptions and never recovered.
> 
> We've seen a correlation between Kafka Broker ISR change and stream
> applications dying.
> 

Yet another CLI to manage Kafka Connect/SchemaRegistry.

2017-04-24 Thread Florian Hussonnois
Hi folks,

I would like to share with you a CLI that I have developed for a project
needs - https://github.com/fhussonnois/kafkacli

We use it to execute common tasks on Kafka Connect and the Schema Registry
during development.

-- 
Florian HUSSONNOIS


Stream applications dying on broker ISR change

2017-04-24 Thread Ian Duffy
Hi All,

We're running multiple Kafka Stream applications using Kafka client
0.10.2.0 against a 6 node broker cluster running 0.10.1.1
Additionally, we're running Kafka Connect 0.10.2.0 with the ElasticSearch
connector by confluent [1]

On an ISR change occurring on the brokers, all of the streams applications
and the Kafka connect ES connector threw exceptions and never recovered.

We've seen a correlation between Kafka Broker ISR change and stream
applications dying.

The logs from the streams applications throw out the following and fail to
recover:

07:01:23.323 stream-processor /var/log/application.log  2017-04-24
06:01:23,323 - [WARN] - [1.1.0-6] - [StreamThread-1]
o.a.k.s.p.internals.StreamThread - Unexpected state transition from RUNNING
to NOT_RUNNING
07:01:23.323 stream-processor /var/log/application.log  2017-04-24
06:01:23,324 - [ERROR] - [1.1.0-6] - [StreamThread-1] Application -
Unexpected Exception caught in thread [StreamThread-1]:
org.apache.kafka.streams.errors.StreamsException: Exception caught in
process. taskId=0_81, processor=KSTREAM-SOURCE-00,
topic=kafka-topic, partition=81, offset=479285
at
org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:216)
at
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:641)
at
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:368)
Caused by: org.apache.kafka.streams.errors.StreamsException: task [0_81]
exception caught when producing
at
org.apache.kafka.streams.processor.internals.RecordCollectorImpl.checkForException(RecordCollectorImpl.java:119)
at
org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:76)
at
org.apache.kafka.streams.processor.internals.SinkNode.process(SinkNode.java:79)
at
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:83)
at
org.apache.kafka.streams.kstream.internals.KStreamFlatMap$KStreamFlatMapProcessor.process(KStreamFlatMap.java:43)
at
org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:48)
at
org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:188)
at
org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:134)
at
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:83)
at
org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:70)
at
org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:197)
... 2 common frames omitted
Caused by: org.apache.kafka.common.errors.NotLeaderForPartitionException:
This server is not the leader for that topic-partition.
07:01:23.558 stream-processor /var/log/application.log  2017-04-24
06:01:23,558 - [WARN] - [1.1.0-6] - [StreamThread-3]
o.a.k.s.p.internals.StreamThread - Unexpected state transition from RUNNING
to NOT_RUNNING
07:01:23.558 stream-processor /var/log/application.log  2017-04-24
06:01:23,559 - [ERROR] - [1.1.0-6] - [StreamThread-3] Application -
Unexpected Exception caught in thread [StreamThread-3]:
org.apache.kafka.streams.errors.StreamsException: Exception caught in
process. taskId=0_55, processor=KSTREAM-SOURCE-00,
topic=kafka-topic, partition=55, offset=479308
at
org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:216)
at
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:641)
at
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:368)
Caused by: org.apache.kafka.streams.errors.StreamsException: task [0_55]
exception caught when producing
at
org.apache.kafka.streams.processor.internals.RecordCollectorImpl.checkForException(RecordCollectorImpl.java:119)
at
org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:76)
at
org.apache.kafka.streams.processor.internals.SinkNode.process(SinkNode.java:79)
at
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:83)
at
org.apache.kafka.streams.kstream.internals.KStreamFlatMap$KStreamFlatMapProcessor.process(KStreamFlatMap.java:43)
at
org.apache.kafka.streams.processor.internals.ProcessorNode$1.run(ProcessorNode.java:48)
at
org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:188)
at
org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:134)
at
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:83)
at
org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:70)
at
org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:197)
... 2 common frames omitted
Caused by: org.apache.kafka.common.errors.NotLeaderForPartitionException:
This server is not the leader for that topic-partition.

Are we 

Re: Recording - Storm & Kafka Meetup on April 20th 2017

2017-04-24 Thread Xin Wang
How about publishing this to the Storm site?

 - Xin

2017-04-22 19:27 GMT+08:00 steve tueno :

> great
>
> Thanks
>
>
>
> Cordialement,
>
> TUENO FOTSO Steve Jeffrey
> Ingénieur de conception
> Génie Informatique
> +237 676 57 17 28
> +237 697 86 36 38
>
> +33 6 23 71 91 52
>
>
> https://jobs.jumia.cm/fr/candidats/CVTF1486563.html
> __
>
> https://play.google.com/store/apps/details?id=com.polytech.remotecomputer
> https://play.google.com/store/apps/details?id=com.polytech.
> internetaccesschecker
> http://www.traveler.cm/
> http://remotecomputer.traveler.cm/
> https://play.google.com/store/apps/details?id=com.polytech.
> androidsmssender
> https://github.com/stuenofotso/notre-jargon
> https://play.google.com/store/apps/details?id=com.polytech.welovecameroon
> https://play.google.com/store/apps/details?id=com.polytech.welovefrance
>
>
>
> 2017-04-22 3:08 GMT+02:00 Roshan Naik :
>
>> It was a great meetup and for the benefit of those interested but unable
>> to attend it, here is a link to the recording :
>>
>>
>>
>> https://www.youtube.com/watch?v=kCRv6iEd7Ow
>>
>>
>>
>> List of Talks:
>>
>> -  *Introduction* –   Suresh Srinivas (Hortonworks)
>>
>> -  [4m:31sec] –  *Overview of  Storm 1.1* -  Hugo Louro
>> (Hortonworks)
>>
>> -  [20m] –  *Rethinking the Storm 2.0 Worker*  - Roshan Naik
>> (Hortonworks)
>>
>> -  [57m] –  *Storm in Retail Context: Catalog data processing
>> using Kafka, Storm & Microservices*   -   Karthik Deivasigamani (WalMart
>> Labs)
>>
>> -  [1h: 54m:45sec] *–   Schema Registry &  Streaming Analytics
>> Manager (aka StreamLine)   *-   Sriharsha Chintalapani (Hortonworks)
>>
>>
>>
>>
>>
>
>


How does replication affect kafka quota?

2017-04-24 Thread Archie
By specifying a Kafka quota of 50 MB/s for a user, I can make sure the user
writes to a partition on broker X (imagine this user has only one partition
on this broker) at a max rate of 50 MB/s. Now if the partition has a replica
on another broker Y, will the user still be able to write data at 50 MB/s?
Or will replication slow down the user's write rate?


Thanks,
Archie