Re: Exceptions when programmatically start multiple kafka brokers

2015-12-22 Thread Guozhang Wang
Siyuan,

Do both of them have broker id = 0?
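
Each KafkaServer registers an "app-info" MBean keyed by its broker id (that
is the kafka.server:type=app-info,id=0 name in your trace), so two brokers
with id 0 in the same JVM collide on that name. A minimal sketch of starting
two brokers with distinct ids (the ports and log dirs are placeholders for
your test setup):

import java.util.Properties;
import kafka.server.KafkaConfig;
import kafka.server.KafkaServerStartable;

// e.g. inside the test base class:
static KafkaServerStartable startBroker(int id, int port, String logDir) {
    Properties props = new Properties();
    props.put("broker.id", Integer.toString(id)); // must be unique per broker
    props.put("port", Integer.toString(port));    // distinct port per broker
    props.put("log.dirs", logDir);                // distinct log dir per broker
    props.put("zookeeper.connect", "localhost:2181");
    KafkaServerStartable broker = new KafkaServerStartable(new KafkaConfig(props));
    broker.startup();
    return broker;
}

// in the test setup:
KafkaServerStartable broker0 = startBroker(0, 9092, "/tmp/kafka-logs-0");
KafkaServerStartable broker1 = startBroker(1, 9093, "/tmp/kafka-logs-1");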

Guozhang

On Mon, Dec 21, 2015 at 3:30 PM, hsy...@gmail.com  wrote:

> I'm trying to start 2 brokers in my Kafka ingestion unit test and I got an
> exception:
>
> javax.management.InstanceAlreadyExistsException:
> kafka.server:type=app-info,id=0
> at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
> at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
> at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
> at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
> at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
> at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
> at org.apache.kafka.common.utils.AppInfoParser.registerAppInfo(AppInfoParser.java:57)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:239)
> at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:37)
> at org.apache.apex.malhar.kafka.KafkaOperatorTestBase.startKafkaServer(KafkaOperatorTestBase.java:133)
> at org.apache.apex.malhar.kafka.KafkaOperatorTestBase.startKafkaServer(KafkaOperatorTestBase.java:143)
> at org.apache.apex.malhar.kafka.KafkaOperatorTestBase.beforeTest(KafkaOperatorTestBase.java:175)
>
> Is it caused by the JMXMetricsReporter?
> It doesn't affect any functionality we need, but it is annoying.
> How can I disable it?
>
> Thanks!
>



-- 
-- Guozhang


Re: Kafka 0.9.0 New Java Consumer API fetching duplicate records

2015-12-22 Thread Jason Gustafson
Hey Pradeep,

Can you include the output from one of the ConsumerDemo runs?

-Jason

On Mon, Dec 21, 2015 at 9:47 PM, pradeep kumar 
wrote:

> Can someone please help me with this?
>
> http://stackoverflow.com/questions/34405124/kafka-0-9-0-new-java-consumer-api-fetching-duplicate-records
>
> Thanks,
> Pradeep
>


Rebalancing in Kafka 0.8.2.2

2015-12-22 Thread Deepti Jindal -X (djindal - ARICENT TECHNOLOGIES MAURIITIUS LIMITED at Cisco)
Hi,

I am using Kafka 0.8.2.2 to develop an application for my company.
We will be using the High Level Consumer API for it. There is one point I
want to confirm in this regard: that in Kafka 0.8.2.2 there is no callback or
any other mechanism available which will tell the consumer that a rebalance
has occurred at Kafka.
I am asking apropos of the ConsumerRebalanceListener interface which is
available in the 0.9.0.0 version. Is something similar available in the
0.8.2.2 version as well?
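
For reference, this is the 0.9.0.0 callback I mean (a minimal sketch against
the 0.9.0.0 API; the topic name and the existing consumer instance are
assumed):

import java.util.Arrays;
import java.util.Collection;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.common.TopicPartition;

// subscribe() in the 0.9.0.0 new consumer accepts a rebalance callback
consumer.subscribe(Arrays.asList("my-topic"), new ConsumerRebalanceListener() {
    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        // invoked before a rebalance takes these partitions away,
        // e.g. commit offsets or flush state here
    }

    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        // invoked after a rebalance with the new assignment
    }
});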

Also, I can see that a KafkaConsumer class is available in the
kafka-clients-0.8.2.2 jar. Does that mean there are two ways to write a
consumer in the 0.8.2.2 version - one using the High Level Consumer API and
the other using the new KafkaConsumer class?

Any help will be highly appreciated.
Many thanks in advance.

Regards,
Deepti Jindal


Re: 0.9 consumer beta?

2015-12-22 Thread Guozhang Wang
Allen,

By "beta quality" we meant to say that it is not used in production yet as
we know of, and its public APIs may have minor changes in future 0.9.0.1
release. But if you need any new features of the 0.9.0.0 release like
security I would recommend you to start trying it out.
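
If you do try it, here is a minimal new-consumer loop to start from (a
sketch; the broker address, group id, and topic are placeholders):

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("group.id", "test-group");
props.put("key.deserializer",
    "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer",
    "org.apache.kafka.common.serialization.StringDeserializer");

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Arrays.asList("my-topic"));
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(1000);
    for (ConsumerRecord<String, String> record : records)
        System.out.printf("offset=%d, value=%s%n", record.offset(), record.value());
}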

Thanks,
Guozhang

On Tue, Dec 22, 2015 at 9:53 AM, allen chan 
wrote:

> In the documentation it says the new consumer is considered beta quality. I
> cannot find what is beta about it. Stability? Performance?
> Can someone clarify?
>
> 3.3.2 New Consumer Configs
> "Since 0.9.0.0 we have been working on a replacement for our existing simple
> and high-level consumers. The code is considered beta quality."
>
> --
> Allen Michael Chan
>



-- 
-- Guozhang


Re: Consumer group disappears and consumers loops

2015-12-22 Thread Rune Sørensen
Hi,

Sorry for the long delay in replying.

As for your questions:
No, we are not using SSL.
The problem went away for Martin when he was running against his Kafka
instance locally on his development machine, but we are still seeing the
issue when we run it in our testing environment, where the broker is on a
remote machine.
The network in our testing environment seems stable, based on the
measurements I have made so far.

Rune Tor Sørensen
+45 3172 2097

Copenhagen
Falcon Social
H.C. Andersens Blvd. 27
1553 Copenhagen

Budapest
Falcon Social
Colabs Startup Center Zrt
1016 Budapest, Krisztina krt. 99

Social Media Management for Enterprise

On Tue, Dec 1, 2015 at 11:56 PM, Jason Gustafson  wrote:

> I've been unable to reproduce this issue running locally. Even with a poll
> timeout of 1 millisecond, it seems to work as expected. It would be helpful
> to know a little more about your setup. Are you using SSL? Are the brokers
> remote? Is the network stable?
>
> Thanks,
> Jason
>
> On Tue, Dec 1, 2015 at 10:06 AM, Jason Gustafson 
> wrote:
>
> > Hi Martin,
> >
> > I'm also not sure why the poll timeout would affect this. Perhaps the
> > handler is still doing work (e.g. sending requests) when the record set is
> > empty?
> >
> > As a general rule, I would recommend longer poll timeouts. I've actually
> > tended to use Long.MAX_VALUE myself. I'll have a look just to make sure
> > everything still works with smaller values though.
> >
> > -Jason
> >
> >
> >
> > On Tue, Dec 1, 2015 at 2:35 AM, Martin Skøtt <
> > martin.sko...@falconsocial.com> wrote:
> >
> >> Hi Jason,
> >>
> >> That actually sounds like a very plausible explanation. My current
> >> consumer is using the default settings, but I have previously used these
> >> (taken from the sample in the Javadoc for the new KafkaConsumer):
> >>   "auto.commit.interval.ms", "1000"
> >>   "session.timeout.ms", "30000"
> >>
> >> My consumer loop is quite simple as it just calls a domain-specific
> >> service:
> >>
> >> while (true) {
> >>     ConsumerRecords records = consumer.poll(1);
> >>     for (ConsumerRecord record : records) {
> >>         serve.handle(record.topic(), record.value());
> >>     }
> >> }
> >>
> >> The domain service does a number of things (including lookups in an RDBMS
> >> and saving to Elasticsearch). In my local test setup a poll will often
> >> return between 5,000 and 10,000 records, and I can easily see the
> >> processing of those taking more than 30 seconds.
> >>
> >> I'll probably take a look at adding some threading to my consumer and
> >> adding more partitions to my topics.
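> >>
> >> Something like this pause/resume pattern is what I have in mind (an
> >> untested sketch against the 0.9 API, assuming a KafkaConsumer<String,
> >> String>; the executor handling is illustrative, and a rebalance while
> >> paused would change the assignment):
> >>
> >> import java.util.concurrent.*;
> >> import org.apache.kafka.clients.consumer.*;
> >> import org.apache.kafka.common.TopicPartition;
> >>
> >> ExecutorService executor = Executors.newSingleThreadExecutor();
> >> while (true) {
> >>     final ConsumerRecords<String, String> records = consumer.poll(1000);
> >>     if (records.count() > 0) {
> >>         // pause all assigned partitions so poll() only heartbeats
> >>         TopicPartition[] assigned =
> >>             consumer.assignment().toArray(new TopicPartition[0]);
> >>         consumer.pause(assigned);
> >>         Future<?> done = executor.submit(new Runnable() {
> >>             public void run() {
> >>                 for (ConsumerRecord<String, String> record : records) {
> >>                     serve.handle(record.topic(), record.value());
> >>                 }
> >>             }
> >>         });
> >>         while (!done.isDone()) {
> >>             consumer.poll(0); // sends heartbeats while partitions are paused
> >>         }
> >>         consumer.resume(assigned);
> >>     }
> >> }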
> >>
> >> That is all fine, but it doesn't really explain why increasing the poll
> >> timeout made the problem go away :-/
> >>
> >> Martin
> >>
> >> On 30 November 2015 at 19:30, Jason Gustafson 
> wrote:
> >>
> >> > Hey Martin,
> >> >
> >> > At a glance, it looks like your consumer's session timeout is expiring.
> >> > This shouldn't happen unless there is a delay between successive calls
> >> > to poll which is longer than the session timeout. It might help if you
> >> > include a snippet of your poll loop and your configuration (i.e. any
> >> > overridden settings).
> >> >
> >> > -Jason
> >> >
> >> > On Mon, Nov 30, 2015 at 8:12 AM, Martin Skøtt <
> >> > martin.sko...@falconsocial.com> wrote:
> >> >
> >> > > Well, I made the problem go away, but I'm not sure why it works :-/
> >> > >
> >> > > Previously I used a timeout value of 100 for Consumer.poll(). Increasing
> >> > > it to 10.000 makes the problem go away completely?! I tried other values
> >> > > as well:
> >> > > - 0: problem remained
> >> > > - 3000 (same as heartbeat.interval): problem remained, but less frequent
> >> > >
> >> > > Not really sure what is going on, but happy that the problem went away :-)
> >> > >
> >> > > Martin
> >> > >
> >> > > On 30 November 2015 at 15:33, Martin Skøtt <
> >> > > martin.sko...@falconsocial.com> wrote:
> >> > >
> >> > > > Hi Guozhang,
> >> > > >
> >> > > > I have done some testing with various values of heartbeat.interval.ms
> >> > > > and they don't seem to have any influence on the error messages.
> >> > > > Running kafka-consumer-groups also continues to report that the
> >> > > > consumer group does not exist or is rebalancing. Do you have any
> >> > > > suggestions for how I could debug this further?
> >> > > >
> >> > > > Regards,
> >> > > > Martin
> >> > > >
> >> > > >
> >> > > > On 25 November 2015 at 18:37, Guozhang Wang wrote:
> >> > > >
> >> > > >> Hello Martin,
> >> > > >>
> >> > > >> 

Re: Measuring Kafka Producer request latency when it is less than 1ms

2015-12-22 Thread Helleren, Erik
For some high-performance environments, I would like to see microsecond or
nanosecond precision on metrics whenever possible. Even better would be
some sort of histogram of individual events so we could see the
variability.
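
Until then, this can be measured client-side; for example, a sketch
(assuming the HdrHistogram library and an existing producer and record)
that captures per-send latency at nanosecond resolution via the send
callback:

import java.util.concurrent.TimeUnit;
import org.HdrHistogram.Histogram;
import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.RecordMetadata;

// track latencies up to 10 seconds with 3 significant digits
final Histogram latencies = new Histogram(TimeUnit.SECONDS.toNanos(10), 3);

final long start = System.nanoTime();
producer.send(record, new Callback() {
    public void onCompletion(RecordMetadata metadata, Exception exception) {
        if (exception == null) {
            // runs on the producer I/O thread; HdrHistogram also provides
            // ConcurrentHistogram if recording from multiple threads
            latencies.recordValue(System.nanoTime() - start);
        }
    }
});

// after the run:
System.out.println("p99 (ns): " + latencies.getValueAtPercentile(99.0));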

On 12/21/15, 9:27 PM, "Alexey Pirogov"  wrote:

>Ismael, thanks for the reply.
>
>Jira created: https://issues.apache.org/jira/browse/KAFKA-3028.
>
>Thank you,
>Alexey





Re: Kafka 0.9.0 New Java Consumer API fetching duplicate records

2015-12-22 Thread Jason Gustafson
I took your demo code and ran it locally. So far I haven't seen any
duplicates. In addition to the output showing duplicates, it might be
helpful to include your producer code.

Thanks,
Jason

On Tue, Dec 22, 2015 at 11:02 AM, Jason Gustafson 
wrote:

> Hey Pradeep,
>
> Can you include the output from one of the ConsumerDemo runs?
>
> -Jason
>
> On Mon, Dec 21, 2015 at 9:47 PM, pradeep kumar 
> wrote:
>
> >> Can someone please help me with this?
>>
>> http://stackoverflow.com/questions/34405124/kafka-0-9-0-new-java-consumer-api-fetching-duplicate-records
>>
>> Thanks,
>> Pradeep
>>
>
>