first message is lost

2014-11-12 Thread Yonghui Zhao
Hi,

For a non-existent topic, the consumer and producer are set up.
Then when the producer sends the first message, the producer gets this exception:

[2014-11-12 16:24:28,041] WARN Error while fetching metadata
[{TopicMetadata for topic test5 ->
No partition metadata for topic test5 due to
kafka.common.LeaderNotAvailableException}] for topic [test5]: class
kafka.common.LeaderNotAvailableException
 (kafka.producer.BrokerPartitionInfo)
[2014-11-12 16:24:28,061] WARN Error while fetching metadata
[{TopicMetadata for topic test5 ->
No partition metadata for topic test5 due to
kafka.common.LeaderNotAvailableException}] for topic [test5]: class
kafka.common.LeaderNotAvailableException
 (kafka.producer.BrokerPartitionInfo)
[2014-11-12 16:24:28,062] ERROR Failed to collate messages by topic,
partition due to: Failed to fetch topic metadata for topic: test5
(kafka.producer.async.DefaultEventHandler)


And the consumer never receives this message. But the second and later
messages are fine.

We don't change server.properties, so auto.create.topics.enable should be
true.

But if we start the consumer with the --from-beginning flag, like
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test5
--from-beginning
the exception is still there, but the consumer does get the first message.

My question is how to avoid the loss.

Thanks!
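One common mitigation is to retry the send: the first attempt triggers topic auto-creation, and a later retry succeeds once a partition leader has been elected (on the 0.8 Scala producer this is governed by message.send.max.retries and retry.backoff.ms). A minimal sketch of the retry idea, with a hypothetical send() standing in for the real producer:

```python
import time

def send_with_retries(send, message, max_retries=3, backoff_ms=100):
    """Retry a send whose first attempts may fail while the topic is being
    auto-created and a partition leader elected (LeaderNotAvailableException)."""
    for attempt in range(max_retries + 1):
        try:
            return send(message)
        except Exception:
            if attempt == max_retries:
                raise
            time.sleep(backoff_ms / 1000.0)

# Simulated producer: the first two attempts fail, as when metadata for a
# freshly auto-created topic is not yet available.
attempts = {"n": 0}
def flaky_send(msg):
    attempts["n"] += 1
    if attempts["n"] <= 2:
        raise RuntimeError("LeaderNotAvailableException")
    return "ok"

print(send_with_retries(flaky_send, "first message"))  # prints ok
```

Note this only prevents the producer-side drop; the consumer-side behavior is a separate issue (see KAFKA-1006 in the reply below... the consumer misses messages published before it registers the new topic unless started with --from-beginning).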


Re: Security in 0.8.2 beta

2014-11-12 Thread Kashyap Mhaisekar
Thanks. We use the encryption approach as well. But the 2 topic approach is
unique. Thank you.

Kashyap.
On Nov 12, 2014 1:54 AM, "Joe Stein"  wrote:

> I know a few implementations that do this ("encrypt your messages with a
> PSK between producers and consumers"). One of them actually writes the
> encrypted symmetric key on a different topic for each downstream
> consumer private key that can read the message. This way when you are
> consuming, you consume from two topics: 1) the topic with the (encrypted)
> message you want, and 2) the topic whose records you can decrypt with
> your private key (because your public key was used) to recover the
> symmetric key, which you then use to decrypt the message. You join the
> two streams by the uuid, so each message has a different secret key
> encrypted with your public key. The other ones I can't talk about =8^)
> but this one I mention is an interesting solution to this problem with
> Kafka that I really like.
>
> /***
>  Joe Stein
>  Founder, Principal Consultant
>  Big Data Open Source Security LLC
>  http://www.stealth.ly
>  Twitter: @allthingshadoop 
> /
>
> On Wed, Nov 12, 2014 at 2:41 AM, Mathias Herberts <
> mathias.herbe...@gmail.com> wrote:
>
> > Simply encrypt your messages with a PSK between producers and consumers.
> > On Nov 12, 2014 4:38 AM, "Kashyap Mhaisekar" 
> wrote:
> >
> > > Hi,
> > > Is there a way to secure the topics created in Kafka 0.8.2 beta? The
> > > need is to ensure no one is able to read data from the topic without
> > > authorization.
> > >
> > > Regards
> > > Kashyap
> > >
> >
>
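The two-topic scheme described above is essentially envelope encryption joined on a message id. A toy sketch of the structure (the XOR keystream below is a deliberately insecure stand-in for real AES/RSA, and topic records are modeled as plain tuples):

```python
import hashlib
import os
import uuid

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Deliberately insecure stand-in for a real cipher (use AES/RSA in
    # practice): XOR the data with a keystream derived only from the key.
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

def produce(message: bytes, consumer_wrap_key: bytes):
    """One message under the two-topic scheme: the payload topic carries the
    symmetrically encrypted message; the key topic carries the per-message
    key wrapped for one consumer. Both records share a UUID for the join."""
    msg_id = str(uuid.uuid4())
    msg_key = os.urandom(32)  # fresh symmetric key per message
    payload_record = (msg_id, xor_stream(msg_key, message))
    key_record = (msg_id, xor_stream(consumer_wrap_key, msg_key))
    return payload_record, key_record

def consume(payload_record, key_record, consumer_wrap_key: bytes):
    # Join the two streams on the UUID, unwrap the key, then decrypt.
    assert payload_record[0] == key_record[0], "join mismatch"
    msg_key = xor_stream(consumer_wrap_key, key_record[1])
    return xor_stream(msg_key, payload_record[1])

wrap_key = os.urandom(32)  # stands in for the consumer's key pair
p, k = produce(b"hello kafka", wrap_key)
print(consume(p, k, wrap_key))  # prints b'hello kafka'
```

The design choice worth noting: one key topic per downstream consumer lets the producer revoke a consumer by simply no longer wrapping keys for it, without re-keying the payload topic.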


Getting Simple consumer details using MBean

2014-11-12 Thread Madhukar Bharti
Hi,

I want to get the simple consumer details using MBeans, as described in the
documentation. But these bean names are not showing up in JConsole, nor
when I try to read them over JMX.

Please help me to get simple consumer details.

I am using Kafka 0.8.1.1 version.


Thanks and Regards,
Madhukar Bharti
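Two things worth checking: these metrics are typically registered lazily, so the beans may only appear after the consumer has actually issued fetch requests, and the ObjectNames embed the clientId. When hunting for them over JMX it can help to list all names and filter by prefix; an illustrative sketch (the bean names below are made up for the example):

```python
import fnmatch

def find_beans(all_names, pattern):
    """Filter a list of JMX ObjectName strings by a case-sensitive glob."""
    return [n for n in all_names if fnmatch.fnmatchcase(n, pattern)]

# Hypothetical names, as a JMX client might list them.
names = [
    "kafka.consumer:type=FetchRequestAndResponseMetrics,clientId=my-client",
    "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec",
]
print(find_beans(names, "kafka.consumer:*"))
```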


0.8.2 producer with 0.8.1.1 cluster?

2014-11-12 Thread Shlomi Hazan
Hi,
Is the new producer 0.8.2 supposed to work with 0.8.1.1 cluster?
Shlomi


Re: first message is lost

2014-11-12 Thread Guozhang Wang
Yonghui,

If the consumer is not started with --from-beginning, then this scenario is
expected: see KAFKA-1006.

We are still figuring out the best way to resolve this issue.

Guozhang




-- 
-- Guozhang


Order of consumed messages in the same partition.

2014-11-12 Thread Filippo De Luca
Hi all,
I would like to know whether it is possible, when using the high-level
consumer, to receive a message for a given topic and partition whose offset
is lower than the last offset already consumed from that same
topic/partition pair.

It seems that I am experiencing this in my application, and I am wondering
whether it is expected behavior or a bug in our codebase.

Thanks for any help.
-- 
*Filippo De Luca*
-
WWW: http://filippodeluca.com
IM:  filosgang...@gmail.com
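A quick way to tell whether offsets really move backwards (as opposed to a bookkeeping bug) is to log the (topic, partition, offset) of every consumed message and check monotonicity per partition; a small checker sketch:

```python
def check_monotonic(records):
    """Return offset-ordering violations seen by a consumer.

    `records` is an iterable of (topic, partition, offset); each violation
    is (topic, partition, previous_offset, offset). Equal offsets are
    flagged too, since they indicate redelivery of the same message."""
    last = {}
    violations = []
    for topic, partition, offset in records:
        key = (topic, partition)
        if key in last and offset <= last[key]:
            violations.append((topic, partition, last[key], offset))
        last[key] = offset
    return violations

stream = [("t", 0, 5), ("t", 0, 6), ("t", 1, 2), ("t", 0, 4)]
print(check_monotonic(stream))  # prints [('t', 0, 6, 4)]
```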


Re: Programmatic Kafka version detection/extraction?

2014-11-12 Thread Guozhang Wang
Sounds great, +1 on this.

On Tue, Nov 11, 2014 at 1:36 PM, Gwen Shapira  wrote:

> So it looks like we can use Gradle to add properties to manifest file and
> then use getResourceAsStream to read the file and parse it.
>
> The Gradle part would be something like:
> jar.manifest {
> attributes('Implementation-Title': project.name,
> 'Implementation-Version': project.version,
> 'Built-By': System.getProperty('user.name'),
> 'Built-JDK': System.getProperty('java.version'),
> 'Built-Host': getHostname(),
> 'Source-Compatibility': project.sourceCompatibility,
> 'Target-Compatibility': project.targetCompatibility
> )
> }
>
> The code part would be:
>
> this.getClass().getClassLoader().getResourceAsStream("/META-INF/MANIFEST.MF")
>
> Does that look like the right approach?
>
> Gwen
>
> On Tue, Nov 11, 2014 at 10:43 AM, Bhavesh Mistry <
> mistry.p.bhav...@gmail.com
> > wrote:
>
> > If it is a Maven artifact then you get the following pre-built property
> > file, pom.properties, generated by the Maven build under
> > /META-INF/maven/groupId/artifactId/pom.properties.
> >
> > Here is sample:
> > #Generated by Maven
> > #Mon Oct 10 10:44:31 EDT 2011
> > version=10.0.1
> > groupId=com.google.guava
> > artifactId=guava
> >
> > Thanks,
> >
> > Bhavesh
> >
> > On Tue, Nov 11, 2014 at 10:34 AM, Gwen Shapira 
> > wrote:
> >
> > > In Sqoop we do the following:
> > >
> > > Maven runs a shell script, passing the version as a parameter.
> > > The shell-script generates a small java class, which is then built
> with a
> > > Maven plugin.
> > > Our code references this generated class when we expose "getVersion()".
> > >
> > > It's complex and ugly, so I'm kind of hoping that there's a better way
> to
> > do
> > > it :)
> > >
> > > Gwen
> > >
> > > On Tue, Nov 11, 2014 at 9:42 AM, Jun Rao  wrote:
> > >
> > > > Currently, the version number is only stored in our build config
> file,
> > > > gradle.properties. Not sure how we can automatically extract it and
> > > expose
> > > > it in an mbean. How do other projects do this?
> > > >
> > > > Thanks,
> > > >
> > > > Jun
> > > >
> > > > On Tue, Nov 11, 2014 at 7:05 AM, Otis Gospodnetic <
> > > > otis.gospodne...@gmail.com> wrote:
> > > >
> > > > > Hi Jun,
> > > > >
> > > > > Sounds good.  But is the version number stored anywhere from where
> it
> > > > could
> > > > > be gotten?
> > > > >
> > > > > Thanks,
> > > > > Otis
> > > > > --
> > > > > Monitoring * Alerting * Anomaly Detection * Centralized Log
> > Management
> > > > > Solr & Elasticsearch Support * http://sematext.com/
> > > > >
> > > > >
> > > > > On Tue, Nov 11, 2014 at 12:45 AM, Jun Rao 
> wrote:
> > > > >
> > > > > > Otis,
> > > > > >
> > > > > > We don't have an api for that now. We can probably expose this
> as a
> > > JMX
> > > > > as
> > > > > > part of kafka-1481.
> > > > > >
> > > > > > Thanks,
> > > > > >
> > > > > > Jun
> > > > > >
> > > > > > On Mon, Nov 10, 2014 at 7:17 PM, Otis Gospodnetic <
> > > > > > otis.gospodne...@gmail.com> wrote:
> > > > > >
> > > > > > > Hi,
> > > > > > >
> > > > > > > Is there a way to detect which version of Kafka one is running?
> > > > > > > Is there an API for that, or a constant with this value, or
> maybe
> > > an
> > > > > > MBean
> > > > > > > or some other way to get to this info?
> > > > > > >
> > > > > > > Thanks,
> > > > > > > Otis
> > > > > > > --
> > > > > > > Monitoring * Alerting * Anomaly Detection * Centralized Log
> > > > Management
> > > > > > > Solr & Elasticsearch Support * http://sematext.com/
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>



-- 
-- Guozhang
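One caveat on the getResourceAsStream snippet above: ClassLoader resource paths are already absolute, so the leading "/" should be dropped ("META-INF/MANIFEST.MF"), and on a classpath with many jars this can return another jar's manifest, so it is safer to resolve the manifest from the jar containing a known Kafka class. The parsing side is simple; a sketch of reading Implementation-Version from manifest text (the attribute values below are illustrative):

```python
def parse_manifest(text):
    """Parse MANIFEST.MF-style 'Key: value' lines into a dict; per the JAR
    spec, a line starting with a single space continues the previous value."""
    attrs = {}
    last_key = None
    for line in text.splitlines():
        if line.startswith(" ") and last_key:
            attrs[last_key] += line[1:]          # continuation line
        elif ":" in line:
            key, _, value = line.partition(":")
            last_key = key.strip()
            attrs[last_key] = value.strip()
    return attrs

manifest = """Manifest-Version: 1.0
Implementation-Title: kafka
Implementation-Version: 0.8.2
Built-JDK: 1.7.0_67
"""
print(parse_manifest(manifest)["Implementation-Version"])  # prints 0.8.2
```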


Re: Order of consumed messages in the same partition.

2014-11-12 Thread Guozhang Wang
Hi Filippo,

By saying "offset", do you mean the offset field inside the message, or do
you keep track of the ordering on the producer end and find it inconsistent
with what is observed on the consumer end?

Guozhang




-- 
-- Guozhang


Re: 0.8.2 producer with 0.8.1.1 cluster?

2014-11-12 Thread Guozhang Wang
Shlomi,

It should be compatible; did you see any issues using it against a 0.8.1.1
cluster?

Guozhang




-- 
-- Guozhang


Re: Programmatic Kafka version detection/extraction?

2014-11-12 Thread Mark Roberts
Just to be clear: this is going to be exposed via some API the clients can
call at startup?




Re: spikes in producer requests/sec

2014-11-12 Thread Jun Rao
You can also look into the producer time metrics and see if the time goes
up before the spike of the request rate. If the time does go up, you can
look at the breakdown of the time. Any latency due to local I/O will be
included in the local time.

Thanks,

Jun

On Tue, Nov 11, 2014 at 10:50 AM, Wes Chow  wrote:

>
> We're seeing periodic spikes in req/sec rates across our nodes. Our
> cluster is 10 nodes, and the topic has a replication factor of 3. We push
> around 200k messages / sec into Kafka.
>
>
> The machines are running the most recent version of Kafka and we're
> connecting via librdkafka. pingstream02-10 are using the CMS garbage
> collector, but I switched pingstream01 to use G1GC under the theory that
> maybe these were GC pauses. The graph shows that likely didn't improve the
> situation.
>
> My next thought is that maybe this is the effect of log rolling. Checking
> in the logs, I see a lot of this:
>
> [2014-11-11 13:46:45,836] 72952071 [ReplicaFetcherThread-0-7] INFO
> kafka.log.Log  - Rolled new log segment for 'pings-342' in 3 ms.
> [2014-11-11 13:46:47,116] 72953351 [kafka-request-handler-0] INFO
> kafka.log.Log  - Rolled new log segment for 'pings-186' in 2 ms.
> [2014-11-11 13:46:48,155] 72954390 [ReplicaFetcherThread-0-8] INFO
> kafka.log.Log  - Rolled new log segment for 'pings-253' in 3 ms.
> [2014-11-11 13:46:48,408] 72954643 [ReplicaFetcherThread-0-4] INFO
> kafka.log.Log  - Rolled new log segment for 'pings-209' in 3 ms.
> [2014-11-11 13:46:48,436] 72954671 [ReplicaFetcherThread-0-4] INFO
> kafka.log.Log  - Rolled new log segment for 'pings-299' in 2 ms.
> [2014-11-11 13:46:48,687] 72954922 [kafka-request-handler-0] INFO
> kafka.log.Log  - Rolled new log segment for 'pings-506' in 2 ms.
>
> The "pings" topic in question has 512 partitions, so it does this 512
> times every so often. We have an effective retention period of a bit less
> than 30 min, so rolling happens pretty frequently. Still, if I assume worst
> case that rolling locks up the process for 2ms and there are 512 rolls
> every few minutes, I'd expect halting to happen for about a second at a
> time. The graphs seem to indicate much longer dips, but it's hard for me to
> know if I'm looking at real data or some sort of artifact.
>
> Fwiw, the producers are not reporting any errors, so it does not seem like
> we're losing data.
>
> I'm new to Kafka. Should I be worried? If so, how should I be debugging
> this?
>
> Thanks,
> Wes
>
>
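The back-of-the-envelope estimate in the last quoted paragraph can be made explicit; assuming every roll blocks a request handler for the full reported time and all 512 rolls land back-to-back (the worst case):

```python
def roll_stall_seconds(partitions, roll_ms):
    """Worst-case cumulative stall if every partition rolls once and each
    roll blocks a request handler for roll_ms milliseconds, back-to-back."""
    return partitions * roll_ms / 1000.0

# 512 partitions rolling at ~2 ms each: about one second in the worst case,
# which matches the "about a second at a time" estimate in the message.
print(roll_stall_seconds(512, 2))  # prints 1.024
```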


Re: Programmatic Kafka version detection/extraction?

2014-11-12 Thread Gwen Shapira
Good question.

The server will need to expose this in the protocol, so Kafka clients will
know what they are talking to.

We may also want to expose this in the producer and consumer, so people who
use Kafka's built-in clients will know which version they have in the
environment.



On Wed, Nov 12, 2014 at 9:09 AM, Mark Roberts  wrote:

> Just to be clear: this is going to be exposed via some Api the clients can
> call at startup?
>


Re: Programmatic Kafka version detection/extraction?

2014-11-12 Thread Gwen Shapira
Actually, Jun suggested exposing this via JMX.



Re: Programmatic Kafka version detection/extraction?

2014-11-12 Thread Otis Gospodnetic
Right, this should get exposed in JMX for:
* Producer
* Broker
* Consumer

Otis
--
Monitoring * Alerting * Anomaly Detection * Centralized Log Management
Solr & Elasticsearch Support * http://sematext.com/


On Wed, Nov 12, 2014 at 12:34 PM, Gwen Shapira 
wrote:

> Actually, Jun suggested exposing this via JMX.
>

Re: Programmatic Kafka version detection/extraction?

2014-11-12 Thread Joel Koshy
+1 on the JMX + gradle properties. Is there any (seamless) way of
including the exact git hash? That would be extremely useful if users
need help debugging and happen to be on an unreleased build (say, off
trunk).
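One seamless option is to have the build shell out to git and bake the result into the same manifest/properties attributes; a sketch of assembling such a version string (shown in Python for brevity, with a fallback for builds outside a git checkout; the Gradle build would do the equivalent at jar time):

```python
import subprocess

def build_version(base_version):
    """Append the current git commit (plus a -dirty marker for uncommitted
    changes) to a base version, falling back to the bare version when not
    built from a git checkout."""
    try:
        sha = subprocess.check_output(
            ["git", "rev-parse", "--short", "HEAD"],
            stderr=subprocess.DEVNULL).decode().strip()
        dirty = subprocess.call(
            ["git", "diff", "--quiet"], stderr=subprocess.DEVNULL) != 0
        return "%s+git.%s%s" % (base_version, sha, "-dirty" if dirty else "")
    except Exception:
        return base_version

print(build_version("0.8.2"))  # e.g. 0.8.2+git.1a2b3c4
```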

On Wed, Nov 12, 2014 at 09:34:35AM -0800, Gwen Shapira wrote:
> Actually, Jun suggested exposing this via JMX.

Re: Programmatic Kafka version detection/extraction?

2014-11-12 Thread Mark Roberts
I haven't worked much with JMX before, but some quick googling (10-20
minutes) is very inconclusive as to how I would go about getting the server
version I'm connecting to from a Python client.  Can someone please
reassure me that it's relatively trivial for non Java clients to query JMX
for the server version of every server in the cluster? Is there a reason
not to include this in the API itself?

-Mark

On Wed, Nov 12, 2014 at 9:50 AM, Joel Koshy  wrote:

> +1 on the JMX + gradle properties. Is there any (seamless) way of
> including the exact git hash? That would be extremely useful if users
> need help debugging and happen to be on an unreleased build (say, off
> trunk)
>
> On Wed, Nov 12, 2014 at 09:34:35AM -0800, Gwen Shapira wrote:
> > Actually, Jun suggested exposing this via JMX.
> >
> > On Wed, Nov 12, 2014 at 9:31 AM, Gwen Shapira 
> wrote:
> >
> > > Good question.
> > >
> > > The server will need to expose this in the protocol, so Kafka clients
> will
> > > know what they are talking to.
> > >
> > > We may also want to expose this in the producer and consumer, so people
> > > who use Kafka's built-in clients will know which version they have in
> the
> > > environment.
> > >
> > >
> > >
> > > On Wed, Nov 12, 2014 at 9:09 AM, Mark Roberts 
> wrote:
> > >
> > >> Just to be clear: this is going to be exposed via some Api the clients
> > >> can call at startup?
> > >>
> > >>
> > >> > On Nov 12, 2014, at 08:59, Guozhang Wang 
> wrote:
> > >> >
> > >> > Sounds great, +1 on this.
> > >> >
> > >> >> On Tue, Nov 11, 2014 at 1:36 PM, Gwen Shapira <
> gshap...@cloudera.com>
> > >> wrote:
> > >> >>
> > >> >> So it looks like we can use Gradle to add properties to manifest
> file
> > >> and
> > >> >> then use getResourceAsStream to read the file and parse it.
> > >> >>
> > >> >> The Gradle part would be something like:
> > >> >> jar.manifest {
> > >> >>attributes('Implementation-Title': project.name,
> > >> >>'Implementation-Version': project.version,
> > >> >>'Built-By': System.getProperty('user.name'),
> > >> >>'Built-JDK': System.getProperty('java.version'),
> > >> >>'Built-Host': getHostname(),
> > >> >>'Source-Compatibility': project.sourceCompatibility,
> > >> >>'Target-Compatibility': project.targetCompatibility
> > >> >>)
> > >> >>}
> > >> >>
> > >> >> The code part would be:
> > >> >>
> > >> >>
> > >>
> this.getClass().getClassLoader().getResourceAsStream("/META-INF/MANIFEST.MF")
> > >> >>
> > >> >> Does that look like the right approach?
> > >> >>
> > >> >> Gwen
> > >> >>
> > >> >> On Tue, Nov 11, 2014 at 10:43 AM, Bhavesh Mistry <
> > >> >> mistry.p.bhav...@gmail.com
> > >> >>> wrote:
> > >> >>
> > >> >>> If it is a Maven artifact, then you will get the following pre-built
> > >> >>> property file from the Maven build, called pom.properties, under the
> > >> >>> /META-INF/maven/groupId/artifactId/pom.properties folder.
> > >> >>>
> > >> >>> Here is sample:
> > >> >>> #Generated by Maven
> > >> >>> #Mon Oct 10 10:44:31 EDT 2011
> > >> >>> version=10.0.1
> > >> >>> groupId=com.google.guava
> > >> >>> artifactId=guava
> > >> >>>
> > >> >>> Thanks,
> > >> >>>
> > >> >>> Bhavesh
> > >> >>>
> > >> >>> On Tue, Nov 11, 2014 at 10:34 AM, Gwen Shapira <
> gshap...@cloudera.com
> > >> >
> > >> >>> wrote:
> > >> >>>
> > >>  In Sqoop we do the following:
> > >> 
> > >>  Maven runs a shell script, passing the version as a parameter.
> > >>  The shell-script generates a small java class, which is then
> built
> > >> >> with a
> > >>  Maven plugin.
> > >>  Our code references this generated class when we expose
> > >> "getVersion()".
> > >> 
> > >>  Its complex and ugly, so I'm kind of hoping that there's a
> better way
> > >> >> to
> > >> >>> do
> > >>  it :)
> > >> 
> > >>  Gwen
> > >> 
> > >> > On Tue, Nov 11, 2014 at 9:42 AM, Jun Rao 
> wrote:
> > >> >
> > >> > Currently, the version number is only stored in our build config
> > >> >> file,
> > >> > gradle.properties. Not sure how we can automatically extract it
> and
> > >>  expose
> > >> > it in an mbean. How do other projects do this?
> > >> >
> > >> > Thanks,
> > >> >
> > >> > Jun
> > >> >
> > >> > On Tue, Nov 11, 2014 at 7:05 AM, Otis Gospodnetic <
> > >> > otis.gospodne...@gmail.com> wrote:
> > >> >
> > >> >> Hi Jun,
> > >> >>
> > >> >> Sounds good.  But is the version number stored anywhere from
> where
> > >> >> it
> > >> > could
> > >> >> be gotten?
> > >> >>
> > >> >> Thanks,
> > >> >> Otis
> > >> >> --
> > >> >> Monitoring * Alerting * Anomaly Detection * Centralized Log
> > >> >>> Management
> > >> >> Solr & Elasticsearch Support * http://sematext.com/
> > >> >>
> > >> >>
> > >> >> On Tue, Nov 11, 2014 at 12:45 AM, Jun Rao 
> > >> >> wrote:
> > >> >>
> > >> >>> Otis,
> > >> >>>
> > >> >
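
For readers following the manifest approach discussed in this thread, below is a minimal, self-contained sketch of reading `Implementation-Version` with `java.util.jar.Manifest`. It parses an inline sample manifest rather than a packaged jar's (the title/version values are placeholders, not real Kafka build metadata). One caveat on the snippet quoted above: `ClassLoader` resource paths must not start with "/", so `getResourceAsStream("/META-INF/MANIFEST.MF")` would return null; drop the leading slash.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.jar.Manifest;

// A minimal sketch of the manifest-reading approach discussed above.
public class VersionSketch {

    // Sample manifest content standing in for a jar's META-INF/MANIFEST.MF;
    // the title and version values here are placeholders.
    public static final String SAMPLE_MANIFEST =
            "Manifest-Version: 1.0\r\n"
            + "Implementation-Title: kafka\r\n"
            + "Implementation-Version: 0.8.2-beta\r\n\r\n";

    // Parses a manifest stream and returns Implementation-Version.
    public static String readVersion(InputStream in) throws IOException {
        Manifest mf = new Manifest(in);
        return mf.getMainAttributes().getValue("Implementation-Version");
    }

    public static void main(String[] args) throws IOException {
        // In a packaged jar you would obtain the stream with:
        //   getClass().getClassLoader()
        //             .getResourceAsStream("META-INF/MANIFEST.MF")
        // (note: no leading "/" on ClassLoader resource paths).
        // Here we parse an inline sample so the sketch is runnable as-is.
        InputStream in = new ByteArrayInputStream(
                SAMPLE_MANIFEST.getBytes("UTF-8"));
        System.out.println("version=" + readVersion(in));
        // prints "version=0.8.2-beta"
    }
}
```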

Kafka in a docker container stops with no errors logged

2014-11-12 Thread Sybrandy, Casey
Hello,

We're using Kafka 0.8.1.1 and we're trying to run it in a Docker container.
For the most part, this has been fine; however, one of the containers has
stopped a couple of times, and when I look at the log output from Docker (e.g.
Kafka's STDOUT), I don't see any errors. At one point it states that the broker
has started, and several minutes later I see log messages stating that it's
shutting down.

Has anyone seen anything like this before?  I don't know if Docker is the 
culprit as two other containers on different nodes don't seem to have any 
issues.

Thanks.

Re: Add partitions with replica assignment in same command

2014-11-12 Thread Allen Wang
I found this JIRA

https://issues.apache.org/jira/browse/KAFKA-1656

Now, we have to use two commands to accomplish the goal - first add
partitions using TopicCommand and then reassign replicas using
ReassignPartitionsCommand. When we first add partitions, will it change the
assignment of replicas for existing partitions? This is what we would like
to avoid. Also, will there be any issues executing the second reassignment
command which will change the assignment again for the new partitions added?




On Sun, Nov 9, 2014 at 9:01 PM, Jun Rao  wrote:

> Yes, it seems that we need to fix the tool to support that. It's probably
> more intuitive to have TopicCommand just take the replica-assignment (for
> the new partitions) when altering a topic. Could you file a jira?
>
> Thanks,
>
> Jun
>
> On Fri, Nov 7, 2014 at 4:17 PM, Allen Wang 
> wrote:
>
> > I am trying to figure out how to add partitions and assign replicas using
> > one admin command. I tried kafka.admin.TopicCommand to increase the
> > partition number from 9 to 12 with the following options:
> >
> > /apps/kafka/bin/kafka-run-class.sh kafka.admin.TopicCommand  --zookeeper
> > ${ZOOKEEPER} --alter --topic test_topic_4 --partitions 12
> > --replica-assignment 2:1,0:2,1:0,1:2,2:0,0:1,1:0,2:1,0:2,2:1,0:2,1:0
> >
> > This gives me an error
> >
> > Option "[replica-assignment]" can't be used with option"[partitions]"
> >
> > Looking into the TopicCommand, alterTopic function seems to be able to
> > handle that but the command exits with the above error before this
> function
> > is invoked.
> >
> > Is there any workaround or other recommended way to achieve this?
> >
> > Thanks,
> > Allen
> >
>


Broker keeps rebalancing

2014-11-12 Thread Chen Wang
Hi there,
My kafka client is reading a 3 partition topic from kafka with 3 threads
distributed on different machines. I am seeing frequent owner changes on
the topics when running:
bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --group
my_test_group --topic mytopic -zkconnect localhost:2181

The owner kept changing once in a while, but I didn't see any exceptions
thrown from the consumer side. When checking the broker log, it's full of
 INFO Closing socket connection to /IP. (kafka.network.Processor)

Is this expected behavior? If so, how can I tell when the leader is
imbalanced and a rebalance is triggered?
Thanks,
Chen


Re: Getting Simple consumer details using MBean

2014-11-12 Thread Jun Rao
Those are for 0.7. In 0.8, you should see something
like FetchRequestRateAndTimeMs in SimpleConsumer.

Thanks,

Jun

On Wed, Nov 12, 2014 at 5:14 AM, Madhukar Bharti 
wrote:

> Hi,
>
> I want to get the simple consumer details using MBean as described here
> <
> https://cwiki.apache.org/confluence/display/KAFKA/Operations#Operations-Monitoring
> >.
> But these bean names are not showing in JConsole as well as while trying to
> read from JMX.
>
> Please help me to get simple consumer details.
>
> I am using Kafka 0.8.1.1 version.
>
>
> Thanks and Regards,
> Madhukar Bharti
>


Re: Add partitions with replica assignment in same command

2014-11-12 Thread Neha Narkhede
When we first add partitions, will it change the
assignment of replicas for existing partitions?

Nope. It should not touch the existing partitions.

Also, will there be any issues executing the second reassignment
command which will change the assignment again for the new partitions added?

No. 2nd reassignment should work as expected.

On Wed, Nov 12, 2014 at 2:24 PM, Allen Wang 
wrote:

> I found this JIRA
>
> https://issues.apache.org/jira/browse/KAFKA-1656
>
> Now, we have to use two commands to accomplish the goal - first add
> partitions using TopicCommand and then reassign replicas using
> ReassignPartitionsCommand. When we first add partitions, will it change the
> assignment of replicas for existing partitions? This is what we would like
> to avoid. Also, will there be any issues executing the second reassignment
> command which will change the assignment again for the new partitions
> added?
>
>
>
>
> On Sun, Nov 9, 2014 at 9:01 PM, Jun Rao  wrote:
>
> > Yes, it seems that we need to fix the tool to support that. It's probably
> > more intuitive to have TopicCommand just take the replica-assignment (for
> > the new partitions) when altering a topic. Could you file a jira?
> >
> > Thanks,
> >
> > Jun
> >
> > On Fri, Nov 7, 2014 at 4:17 PM, Allen Wang 
> > wrote:
> >
> > > I am trying to figure out how to add partitions and assign replicas
> using
> > > one admin command. I tried kafka.admin.TopicCommand to increase the
> > > partition number from 9 to 12 with the following options:
> > >
> > > /apps/kafka/bin/kafka-run-class.sh kafka.admin.TopicCommand
> --zookeeper
> > > ${ZOOKEEPER} --alter --topic test_topic_4 --partitions 12
> > > --replica-assignment 2:1,0:2,1:0,1:2,2:0,0:1,1:0,2:1,0:2,2:1,0:2,1:0
> > >
> > > This gives me an error
> > >
> > > Option "[replica-assignment]" can't be used with option"[partitions]"
> > >
> > > Looking into the TopicCommand, alterTopic function seems to be able to
> > > handle that but the command exits with the above error before this
> > function
> > > is invoked.
> > >
> > > Is there any workaround or other recommended way to achieve this?
> > >
> > > Thanks,
> > > Allen
> > >
> >
>
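
For anyone following this two-step workaround, a sketch of the second step: after increasing the partition count with TopicCommand, the new partitions' replicas can be moved with kafka-reassign-partitions.sh given a JSON file like the one below. The topic name and broker ids are taken from the earlier example; only the new partitions (9-11) are listed, so existing assignments are left untouched.

```json
{"version": 1,
 "partitions": [
   {"topic": "test_topic_4", "partition": 9,  "replicas": [2, 1]},
   {"topic": "test_topic_4", "partition": 10, "replicas": [0, 2]},
   {"topic": "test_topic_4", "partition": 11, "replicas": [1, 0]}
 ]}
```

The file would be passed as something like bin/kafka-reassign-partitions.sh --zookeeper ${ZOOKEEPER} --reassignment-json-file reassign.json --execute; check the tool's --help output for the exact flags in your version.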


Re: Broker keeps rebalancing

2014-11-12 Thread Neha Narkhede
Does this help?
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Whyaretheremanyrebalancesinmyconsumerlog
?

On Wed, Nov 12, 2014 at 3:53 PM, Chen Wang 
wrote:

> Hi there,
> My kafka client is reading a 3 partition topic from kafka with 3 threads
> distributed on different machines. I am seeing frequent owner changes on
> the topics when running:
> bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --group
> my_test_group --topic mytopic -zkconnect localhost:2181
>
> The owner kept changing once in a while, but I didn't see any exceptions
> thrown from the consumer side. When checking the broker log, it's full of
>  INFO Closing socket connection to /IP. (kafka.network.Processor)
>
> Is this expected behavior? If so, how can I tell when the leader is
> imbalanced and a rebalance is triggered?
> Thanks,
> Chen
>
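
The FAQ entry above attributes frequent rebalances mostly to consumer-side GC pauses expiring the ZooKeeper session. A hedged sketch of the 0.8.x high-level consumer properties usually tuned in that case follows; the values are illustrative starting points (the ZooKeeper address is a placeholder, and the group id is taken from the command in the question), not recommendations.

```java
import java.util.Properties;

public class RebalanceTuningSketch {
    // 0.8.x high-level consumer settings commonly tuned when rebalances
    // are frequent, typically because GC pauses expire the ZooKeeper
    // session. Check the 0.8 configuration docs for the defaults.
    public static Properties consumerProps() {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181"); // placeholder
        props.put("group.id", "my_test_group");
        // A longer session timeout tolerates longer GC pauses.
        props.put("zookeeper.session.timeout.ms", "30000");
        // Keep retries * backoff comfortably above the session timeout so
        // a rebalance can ride out a session re-establishment.
        props.put("rebalance.max.retries", "8");
        props.put("rebalance.backoff.ms", "5000");
        return props;
    }

    public static void main(String[] args) {
        // These props would feed kafka.consumer.ConsumerConfig in an
        // 0.8.x application.
        System.out.println("session.timeout=" +
                consumerProps().getProperty("zookeeper.session.timeout.ms"));
        // prints "session.timeout=30000"
    }
}
```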


Re: Add partitions with replica assignment in same command

2014-11-12 Thread Jun Rao
If you add partitions, existing partitions' assignment won't change.
Changing the replica assignment subsequently should be fine.

Thanks,

Jun

On Wed, Nov 12, 2014 at 2:24 PM, Allen Wang 
wrote:

> I found this JIRA
>
> https://issues.apache.org/jira/browse/KAFKA-1656
>
> Now, we have to use two commands to accomplish the goal - first add
> partitions using TopicCommand and then reassign replicas using
> ReassignPartitionsCommand. When we first add partitions, will it change the
> assignment of replicas for existing partitions? This is what we would like
> to avoid. Also, will there be any issues executing the second reassignment
> command which will change the assignment again for the new partitions
> added?
>
>
>
>
> On Sun, Nov 9, 2014 at 9:01 PM, Jun Rao  wrote:
>
> > Yes, it seems that we need to fix the tool to support that. It's probably
> > more intuitive to have TopicCommand just take the replica-assignment (for
> > the new partitions) when altering a topic. Could you file a jira?
> >
> > Thanks,
> >
> > Jun
> >
> > On Fri, Nov 7, 2014 at 4:17 PM, Allen Wang 
> > wrote:
> >
> > > I am trying to figure out how to add partitions and assign replicas
> using
> > > one admin command. I tried kafka.admin.TopicCommand to increase the
> > > partition number from 9 to 12 with the following options:
> > >
> > > /apps/kafka/bin/kafka-run-class.sh kafka.admin.TopicCommand
> --zookeeper
> > > ${ZOOKEEPER} --alter --topic test_topic_4 --partitions 12
> > > --replica-assignment 2:1,0:2,1:0,1:2,2:0,0:1,1:0,2:1,0:2,2:1,0:2,1:0
> > >
> > > This gives me an error
> > >
> > > Option "[replica-assignment]" can't be used with option"[partitions]"
> > >
> > > Looking into the TopicCommand, alterTopic function seems to be able to
> > > handle that but the command exits with the above error before this
> > function
> > > is invoked.
> > >
> > > Is there any workaround or other recommended way to achieve this?
> > >
> > > Thanks,
> > > Allen
> > >
> >
>


Re: Getting Simple consumer details using MBean

2014-11-12 Thread Madhukar Bharti
Hi Jun Rao,

Thanks for your quick reply.

I am not able to see any bean named "SimpleConsumer". Is there any
configuration related to this?

How can I see this bean listed in the JConsole window?


Thanks and Regards
Madhukar

On Thu, Nov 13, 2014 at 6:06 AM, Jun Rao  wrote:

> Those are for 0.7. In 0.8, you should see something
> like FetchRequestRateAndTimeMs in SimpleConsumer.
>
> Thanks,
>
> Jun
>
> On Wed, Nov 12, 2014 at 5:14 AM, Madhukar Bharti  >
> wrote:
>
> > Hi,
> >
> > I want to get the simple consumer details using MBean as described here
> > <
> >
> https://cwiki.apache.org/confluence/display/KAFKA/Operations#Operations-Monitoring
> > >.
> > But these bean names are not showing in JConsole as well as while trying
> to
> > read from JMX.
> >
> > Please help me to get simple consumer details.
> >
> > I am using Kafka 0.8.1.1 version.
> >
> >
> > Thanks and Regards,
> > Madhukar Bharti
> >
>



-- 
Thanks and Regards,
Madhukar Bharti
Mob: 7845755539
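
As a starting point for browsing these beans programmatically rather than in JConsole, here is a hedged sketch of an ObjectName pattern query against an MBean server. It queries the local platform server so it runs standalone; against a broker or consumer JVM you would first connect remotely with JMXConnectorFactory. The service URL, port, and the exact kafka.* domains and bean names vary by Kafka version, so treat those as assumptions and confirm them in JConsole.

```java
import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxQuerySketch {

    // Returns all MBean names matching a pattern, e.g. "kafka.consumer:*"
    // on a consumer JVM (exact domains differ across Kafka versions).
    public static Set<ObjectName> query(MBeanServer server, String pattern)
            throws Exception {
        return server.queryNames(new ObjectName(pattern), null);
    }

    public static void main(String[] args) throws Exception {
        // A remote connection to a Kafka JVM would look like this
        // (hypothetical host and JMX port):
        //   JMXServiceURL url = new JMXServiceURL(
        //       "service:jmx:rmi:///jndi/rmi://broker-host:9999/jmxrmi");
        //   MBeanServerConnection conn =
        //       JMXConnectorFactory.connect(url).getMBeanServerConnection();
        // Here we query the local platform server so the sketch is runnable.
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        Set<ObjectName> names = query(server, "java.lang:*");
        System.out.println("beans found: " + names.size());
    }
}
```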


Re: 0.8.2 producer with 0.8.1.1 cluster?

2014-11-12 Thread Shlomi Hazan
I was asking to find out whether there's a point in trying...
From your answer I understand the answer is yes.
10x,
Shlomi

On Wed, Nov 12, 2014 at 7:04 PM, Guozhang Wang  wrote:

> Shlomi,
>
> It should be compatible, did you see any issues using it against a 0.8.1.1
> cluster?
>
> Guozhang
>
> On Wed, Nov 12, 2014 at 5:43 AM, Shlomi Hazan  wrote:
>
> > Hi,
> > Is the new producer 0.8.2 supposed to work with 0.8.1.1 cluster?
> > Shlomi
> >
>
>
>
> --
> -- Guozhang
>


Re: first message is lost

2014-11-12 Thread Yonghui Zhao
Got it, thanks Guozhang.

On Thu Nov 13 2014 at 12:04:17 AM Guozhang Wang  wrote:

> Yonghui,
>
> If consumer is not set with --from-beginning, then this scenario is
> expected: KAFKA-1006 
>
> We are still figuring what is the best way to resolve this issue.
>
> Guozhang
>
> On Wed, Nov 12, 2014 at 12:35 AM, Yonghui Zhao 
> wrote:
>
> > Hi,
> >
> > For a non-existent topic,  the consumer and producer are set up.
> > Then if the producer sends the first message, producer gets this
> exception:
> >
> > [2014-11-12 16:24:28,041] WARN Error while fetching metadata
> > [{TopicMetadata for topic test5 ->
> > No partition metadata for topic test5 due to
> > kafka.common.LeaderNotAvailableException}] for topic [test5]: class
> > kafka.common.LeaderNotAvailableException
> >  (kafka.producer.BrokerPartitionInfo)
> > [2014-11-12 16:24:28,061] WARN Error while fetching metadata
> > [{TopicMetadata for topic test5 ->
> > No partition metadata for topic test5 due to
> > kafka.common.LeaderNotAvailableException}] for topic [test5]: class
> > kafka.common.LeaderNotAvailableException
> >  (kafka.producer.BrokerPartitionInfo)
> > [2014-11-12 16:24:28,062] ERROR Failed to collate messages by topic,
> > partition due to: Failed to fetch topic metadata for topic: test5
> > (kafka.producer.async.DefaultEventHandler)
> >
> >
> > And the consumer won't get this message.  But the second message and
> later
> > is ok.
> >
> > We don't change server.properties so auto.create.topics.enable should be
> > true.
> >
> > But if we set up consumer with --from-begin flag,  such like
> > bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test5
> > * --from-begin*
> > the exception is still there but consumer will get the first message.
> >
> > My question is how to avoid the loss.
> >
> > Thanks!
> >
>
>
>
> --
> -- Guozhang
>
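
Until KAFKA-1006 is resolved, the usual ways to avoid the loss are (a) create the topic explicitly before starting the consumer (e.g. with bin/kafka-topics.sh --create in 0.8.1), so the consumer's offset exists before the first send, or (b) configure the consumer with auto.offset.reset=smallest, the programmatic equivalent of the console consumer's --from-beginning flag. A hedged sketch of option (b) for the 0.8.x high-level consumer follows; the ZooKeeper address and group id are placeholders.

```java
import java.util.Properties;

public class ConsumerConfigSketch {
    // 0.8.x high-level consumer properties. auto.offset.reset=smallest
    // makes a consumer with no committed offset start from the earliest
    // available message (the --from-beginning behavior), so the first
    // message of an auto-created topic is not skipped.
    public static Properties consumerProps() {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181"); // placeholder
        props.put("group.id", "test-group");              // placeholder
        props.put("auto.offset.reset", "smallest");
        return props;
    }

    public static void main(String[] args) {
        // These props would be passed to kafka.consumer.ConsumerConfig /
        // Consumer.createJavaConsumerConnector in an 0.8.x application.
        System.out.println("reset=" +
                consumerProps().getProperty("auto.offset.reset"));
        // prints "reset=smallest"
    }
}
```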


Re: 0.8.2 producer with 0.8.1.1 cluster?

2014-11-12 Thread cac...@gmail.com
I used the 0.8.2 producer with a 0.8.1 cluster in a non-production
environment. No problems to report: it worked great, though my testing at
that time was not particularly extensive for failure scenarios.

Christian

On Wed, Nov 12, 2014 at 10:37 PM, Shlomi Hazan  wrote:

> I was asking to know if there's a point in trying...
> From your answer I understand the answer is yes.
> 10x,
> Shlomi
>
> On Wed, Nov 12, 2014 at 7:04 PM, Guozhang Wang  wrote:
>
> > Shlomi,
> >
> > It should be compatible, did you see any issues using it against a
> 0.8.1.1
> > cluster?
> >
> > Guozhang
> >
> > On Wed, Nov 12, 2014 at 5:43 AM, Shlomi Hazan  wrote:
> >
> > > Hi,
> > > Is the new producer 0.8.2 supposed to work with 0.8.1.1 cluster?
> > > Shlomi
> > >
> >
> >
> >
> > --
> > -- Guozhang
> >
>