Re: restoring broker node without Zookeeper data?

2014-08-22 Thread Jun Rao
No, currently there is no way to rebuild the ZK data from the broker. You
will need to back up the ZK data too.

Thanks,

Jun


On Fri, Aug 22, 2014 at 9:26 PM, Ran RanUser  wrote:

> Running isolated primary, secondary, etc, Kafka brokers without replication
> (clients failover to other Kafka brokers if needed).  Each broker instance
> has a single local Zookeeper instance.
>
> Using periodic backups/snapshots of topic data across locations for
> disaster recovery.
>
> It appears that without the Zookeeper data, we will be unable to restore a Kafka
> broker to another server even though we have all its topic data and logs.
> Is there a way for Zookeeper to auto-bootstrap the topic data and high
> watermark?  What is the procedure?  We are strictly using the simple consumer.
>
>
> Thank you!
>
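Jun's point above — the topic data alone is not enough, so the local ZooKeeper dataDir has to be captured in the same backup — can be sketched as below. This is a minimal illustration, not an official procedure: the paths are placeholders (assumptions) for your configured Kafka `log.dirs` and ZooKeeper `dataDir`, and for a consistent snapshot you would normally stop or quiesce the broker and ZooKeeper first.

```java
import java.io.IOException;
import java.nio.file.*;
import java.nio.file.attribute.BasicFileAttributes;

// Sketch: snapshot the Kafka log dir and the local ZooKeeper dataDir together,
// so a broker can later be restored with its cluster metadata intact.
public class SnapshotDirs {

    // Recursively copy src into dst, preserving the relative layout.
    static void copyTree(final Path src, final Path dst) throws IOException {
        Files.walkFileTree(src, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes a)
                    throws IOException {
                Files.createDirectories(dst.resolve(src.relativize(dir)));
                return FileVisitResult.CONTINUE;
            }
            @Override
            public FileVisitResult visitFile(Path f, BasicFileAttributes a)
                    throws IOException {
                Files.copy(f, dst.resolve(src.relativize(f)),
                        StandardCopyOption.REPLACE_EXISTING);
                return FileVisitResult.CONTINUE;
            }
        });
    }

    public static void main(String[] args) throws IOException {
        // Placeholder paths (assumptions): substitute server.properties log.dirs
        // and the zoo.cfg dataDir on your hosts.
        Path kafkaLogs = Paths.get(args.length > 0 ? args[0] : "kafka-logs");
        Path zkData = Paths.get(args.length > 1 ? args[1] : "zookeeper-data");
        Path backup = Paths.get(args.length > 2 ? args[2]
                : "backup-" + System.currentTimeMillis());
        for (Path src : new Path[] { kafkaLogs, zkData }) {
            if (Files.exists(src)) {
                copyTree(src, backup.resolve(src.getFileName().toString()));
                System.out.println("backed up " + src + " -> " + backup);
            } else {
                System.out.println("skipping " + src + " (not present here)");
            }
        }
    }
}
```

Restoring is the reverse: copy both trees back before starting ZooKeeper and the broker, so the broker id, topic registrations, and high watermarks in ZK line up with the restored log segments.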


Re: reading messages with SimpleConsumer: cannot start from an assigned offset

2014-08-22 Thread Jun Rao
What's the code at line 164 of SimpleConsumerClient.getLastOffset()?

Thanks,

Jun


On Fri, Aug 22, 2014 at 7:36 AM, chenlax  wrote:

> The kafka version is 0.8.0; the code is as in
> https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+SimpleConsumer+Example
>
> when I assign an offset, it throws
> java.lang.ArrayIndexOutOfBoundsException: 0
>
> java.lang.ArrayIndexOutOfBoundsException: 0
> at
> com.chen.test.consumer.SimpleConsumerClient.getLastOffset(SimpleConsumerClient.java:164)
> at
> com.chen.test.consumer.SimpleConsumerClient.run(SimpleConsumerClient.java:80)
> at
> com.chen.test.consumer.SimpleConsumerClient.main(SimpleConsumerClient.java:48)
> ##EarliestTime : -2 || -1  The Start Offset : -2
> Error fetching data from the Broker:10.209.22.206 Reason: 1
>
> how can I get the message at the offset I assigned?
>
> Thanks,
> Lax
>
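For context on Jun's question: in the 0.8.0 wiki example, `getLastOffset` indexes the offsets array without checking for an error first. If the OffsetRequest fails, `offsets(topic, partition)` comes back empty and `offsets[0]` throws exactly this `ArrayIndexOutOfBoundsException: 0`. Below is a hedged sketch of a defensive version; the class name and surrounding `SimpleConsumerClient` code are assumptions, it needs the 0.8.0 kafka jars and a reachable broker, so it is illustrative rather than drop-in.

```java
import java.util.HashMap;
import java.util.Map;
import kafka.api.PartitionOffsetRequestInfo;
import kafka.common.TopicAndPartition;
import kafka.javaapi.OffsetResponse;
import kafka.javaapi.consumer.SimpleConsumer;

public class OffsetLookup {
    // Resolve a real starting offset from a sentinel time value:
    // whichTime = -1 (kafka.api.OffsetRequest.LatestTime()) or
    //             -2 (kafka.api.OffsetRequest.EarliestTime()).
    public static long getLastOffset(SimpleConsumer consumer, String topic,
                                     int partition, long whichTime, String clientName) {
        TopicAndPartition tp = new TopicAndPartition(topic, partition);
        Map<TopicAndPartition, PartitionOffsetRequestInfo> requestInfo =
                new HashMap<TopicAndPartition, PartitionOffsetRequestInfo>();
        requestInfo.put(tp, new PartitionOffsetRequestInfo(whichTime, 1));
        kafka.javaapi.OffsetRequest request = new kafka.javaapi.OffsetRequest(
                requestInfo, kafka.api.OffsetRequest.CurrentVersion(), clientName);
        OffsetResponse response = consumer.getOffsetsBefore(request);

        // If the request failed, offsets(topic, partition) is EMPTY, and
        // indexing [0] unguarded throws ArrayIndexOutOfBoundsException: 0 --
        // the likely culprit at SimpleConsumerClient.java:164.
        if (response.hasError()) {
            throw new RuntimeException("Offset lookup failed, error code: "
                    + response.errorCode(topic, partition));
        }
        long[] offsets = response.offsets(topic, partition);
        if (offsets.length == 0) {
            throw new RuntimeException("No offset returned for " + tp);
        }
        return offsets[0];
    }
}
```

The `The Start Offset : -2` line in the output also suggests the sentinel value (-2) may be reaching the fetch directly; -2/-1 are only meaningful inside the OffsetRequest above, and the resolved offset it returns is what the FetchRequest should use. Fetching with an out-of-range offset yields error code 1 (OffsetOutOfRange), matching the last log line.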


restoring broker node without Zookeeper data?

2014-08-22 Thread Ran RanUser
Running isolated primary, secondary, etc, Kafka brokers without replication
(clients failover to other Kafka brokers if needed).  Each broker instance
has a single local Zookeeper instance.

Using periodic backups/snapshots of topic data across locations for
disaster recovery.

It appears that without the Zookeeper data, we will be unable to restore a Kafka
broker to another server even though we have all its topic data and logs.
Is there a way for Zookeeper to auto-bootstrap the topic data and high
watermark?  What is the procedure?  We are strictly using the simple consumer.


Thank you!


Re: Kafka build for Scala 2.11

2014-08-22 Thread François Langelier
Balaji, look in the tags



François Langelier
Étudiant en génie Logiciel - École de Technologie Supérieure

Capitaine Club Capra 
VP-Communication - CS Games  2014
Jeux de Génie  2011 à 2014
Magistrat Fraternité du Piranha 
Comité Organisateur Olympiades ÉTS 2012
Compétition Québécoise d'Ingénierie 2012 - Compétition Senior


On Fri, Aug 22, 2014 at 3:00 PM, Seshadri, Balaji 
wrote:

> Joe,
>
> We don't have a pressing need to move to 2.11; we just wanted to upgrade to
> the latest Scala, as our consumers are going to be in Scala for our next release.
>
> Thanks,
>
> Balaji
>
> -Original Message-
> From: Joe Stein [mailto:joe.st...@stealth.ly]
> Sent: Friday, August 22, 2014 12:19 PM
> To: users@kafka.apache.org
> Subject: Re: Kafka build for Scala 2.11
>
> The changes are committed to trunk.  We didn't create the patch for
> 0.8.1.1 since there were code changes required and we dropped support for
> Scala 2.8 (so we could just upload the artifacts without a vote).
>
>
> https://issues.apache.org/jira/secure/attachment/12660369/KAFKA-1419_2014-08-07_10%3A52%3A18.patch
> is the version you want.
>
> If this is pressing for folks who can't wait for 0.8.2 or don't want to
> upgrade right away, then doing a 0.8.1.2 release is an option... maybe with some
> other things too, e.g. empty source jars. I would prepare and vote on it
> if others would too.
>
> /***
> Joe Stein
> Founder, Principal Consultant
> Big Data Open Source Security LLC
> http://www.stealth.ly
> Twitter: @allthingshadoop
> /
> On Aug 22, 2014 1:03 PM, "Seshadri, Balaji" 
> wrote:
>
> > Hi Team,
> >
> > We are trying to compile 0.8.1.1 with Scala 2.11 and it's giving me
> > compilation errors.
> >
> > Please let me know which patch I should apply from the JIRA below. I tried
> > the latest one and it failed to apply.
> >
> > https://issues.apache.org/jira/browse/KAFKA-1419
> >
> > Thanks,
> >
> > Balaji
> >
>


Re: Please fix broken link and update blurb

2014-08-22 Thread Gwen Shapira
Done :)

On Fri, Aug 22, 2014 at 11:22 AM, Igal Levy  wrote:
> Could someone with edit rights in the wiki please fix the broken link for
> Metamarkets that's on this page:
>
> https://cwiki.apache.org/confluence/display/KAFKA/Powered+By
>
> The right URL is: http://metamarkets.com
>
> Also, if you could update the Metamarkets blurb to:
>
> "We use Kafka to ingest real-time event data, stream it to Storm and
> Hadoop, and then serve it from our Druid cluster to feed our interactive
> analytics dashboards. We've also built connectors
> (http://druid.io/docs/latest/Kafka-Eight.html) for directly
> ingesting events from Kafka into Druid."
>
> Thanks!
>
> Igal Levy
> Documentation | METAMARKETS
> m 925.487.9452 | t @textractor
> igal.l...@metamarkets.com


RE: Kafka build for Scala 2.11

2014-08-22 Thread Seshadri, Balaji
Joe,

We don't have a pressing need to move to 2.11; we just wanted to upgrade to the
latest Scala, as our consumers are going to be in Scala for our next release.

Thanks,

Balaji

-Original Message-
From: Joe Stein [mailto:joe.st...@stealth.ly] 
Sent: Friday, August 22, 2014 12:19 PM
To: users@kafka.apache.org
Subject: Re: Kafka build for Scala 2.11

The changes are committed to trunk.  We didn't create the patch for 0.8.1.1
since there were code changes required and we dropped support for Scala 2.8
(so we could just upload the artifacts without a vote).

https://issues.apache.org/jira/secure/attachment/12660369/KAFKA-1419_2014-08-07_10%3A52%3A18.patch
is the version you want.

If this is pressing for folks who can't wait for 0.8.2 or don't want to upgrade
right away, then doing a 0.8.1.2 release is an option... maybe with some
other things too, e.g. empty source jars. I would prepare and vote on it
if others would too.

/***
Joe Stein
Founder, Principal Consultant
Big Data Open Source Security LLC
http://www.stealth.ly
Twitter: @allthingshadoop
/
On Aug 22, 2014 1:03 PM, "Seshadri, Balaji" 
wrote:

> Hi Team,
>
> We are trying to compile 0.8.1.1 with Scala 2.11 and it's giving me
> compilation errors.
>
> Please let me know which patch I should apply from the JIRA below. I tried
> the latest one and it failed to apply.
>
> https://issues.apache.org/jira/browse/KAFKA-1419
>
> Thanks,
>
> Balaji
>


RE: Kafka build for Scala 2.11

2014-08-22 Thread Seshadri, Balaji
I did not see a 0.8.1.1 branch; I see only 0.8.1 on GitHub.

Can you please send me the git commands you used?

-Original Message-
From: Jonathan Weeks [mailto:jonathanbwe...@gmail.com] 
Sent: Friday, August 22, 2014 12:08 PM
To: users@kafka.apache.org
Subject: Re: Kafka build for Scala 2.11

I hand-applied this patch https://reviews.apache.org/r/23895/diff/ to the kafka 
0.8.1.1 branch and was able to build successfully:

gradlew -PscalaVersion=2.11.2 -PscalaCompileOptions.useAnt=false releaseTarGz -x signArchives

I am testing the jar now, and will let you know if I can run 
producers/consumers against a vanilla 0.8.1.1 broker cluster with it...

-Jonathan

On Aug 22, 2014, at 11:02 AM, Seshadri, Balaji  wrote:

> Hi Team,
> 
> We are trying to compile 0.8.1.1 with Scala 2.11 and it's giving me
> compilation errors.
> 
> Please let me know which patch I should apply from the JIRA below. I tried
> the latest one and it failed to apply.
> 
> https://issues.apache.org/jira/browse/KAFKA-1419
> 
> Thanks,
> 
> Balaji



Re: Kafka build for Scala 2.11

2014-08-22 Thread Jonathan Weeks
+1 on a 0.8.1.2 release with support for Scala 2.11.x.

-Jonathan


On Aug 22, 2014, at 11:19 AM, Joe Stein  wrote:

> The changes are committed to trunk.  We didn't create the patch for 0.8.1.1
> since there were code changes required and we dropped support for Scala 2.8
> (so we could just upload the artifacts without a vote).
> 
> https://issues.apache.org/jira/secure/attachment/12660369/KAFKA-1419_2014-08-07_10%3A52%3A18.patch
> is the version you want.
> 
> If this is pressing for folks who can't wait for 0.8.2 or don't want to
> upgrade right away, then doing a 0.8.1.2 release is an option... maybe with some
> other things too, e.g. empty source jars. I would prepare and vote on it
> if others would too.
> 
> /***
> Joe Stein
> Founder, Principal Consultant
> Big Data Open Source Security LLC
> http://www.stealth.ly
> Twitter: @allthingshadoop
> /
> On Aug 22, 2014 1:03 PM, "Seshadri, Balaji" 
> wrote:
> 
>> Hi Team,
>> 
>> We are trying to compile 0.8.1.1 with Scala 2.11 and it's giving me
>> compilation errors.
>> 
>> Please let me know which patch I should apply from the JIRA below. I tried
>> the latest one and it failed to apply.
>> 
>> https://issues.apache.org/jira/browse/KAFKA-1419
>> 
>> Thanks,
>> 
>> Balaji
>> 



Please fix broken link and update blurb

2014-08-22 Thread Igal Levy
Could someone with edit rights in the wiki please fix the broken link for
Metamarkets that's on this page:

https://cwiki.apache.org/confluence/display/KAFKA/Powered+By

The right URL is: http://metamarkets.com

Also, if you could update the Metamarkets blurb to:

"We use Kafka to ingest real-time event data, stream it to Storm and
Hadoop, and then serve it from our Druid cluster to feed our interactive
analytics dashboards. We've also built connectors
(http://druid.io/docs/latest/Kafka-Eight.html) for directly
ingesting events from Kafka into Druid."

Thanks!

Igal Levy
Documentation | METAMARKETS
m 925.487.9452 | t @textractor
igal.l...@metamarkets.com


Re: Kafka build for Scala 2.11

2014-08-22 Thread Joe Stein
The changes are committed to trunk.  We didn't create the patch for 0.8.1.1
since there were code changes required and we dropped support for Scala 2.8
(so we could just upload the artifacts without a vote).

https://issues.apache.org/jira/secure/attachment/12660369/KAFKA-1419_2014-08-07_10%3A52%3A18.patch
is the version you want.

If this is pressing for folks who can't wait for 0.8.2 or don't want to
upgrade right away, then doing a 0.8.1.2 release is an option... maybe with some
other things too, e.g. empty source jars. I would prepare and vote on it
if others would too.

/***
Joe Stein
Founder, Principal Consultant
Big Data Open Source Security LLC
http://www.stealth.ly
Twitter: @allthingshadoop
/
On Aug 22, 2014 1:03 PM, "Seshadri, Balaji" 
wrote:

> Hi Team,
>
> We are trying to compile 0.8.1.1 with Scala 2.11 and it's giving me
> compilation errors.
>
> Please let me know which patch I should apply from the JIRA below. I tried
> the latest one and it failed to apply.
>
> https://issues.apache.org/jira/browse/KAFKA-1419
>
> Thanks,
>
> Balaji
>


Re: Kafka build for Scala 2.11

2014-08-22 Thread Jonathan Weeks
I hand-applied this patch https://reviews.apache.org/r/23895/diff/ to the kafka 
0.8.1.1 branch and was able to build successfully:

gradlew -PscalaVersion=2.11.2 -PscalaCompileOptions.useAnt=false releaseTarGz -x signArchives

I am testing the jar now, and will let you know if I can run 
producers/consumers against a vanilla 0.8.1.1 broker cluster with it...

-Jonathan

On Aug 22, 2014, at 11:02 AM, Seshadri, Balaji  wrote:

> Hi Team,
> 
> We are trying to compile 0.8.1.1 with Scala 2.11 and it's giving me
> compilation errors.
> 
> Please let me know which patch I should apply from the JIRA below. I tried
> the latest one and it failed to apply.
> 
> https://issues.apache.org/jira/browse/KAFKA-1419
> 
> Thanks,
> 
> Balaji



Kafka build for Scala 2.11

2014-08-22 Thread Seshadri, Balaji
Hi Team,

We are trying to compile 0.8.1.1 with Scala 2.11 and it's giving me compilation
errors.

Please let me know which patch I should apply from the JIRA below. I tried the
latest one and it failed to apply.

https://issues.apache.org/jira/browse/KAFKA-1419

Thanks,

Balaji


Re: doubt regarding High level consumer committing offset

2014-08-22 Thread Guozhang Wang
Hello Arjun,

Currently there is no easy way to re-consume a message that has already
been returned to you and caused an exception; what you need to do is
close the current consumer when handling the exception WITHOUT committing the
offset, fix the issue that caused the exception, and then restart the
consumer.

Guozhang


On Fri, Aug 22, 2014 at 7:59 AM, Arjun  wrote:

> The way I am doing this is as below.
>
> First I get a map of topic to stream list:
>   Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap =
>       consumer.createMessageStreams(topicCountMap);
>
> From that I get the stream list for my topic:
>   List<KafkaStream<byte[], byte[]>> stream = consumerMap.get(topic);
>
> I iterate on the list, and for each stream I create a new thread and run in
> that thread.
>
> In each thread I get the iterator for the stream:
>   ConsumerIterator<byte[], byte[]> it = stream.iterator();
> Then I iterate on that stream:
>   while (it.hasNext()) { // Blocking call.
>       MessageAndMetadata<byte[], byte[]> message = it.next();
>       try {
>           handleMessage(message);
>           consumer.commitOffsets();
>       } catch (Throwable e) {
>           logger.error("Failed to process", e);
>       }
>   }
>
> While in 0.7 this used to get stuck when there was an error; we used to fix
> the thing and restart the app, and then processing went on fine. Is this
> not the case now? As explained earlier, while testing we found that the
> consumer consumes the next message, and if the next message gets processed well,
> then all the previous messages are gone. They are in the queue, but we
> don't even know the message offsets to get them, and I just read we can't
> even get them even if we have the offsets.
>
> Please let me know whether my understanding is correct or not.
>
> Thanks
> Arjun Narasimha Kota
>
>
>
>
>
>
> On Friday 22 August 2014 04:48 PM, Arjun wrote:
>
>> Hi,
>>
>> I have my Kafka setup with 0.8.1, with 3 producers and 3 consumers.
>> I use the high level consumer. My doubt is: I read the messages from the
>> consumer iterator, and after reading a message I process it;
>> if the processing throws any exception I do not commit the offset. If not,
>> then I commit the offset.
>>
>> Example: let's say I have 100 messages in the queue.
>>
>>   After 90 messages, the 91st message is read, but while processing (my
>> code) there is some exception and I haven't committed the offset.
>>   Then in this case, if I call consumer iterator.next, will it give me the
>> next message, i.e. the 92nd message, or not?
>>   And if the 92nd message is served, and its processing is done
>> smoothly, and if I commit the offset, can I get the 91st message again?
>>
>>
>> thanks
>> Arjun Narasimha Kota
>>
>
>


-- 
-- Guozhang
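Guozhang's advice can be sketched with the 0.8 high-level consumer, following Arjun's loop: auto-commit is disabled so the only committed offsets are those committed after successful processing, and on failure the consumer shuts down without committing, so the failed message is re-delivered after a restart. The topic name, group id, ZooKeeper address, and handleMessage body below are placeholders, not details from the thread; this needs the 0.8 kafka jars and a running cluster, so treat it as a sketch.

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.MessageAndMetadata;

public class CommitOnSuccessConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181"); // placeholder
        props.put("group.id", "my-group");                // placeholder
        props.put("auto.commit.enable", "false");         // commit manually, only on success
        ConsumerConnector consumer =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                consumer.createMessageStreams(Collections.singletonMap("my-topic", 1));
        ConsumerIterator<byte[], byte[]> it = streams.get("my-topic").get(0).iterator();

        while (it.hasNext()) {
            MessageAndMetadata<byte[], byte[]> message = it.next();
            try {
                handleMessage(message);   // your processing logic (placeholder)
                consumer.commitOffsets(); // commit only after success
            } catch (Exception e) {
                // Shut down WITHOUT committing, fix the problem, then restart:
                // on restart, consumption resumes from the last committed offset,
                // so this failed message is delivered again.
                consumer.shutdown();
                throw new RuntimeException(e);
            }
        }
    }

    static void handleMessage(MessageAndMetadata<byte[], byte[]> m) { /* placeholder */ }
}
```

One caveat worth knowing: `commitOffsets()` commits the current consumed position of every stream owned by the connector, not just the one message, so with multiple threads per connector a per-message commit is coarser than it looks.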


Re: doubt regarding High level consumer committing offset

2014-08-22 Thread Arjun

The way I am doing this is as below.

First I get a map of topic to stream list:
  Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap =
      consumer.createMessageStreams(topicCountMap);

From that I get the stream list for my topic:
  List<KafkaStream<byte[], byte[]>> stream = consumerMap.get(topic);

I iterate on the list, and for each stream I create a new thread and run in
that thread.

In each thread I get the iterator for the stream:
  ConsumerIterator<byte[], byte[]> it = stream.iterator();
Then I iterate on that stream:
  while (it.hasNext()) { // Blocking call.
      MessageAndMetadata<byte[], byte[]> message = it.next();
      try {
          handleMessage(message);
          consumer.commitOffsets();
      } catch (Throwable e) {
          logger.error("Failed to process", e);
      }
  }

While in 0.7 this used to get stuck when there was an error; we used to
fix the thing and restart the app, and then processing went on fine. Is
this not the case now? As explained earlier, while testing we found
that the consumer consumes the next message, and if the next message gets
processed well, then all the previous messages are gone. They are in the
queue, but we don't even know the message offsets to get them, and I just
read we can't even get them even if we have the offsets.

Please let me know whether my understanding is correct or not.

Thanks
Arjun Narasimha Kota





On Friday 22 August 2014 04:48 PM, Arjun wrote:

Hi,

I have my Kafka setup with 0.8.1, with 3 producers and 3
consumers. I use the high level consumer. My doubt is: I read the
messages from the consumer iterator, and after reading a message I
process it; if the processing throws any exception I do
not commit any offset. If not, then I commit the offset.


Example: let's say I have 100 messages in the queue.

  After 90 messages, the 91st message is read, but while processing (my
code) there is some exception and I haven't committed the offset.
  Then in this case, if I call consumer iterator.next, will it give me
the next message, i.e. the 92nd message, or not?
  And if the 92nd message is served, and its processing is done
smoothly, and if I commit the offset, can I get the 91st message again?



thanks
Arjun Narasimha Kota




reading messages with SimpleConsumer: cannot start from an assigned offset

2014-08-22 Thread chenlax
The kafka version is 0.8.0; the code is as in
https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+SimpleConsumer+Example

When I assign an offset, it throws
java.lang.ArrayIndexOutOfBoundsException: 0

java.lang.ArrayIndexOutOfBoundsException: 0
at 
com.chen.test.consumer.SimpleConsumerClient.getLastOffset(SimpleConsumerClient.java:164)
at 
com.chen.test.consumer.SimpleConsumerClient.run(SimpleConsumerClient.java:80)
at 
com.chen.test.consumer.SimpleConsumerClient.main(SimpleConsumerClient.java:48)
##EarliestTime : -2 || -1  The Start Offset : -2
Error fetching data from the Broker:10.209.22.206 Reason: 1

how can I get the message at the offset I assigned?

Thanks,
Lax
  

doubt regarding High level consumer committing offset

2014-08-22 Thread Arjun

Hi,

I have my Kafka setup with 0.8.1, with 3 producers and 3 consumers.
I use the high level consumer. My doubt is: I read the messages from
the consumer iterator, and after reading a message I process it;
if the processing throws any exception I do not commit any
offset. If not, then I commit the offset.


Example: let's say I have 100 messages in the queue.

  After 90 messages, the 91st message is read, but while processing (my
code) there is some exception and I haven't committed the offset.
  Then in this case, if I call consumer iterator.next, will it give me
the next message, i.e. the 92nd message, or not?
  And if the 92nd message is served, and its processing is done
smoothly, and if I commit the offset, can I get the 91st message again?



thanks
Arjun Narasimha Kota